From wes.turner at gmail.com Mon Dec 1 00:10:53 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 30 Nov 2014 17:10:53 -0600 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <20141130115557.427918a2@limelight.wooz.org> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> Message-ID: On Sun, Nov 30, 2014 at 10:55 AM, Barry Warsaw wrote: > On Nov 30, 2014, at 09:54 AM, Ian Cordasco wrote: > > >- Migrating "data" from GitHub is easy. There are free-as-in-freedom > >tools to do it and the only cost is the time it would take to monitor > >the process > > *Extracting* data may be easy, but migrating it is a different story. As > the > Mailman project has seen in trying to migrate from Confluence to Moin, > there > is a ton of very difficult work involved after extracting the data. > Parsing > the data, ensuring that you have all the bits you need, fitting it into the > new system's schema, working out the edge cases, adapting to semantic > differences and gaps, ensuring that all the old links are redirected, and > so > on, were all exceedingly difficult[*]. > The GitHub API is currently at Version 3. These may be useful references for the PEP: https://developer.github.com/v3/ https://developer.github.com/libraries/ https://github.com/jaxbot/github-issues.vim (:Gissues) https://developer.github.com/webhooks/ There are integrations for many platforms here: https://zapier.com/developer/documentation/ https://zapier.com/zapbook/apps/#sort=popular&filter=developer-tools > > Even converting between two FLOSS tools is an amazing amount of work. > Look at > what Eric Raymond did with reposurgeon to convert from Bazaar to git. > > It's a good thing that your data isn't locked behind a proprietary door, > for > now. That's only part of the story. 
But also, because github is a closed > system, there's no guarantee that today's data-freeing APIs will still > exist, > continue to be functional for practical purposes, remain complete, or stay > at > parity with new features. > > Cheers, > -Barry > > [*] And our huge gratitude goes to Paul Boddie for his amazing amount of > work > on the project. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Mon Dec 1 00:38:04 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 30 Nov 2014 18:38:04 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> Message-ID: On 11/30/2014 1:05 PM, Guido van Rossum wrote: > I don't feel it's my job to accept or reject this PEP, but I do have an > opinion. ... > - I am basically the only remaining active PEP editor, so I see most PEP > contributions by non-core-committers. Almost all of these uses github. > Not bitbucket, not some other git host, but github. I spend a fair > amount of time applying patches. It would most definitely be easier if I > could get them to send me pull requests. The scope of the PEP is still apparently somewhat fluid. I said elsewhere that I think the principal maintainers of a specialized single-branch repository should have the principal say in where it lives. So I think you should be the one to decide on a PEP limited to moving the PEP repository. 
My understanding is that if the peps were moved to github, then I would be able to suggest changes via an online web form that would generate a pull request from edited text. If so, I would say go ahead and move them and see how it goes. To me, the multibranch CPython repository is a very different issue. I think it should stay where it is for now, especially with 2.7 support extended. I think for this we should instead focus on making better use of developer time and on getting more developers active. -- Terry Jan Reedy From tjreedy at udel.edu Mon Dec 1 00:43:34 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 30 Nov 2014 18:43:34 -0500 Subject: [Python-Dev] Unicode decode exception In-Reply-To: References: Message-ID: On 11/30/2014 3:07 AM, balaji marisetti wrote: > Hi, > > When I try to iterate through the lines of a > file("openssl-1.0.1j/crypto/bn/asm/x86_64-gcc.c"), I get a > UnicodeDecodeError (in python 3.4.0 on Ubuntu 14.04). But there is no > such error with python 2.7.6. What could be the problem? Questions about using the current version should be directed to python-list or other support forums. Python-dev is for development of future versions and releases. -- Terry Jan Reedy From tjreedy at udel.edu Mon Dec 1 00:41:52 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 30 Nov 2014 18:41:52 -0500 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: <547B6FBE.8040205@stoneleaf.us> References: <547B6FBE.8040205@stoneleaf.us> Message-ID: On 11/30/2014 2:27 PM, Ethan Furman wrote: > On 11/30/2014 11:15 AM, Guido van Rossum wrote: >> On Sun, Nov 30, 2014 at 6:15 AM, Brett Cannon wrote: >>> On Sat, Nov 29, 2014, 21:55 Guido van Rossum wrote: >>>> >>>> All the use cases seem to be about adding some kind of getattr hook >>>> to modules. They all seem to involve modifying the CPython C code >>>> anyway.
So why not tackle that problem head-on and modify module_getattro() >>>> to look for a global named __getattr__ and if it exists, call that instead >>>> of raising AttributeError? >>> >>> Not sure if anyone thought of it. :) Seems like a reasonable solution to me. >>> Be curious to know what the benchmark suite said the impact was. >> >> Why would there be any impact? The __getattr__ hook would be similar to the >> one on classes -- it's only invoked at the point where otherwise AttributeError >> would be raised. > > I think the bigger question is how do we support it back on 2.7? I do not understand this question. We don't add new features to 2.7 and this definitely is one. -- Terry Jan Reedy From ben+python at benfinney.id.au Mon Dec 1 01:17:02 2014 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 01 Dec 2014 11:17:02 +1100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> Message-ID: <85zjb8usu9.fsf@benfinney.id.au> Donald Stufft writes: > I have never heard of git losing history. In my experience talking with Git users about this problem, that depends on a very narrow definition of "losing history". Git encourages re-writing, and thereby losing prior versions of, the history of a branch. The commit information remains, but the history of how they link together is lost. That is a loss of information, which is not the case in the absence of such history re-writing. Git users differ in whether they consider that information loss important; but it is, objectively, losing history information. So Ethan's impression is correct on this point.
-- \ "If you see an animal and you can't tell if it's a skunk or a | `\ cat, here's a good saying to help: 'Black and white, stinks all | _o__) right. Tabby-colored, likes a fella.'" —Jack Handey | Ben Finney From donald at stufft.io Mon Dec 1 01:30:59 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Nov 2014 19:30:59 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <85zjb8usu9.fsf@benfinney.id.au> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> Message-ID: > On Nov 30, 2014, at 7:17 PM, Ben Finney wrote: > > Donald Stufft writes: > >> I have never heard of git losing history. > > In my experience talking with Git users about this problem, that depends > on a very narrow definition of "losing history". > > Git encourages re-writing, and thereby losing prior versions of, the > history of a branch. The commit information remains, but the history of > how they link together is lost. That is a loss of information, which is > not the case in the absence of such history re-writing. > > Git users differ in whether they consider that information loss > important; but it is, objectively, losing history information. So > Ethan's impression is correct on this point. 
It's not lost, the only thing that's "gone" is a pointer to the HEAD commit of that branch. Each commit points to its parent commit so if you find the HEAD and give it a name you'll restore the branch. It just so happens inside the reflog you'll see a list of the old HEADs of branches so you can get the old commit ID from the HEAD there. 
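Donald's description here (a branch is just a named pointer to a HEAD commit, and every commit records its parent) can be illustrated with a small toy model in Python. This is a deliberate simplification for illustration only, not git's real content-addressed object store, and it ignores the garbage collection of unreachable commits that he mentions below:

```python
class Commit:
    """A toy commit: a message plus a link to its parent commit."""
    def __init__(self, message, parent=None):
        self.message = message
        self.parent = parent

branches = {}  # branch name -> HEAD commit (like .git/refs/heads/*)
reflog = []    # every HEAD ever assigned (a crude stand-in for git's reflog)

def commit(branch, message):
    new_head = Commit(message, parent=branches.get(branch))
    branches[branch] = new_head
    reflog.append(new_head)
    return new_head

commit("topic", "first")
old_head = commit("topic", "second")

# "Deleting the branch" removes only the name; the commits, and the
# parent links between them, are untouched.
del branches["topic"]

# Any surviving reference to the old HEAD (the reflog here) restores
# the branch in full, because history lives in the commits themselves.
branches["topic"] = reflog[-1]

history = []
node = branches["topic"]
while node is not None:
    history.append(node.message)
    node = node.parent

print(history)  # ['second', 'first']
```

The point of the sketch is that nothing about the branch name is needed to walk the history; the name is only an entry point.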
In addition depending on how you rewrote the branch and if you did anything else there is likely a reference to the old head at ORIG_HEAD. If you don't have the reflog (this is per copy of the repository, so a different computer or deleting the repo and recreating it will lose it) and for similar reasons you don't have the ORIG_HEAD, if you have any reference to the previous HEAD (email, commit messages, whatever) that's enough to restore it assuming that the commits have not been garbage collected yet (which happens in 90 days or 30 days depending on what kind of unreferenced commit it is) you can restore it. The important thing to realize is that a "branch" isn't anything special in git. All a branch does is act as a sort of symlink to a commit ID. Anything more beyond "what is the HEAD commit in this branch" is stored as part of the commits themselves and doesn't rely on the branch to be named. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ben+python at benfinney.id.au Mon Dec 1 01:43:10 2014 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 01 Dec 2014 11:43:10 +1100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> Message-ID: <85sih0urmp.fsf@benfinney.id.au> Donald Stufft writes: > It's not lost, [… a long, presumably-accurate discourse of the many > conditions that must be met before …] you can restore it. This isn't the place to discuss the details of Git's internals, I think. I'm merely pointing out that: > The important thing to realize is that a "branch" isn't anything > special in git. Because of that, Ethan's impression —
that Git's default behaviour encourages losing history (by re-writing the history of commits to be other than what they were) is true, and "Git never loses history" simply isn't true. Whether that is a *problem* is a matter of debate, but the fact that Git's common workflow commonly discards information that some consider valuable, is a simple fact. If Ethan chooses to make that a factor in his decisions about Git, the facts are on his side. -- \ "One of the most important things you learn from the internet | `\ is that there is no 'them' out there. It's just an awful lot of | _o__) 'us'." —Douglas Adams | Ben Finney From donald at stufft.io Mon Dec 1 01:50:37 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Nov 2014 19:50:37 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <85sih0urmp.fsf@benfinney.id.au> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> Message-ID: <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> > On Nov 30, 2014, at 7:43 PM, Ben Finney wrote: > > Donald Stufft writes: > >> It's not lost, [… a long, presumably-accurate discourse of the many >> conditions that must be met before …] you can restore it. > > This isn't the place to discuss the details of Git's internals, I think. > I'm merely pointing out that: > >> The important thing to realize is that a "branch" isn't anything >> special in git. > > Because of that, Ethan's impression — that Git's default behaviour > encourages losing history (by re-writing the history of commits to be > other than what they were) is true, and "Git never loses history" simply > isn't true. 
> > Whether that is a *problem* is a matter of debate, but the fact that > Git's common workflow commonly discards information that some consider > valuable, is a simple fact. > > If Ethan chooses to make that a factor in his decisions about Git, the > facts are on his side. Except it's not true at all. That data is all still there if you want it to exist and it's not a real differentiator between Mercurial and git because Mercurial has the ability to do the same thing. Never mind the fact that "lose" your history makes it sound accidental instead of on purpose. It's like saying that ``rm foo.txt`` will "lose" the data in foo.txt. So either it was a misunderstanding in which case I wanted to point out that those operations don't magically lose information or it's a purposely FUDish statement in which case I want to point out that the statement is inaccurate. The only thing that is true is that git users are more likely to use the ability to rewrite history than Mercurial users are, but you'll typically find that people generally don't do this on public branches, only on private branches. Which again doesn't make much sense in this context since generally currently the way people are using Mercurial with CPython you're using patches to transfer the changes from the contributor to the committer so you're "losing" that history regardless. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From njs at pobox.com Mon Dec 1 01:59:19 2014 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 1 Dec 2014 00:59:19 +0000 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: <547B96D8.7050700@hotpy.org> References: <547B96D8.7050700@hotpy.org> Message-ID: On Sun, Nov 30, 2014 at 10:14 PM, Mark Shannon wrote: > Hi, > > This discussion has been going on for a while, but no one has questioned the > basic premise. Does this need any change to the language or interpreter? > > I believe it does not. 
I've modified your original metamodule.py to not use > ctypes and support reloading: > https://gist.github.com/markshannon/1868e7e6115d70ce6e76 Interesting approach! As written, your code will blow up on any Python < 3.4, because when old_module gets deallocated it'll wipe the module dict clean. And I guess even on >=3.4, this might still happen if old_module somehow manages to get itself into a reference loop before getting deallocated. (Hopefully not, but what a nightmare to debug if it did.) However, both of these issues can be fixed by stashing a reference to old_module somewhere in new_module. The __class__ = ModuleType trick is super-clever but makes me irrationally uncomfortable. I know that this is documented as a valid method of fooling isinstance(), but I didn't know that until yesterday, and the idea of objects where type(foo) is not foo.__class__ strikes me as somewhat blasphemous. Maybe this is all fine though. The pseudo-module objects generated this way still won't pass PyModule_Check, so in theory this could produce behavioural differences. I can't name any specific places where this will break things, though. From a quick skim of the CPython source, a few observations: It means the PyModule_* API functions won't work (e.g. PyModule_GetDict); maybe these aren't used enough to matter. It looks like the __reduce__ methods on "method objects" (Objects/methodobject.c) have a special check for ->m_self being a module object, and won't pickle correctly if ->m_self ends up pointing to one of these pseudo-modules. I have no idea how one ends up with a method whose ->m_self points to a module object, though -- maybe it never actually happens. PyImport_Cleanup treats module objects differently from non-module objects during shutdown. I guess it also has the mild limitation that it doesn't work with extension modules, but eh. Mostly I'd be nervous about the two points above. -n -- Nathaniel J. 
Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From njs at pobox.com Mon Dec 1 02:02:11 2014 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 1 Dec 2014 01:02:11 +0000 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: References: <547B96D8.7050700@hotpy.org> Message-ID: On Mon, Dec 1, 2014 at 12:59 AM, Nathaniel Smith wrote: > On Sun, Nov 30, 2014 at 10:14 PM, Mark Shannon wrote: >> Hi, >> >> This discussion has been going on for a while, but no one has questioned the >> basic premise. Does this needs any change to the language or interpreter? >> >> I believe it does not. I'm modified your original metamodule.py to not use >> ctypes and support reloading: >> https://gist.github.com/markshannon/1868e7e6115d70ce6e76 > > Interesting approach! > > As written, your code will blow up on any python < 3.4, because when > old_module gets deallocated it'll wipe the module dict clean. And I > guess even on >=3.4, this might still happen if old_module somehow > manages to get itself into a reference loop before getting > deallocated. (Hopefully not, but what a nightmare to debug if it did.) > However, both of these issues can be fixed by stashing a reference to > old_module somewhere in new_module. > > The __class__ = ModuleType trick is super-clever but makes me > irrationally uncomfortable. I know that this is documented as a valid > method of fooling isinstance(), but I didn't know that until your > yesterday, and the idea of objects where type(foo) is not > foo.__class__ strikes me as somewhat blasphemous. Maybe this is all > fine though. > > The pseudo-module objects generated this way will still won't pass > PyModule_Check, so in theory this could produce behavioural > differences. I can't name any specific places where this will break > things, though. From a quick skim of the CPython source, a few > observations: It means the PyModule_* API functions won't work (e.g. 
> PyModule_GetDict); maybe these aren't used enough to matter. It looks > like the __reduce__ methods on "method objects" > (Objects/methodobject.c) have a special check for ->m_self being a > module object, and won't pickle correctly if ->m_self ends up pointing > to one of these pseudo-modules. I have no idea how one ends up with a > method whose ->m_self points to a module object, though -- maybe it > never actually happens. PyImport_Cleanup treats module objects > differently from non-module objects during shutdown. Actually, there is one showstopper here -- in the first version where reload() uses isinstance() is actually 3.4. Before that you need a real module subtype for reload to work. But this is in principle workaroundable by using subclassing + ctypes on old versions of python and the __class__ = hack on new versions. > I guess it also has the mild limitation that it doesn't work with > extension modules, but eh. Mostly I'd be nervous about the two points > above. > > -n > > -- > Nathaniel J. Smith > Postdoctoral researcher - Informatics - University of Edinburgh > http://vorpus.org -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From pierre-yves.david at ens-lyon.org Mon Dec 1 02:14:38 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 17:14:38 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> Message-ID: <547BC0FE.7080202@ens-lyon.org> On 11/30/2014 04:31 AM, Paul Moore wrote: > On 29 November 2014 at 23:27, Donald Stufft wrote: >> In previous years there was concern about how well supported git was on Windows >> in comparison to Mercurial. However git has grown to support Windows as a first >> class citizen. In addition to that, for Windows users who are not well aquanted >> with the Windows command line there are GUI options as well. 
> > I have little opinion on the PEP as a whole, but is the above > statement true? From the git website, version 2.2.0 is current, and > yet the downloadable Windows version is still 1.9.4. That's a fairly > significant version lag for a "first class citizen". > > I like git, and it has a number of windows-specific extensions that > are really useful (more than Mercurial, AFAIK), but I wouldn't say > that the core product supported Windows on an equal footing to Linux. I'm curious about these useful extensions. Can you elaborate? -- Pierre-Yves David From pierre-yves.david at ens-lyon.org Mon Dec 1 02:11:51 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 17:11:51 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> Message-ID: <547BC057.7080107@ens-lyon.org> On 11/30/2014 08:45 AM, Donald Stufft wrote: > I don't make branches in Mercurial because > I'm afraid I'm going to push a permanent branch to hg.python.org > and screw > something up. There is no need to be afraid there, Mercurial is not going to let you push a new head/branch unless you explicitly use `hg push --force`. If you are really paranoid about this, you can configure your Mercurial to mark all new commits as secret (not pushable) and explicitly mark commits that are ready to push as such. This can be achieved by adding [phases] new-commit=secret See `hg help phases` for details. 
-- Pierre-Yves David From pierre-yves.david at ens-lyon.org Mon Dec 1 02:19:46 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 17:19:46 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> Message-ID: <547BC232.4040008@ens-lyon.org> On 11/30/2014 04:31 AM, Paul Moore wrote: > On 29 November 2014 at 23:27, Donald Stufft wrote: >> >In previous years there was concern about how well supported git was on Windows >> >in comparison to Mercurial. However git has grown to support Windows as a first >> >class citizen. In addition to that, for Windows users who are not well acquainted >> >with the Windows command line there are GUI options as well. Mercurial has had robust Windows support for a long time. This support is native (not using cygwin) and handles all kinds of strange corner cases properly. We have a large-scale ecosystem (http://unity3d.com/) using Mercurial on Windows. We also have a full-featured GUI client, http://tortoisehg.bitbucket.org/. It is actively developed by people who stay in touch with the Mercurial upstream, so new features tend to land in the GUI really fast. -- Pierre-Yves David From donald at stufft.io Mon Dec 1 02:25:12 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Nov 2014 20:25:12 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <547BC057.7080107@ens-lyon.org> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547BC057.7080107@ens-lyon.org> Message-ID: > On Nov 30, 2014, at 8:11 PM, Pierre-Yves David wrote: > > > > On 11/30/2014 08:45 AM, Donald Stufft wrote: >> I don't make branches in Mercurial because >> I'm afraid I'm going to push a permanent branch to hg.python.org >> and screw >> something up. 
> > There is no need to be afraid there, Mercurial is not going to let you push a new head/branch unless you explicitly use `hg push --force`. > > If you are really paranoid about this, you can configure your Mercurial to mark all new commits as secret (not pushable) and explicitly mark commits that are ready to push as such. This can be achieved by adding > > [phases] > new-commit=secret > > See `hg help phases` for details. Yeah, Benjamin mentioned that the hg.python.org repositories have commit hooks to prevent that from happening too. To be clear, the fact I don't really know Mercurial very well isn't what I think is a compelling argument for not using Mercurial. It's mostly a tangent to this PEP, which is primarily focused on the "network effects" of using a more popular tool. The technical benefits mostly come from Github generally being a higher quality product than its competitors, both FOSS and not. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From guido at python.org Mon Dec 1 02:24:58 2014 From: guido at python.org (Guido van Rossum) Date: Sun, 30 Nov 2014 17:24:58 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: Can we please stop the hg-vs-git discussion? We've established earlier that the capabilities of the DVCS itself (hg or git) are not a differentiator, and further he-said-she-said isn't going to change anybody's opinion. 
What's left is preferences of core developers, possibly capabilities of the popular websites (though BitBucket vs. GitHub seems to be a wash as well), and preferences of contributors who aren't core developers (using popularity as a proxy). It seems the preferences of the core developers are mixed, while the preferences of non-core contributors are pretty clear, so we have a problem weighing these two appropriately. Also, let's not get distracted by the needs of the CPython repo, issue tracker, and code review tool. Arguments about core developers vs. contributors for CPython shouldn't affect the current discussion. Next, two of the three repos mentioned in Donald's PEP 481 are owned by Brett Cannon, according to the Contact column listed on hg.python.org. I propose to let Brett choose whether to keep these on hg.python.org, move to BitBucket, or move to GitHub. @Brett, what say you? (Apart from "I'm tired of the whole thread." :-) The third one is the peps repo, which has python-dev at python.org as Contact. It turns out that Nick is by far the largest contributor (he committed 215 of the most recent 1000 changes) so I'll let him choose. Finally, I'd like to get a few more volunteers for the PEP editors list, preferably non-core devs: the core devs are already spread too thinly, and I really shouldn't be the one who picks new PEP numbers and checks that PEPs are well-formed according to the rules of PEP 1. A PEP editor shouldn't have to pass judgment on the contents of a PEP (though they may choose to correct spelling and grammar). Knowledge of Mercurial is a plus. :-) On Sun, Nov 30, 2014 at 4:50 PM, Donald Stufft wrote: > > > On Nov 30, 2014, at 7:43 PM, Ben Finney > wrote: > > > > Donald Stufft writes: > > > >> It?s not lost, [? a long, presumably-accurate discourse of the many > >> conditions that must be met before ?] you can restore it. > > > > This isn't the place to discuss the details of Git's internals, I think. 
> > I'm merely pointing out that: > > > >> The important thing to realize is that a ?branch? isn?t anything > >> special in git. > > > > Because of that, Ethan's impression ? that Git's default behaviour > > encourages losing history (by re-writing the history of commits to be > > other than what they were) is true, and ?Git never loses history? simply > > isn't true. > > > > Whether that is a *problem* is a matter of debate, but the fact that > > Git's common workflow commonly discards information that some consider > > valuable, is a simple fact. > > > > If Ethan chooses to make that a factor in his decisions about Git, the > > facts are on his side. > > Except it?s not true at all. > > That data is all still there if you want it to exist and it?s not a real > differentiator between Mercurial and git because Mercurial has the ability > to do the same thing. Never mind the fact that ?lose? your history makes it > sound accidental instead of on purpose. It?s like saying that ``rm > foo.txt`` > will ?lose? the data in foo.txt. So either it was a misunderstanding in > which case I wanted to point out that those operations don?t magically lose > information or it?s a purposely FUDish statement in which case I want to > point out that the statement is inaccurate. > > The only thing that is true is that git users are more likely to use the > ability to rewrite history than Mercurial users are, but you?ll typically > find that people generally don?t do this on public branches, only on > private > branches. Which again doesn?t make much sense in this context since > generally > currently the way people are using Mercurial with CPython you?re using > patches to transfer the changes from the contributor to the committer so > you?re > ?losing? that history regardless. 
> > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Dec 1 02:27:39 2014 From: guido at python.org (Guido van Rossum) Date: Sun, 30 Nov 2014 17:27:39 -0800 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: References: <547B96D8.7050700@hotpy.org> Message-ID: Nathaniel, did you look at Brett's LazyLoader? It overcomes the subclass issue by using a module loader that makes all modules instances of a (trivial) Module subclass. I'm sure this approach can be backported as far as you need to go. On Sun, Nov 30, 2014 at 5:02 PM, Nathaniel Smith wrote: > On Mon, Dec 1, 2014 at 12:59 AM, Nathaniel Smith wrote: > > On Sun, Nov 30, 2014 at 10:14 PM, Mark Shannon wrote: > >> Hi, > >> > >> This discussion has been going on for a while, but no one has > questioned the > >> basic premise. Does this needs any change to the language or > interpreter? > >> > >> I believe it does not. I'm modified your original metamodule.py to not > use > >> ctypes and support reloading: > >> https://gist.github.com/markshannon/1868e7e6115d70ce6e76 > > > > Interesting approach! > > > > As written, your code will blow up on any python < 3.4, because when > > old_module gets deallocated it'll wipe the module dict clean. And I > > guess even on >=3.4, this might still happen if old_module somehow > > manages to get itself into a reference loop before getting > > deallocated. (Hopefully not, but what a nightmare to debug if it did.) > > However, both of these issues can be fixed by stashing a reference to > > old_module somewhere in new_module. 
> > > > The __class__ = ModuleType trick is super-clever but makes me > > irrationally uncomfortable. I know that this is documented as a valid > > method of fooling isinstance(), but I didn't know that until > > yesterday, and the idea of objects where type(foo) is not > > foo.__class__ strikes me as somewhat blasphemous. Maybe this is all > > fine though. > > > > The pseudo-module objects generated this way still won't pass > > PyModule_Check, so in theory this could produce behavioural > > differences. I can't name any specific places where this will break > > things, though. From a quick skim of the CPython source, a few > > observations: It means the PyModule_* API functions won't work (e.g. > > PyModule_GetDict); maybe these aren't used enough to matter. It looks > > like the __reduce__ methods on "method objects" > > (Objects/methodobject.c) have a special check for ->m_self being a > > module object, and won't pickle correctly if ->m_self ends up pointing > > to one of these pseudo-modules. I have no idea how one ends up with a > > method whose ->m_self points to a module object, though -- maybe it > > never actually happens. PyImport_Cleanup treats module objects > > differently from non-module objects during shutdown. > > Actually, there is one showstopper here -- the first version where > reload() uses isinstance() is actually 3.4. Before that you need a > real module subtype for reload to work. But this is in principle > workaroundable by using subclassing + ctypes on old versions of python > and the __class__ = hack on new versions. > > > I guess it also has the mild limitation that it doesn't work with > > extension modules, but eh. Mostly I'd be nervous about the two points > > above. > > > > -n > > > > -- > > Nathaniel J. Smith > > Postdoctoral researcher - Informatics - University of Edinburgh > > http://vorpus.org > > > > -- > Nathaniel J.
Smith > Postdoctoral researcher - Informatics - University of Edinburgh > http://vorpus.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus at unterwaditzer.net Mon Dec 1 02:27:10 2014 From: markus at unterwaditzer.net (Markus Unterwaditzer) Date: Mon, 01 Dec 2014 02:27:10 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <85zjb8usu9.fsf@benfinney.id.au> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> Message-ID: On 1 December 2014 01:17:02 CET, Ben Finney wrote: >Donald Stufft writes: > >> I have never heard of git losing history. > >In my experience talking with Git users about this problem, that >depends >on a very narrow definition of "losing history". > >Git encourages re-writing, and thereby losing prior versions of, the >history of a branch. The commit information remains, but the history of >how they link together is lost. That is a loss of information, which is >not the case in the absence of such history re-writing. "Losing data" is generally used in the sense that either the application or the filesystem accidentally deletes or overwrites data without the user's consent or knowledge. Rewriting and deleting (not "losing") history in git is explicitly done by the user, encouraged or not.
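The distinction both sides are drawing can be made concrete in a throwaway repository: a history rewrite such as `git commit --amend` drops the old commit from the branch's log, but the commit object itself survives (for a while) in the reflog. A sketch, assuming a `git` binary is on PATH; the repo and identity used are made up for illustration:

```python
import subprocess
import tempfile

def git(*args, cwd):
    """Run git in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
ident = ["-c", "user.name=demo", "-c", "user.email=demo@example.org"]
git(*ident, "commit", "-q", "--allow-empty", "-m", "original", cwd=repo)
# Rewrite history: replace the commit instead of adding a new one.
git(*ident, "commit", "-q", "--allow-empty", "--amend", "-m", "rewritten", cwd=repo)

print(git("log", "--oneline", cwd=repo))  # only "rewritten" is on the branch now
print(git("reflog", cwd=repo))            # ...but "original" is still reachable here
```

So the commit data is recoverable, while the branch's prior shape is indeed gone from the normal history — which is roughly the point each side is making.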
-- Markus From njs at pobox.com Mon Dec 1 02:30:32 2014 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 1 Dec 2014 01:30:32 +0000 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: References: Message-ID: On Sun, Nov 30, 2014 at 8:54 PM, Guido van Rossum wrote: > On Sun, Nov 30, 2014 at 11:29 AM, Nathaniel Smith wrote: >> >> On Sun, Nov 30, 2014 at 2:54 AM, Guido van Rossum >> wrote: >> > All the use cases seem to be about adding some kind of getattr hook to >> > modules. They all seem to involve modifying the CPython C code anyway. >> > So >> > why not tackle that problem head-on and modify module_getattro() to look >> > for >> > a global named __getattr__ and if it exists, call that instead of >> > raising >> > AttributeError? >> >> You need to allow overriding __dir__ as well for tab-completion, and >> some people wanted to use the properties API instead of raw >> __getattr__, etc. Maybe someone will want __getattribute__ semantics, >> I dunno. > > Hm... I agree about __dir__ but the other things feel too speculative. > >> So since we're *so close* to being able to just use the >> subclassing machinery, it seemed cleaner to try and get that working >> instead of reimplementing bits of it piecewise. > > That would really be option 1, right? It's the one that looks cleanest from > the user's POV (or at least from the POV of a developer who wants to build a > framework using this feature -- for a simple one-off use case, __getattr__ > sounds pretty attractive). I think that if we really want option 1, the > issue of PyModuleType not being a heap type can be dealt with. Options 1-4 all have the effect of making it fairly simple to slot an arbitrary user-defined module subclass into sys.modules. Option 1 is the cleanest API though :-). >> >> That said, __getattr__ + __dir__ would be enough for my immediate use >> cases. 
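For what it's worth, module-level `__getattr__` and `__dir__` hooks of exactly this shape were later standardized as PEP 562 and shipped in Python 3.7. A minimal sketch of the behaviour — the `demomod` module and its `OLD_CONSTANT` attribute are made-up names for illustration:

```python
import os
import sys
import tempfile

# A module that hooks attribute access at module level (PEP 562, Python 3.7+).
SRC = '''
import warnings

def __getattr__(name):
    if name == "OLD_CONSTANT":
        warnings.warn("OLD_CONSTANT is deprecated", DeprecationWarning)
        return 42
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

def __dir__():
    return sorted(set(globals()) | {"OLD_CONSTANT"})
'''

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demomod.py"), "w") as f:
    f.write(SRC)
sys.path.insert(0, tmp)

import demomod

print(demomod.OLD_CONSTANT)            # 42 (a DeprecationWarning is raised,
                                       # though default filters may hide it)
print("OLD_CONSTANT" in dir(demomod))  # True
```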
> > > Perhaps it would be a good exercise to try and write the "lazy submodule > import"(*) use case three ways: (a) using only CPython 3.4; (b) using > __class__ assignment; (c) using customizable __getattr__ and __dir__. I > think we can learn a lot about the alternatives from this exercise. I > presume there's already a version of (a) floating around, but if it's been > used in practice at all, it's probably too gnarly to serve as a useful > comparison (though its essence may be extracted to serve as such). (b) and (c) are very straightforward and trivial. Probably I could do a better job of faking dir()'s default behaviour on modules, but basically:

##### __class__ assignment #####

import sys
import types
import importlib

class MyModule(types.ModuleType):
    def __getattr__(self, name):
        if name in _lazy_submodules:
            # implicitly assigns the submodule to self.__dict__[name]
            return importlib.import_module("." + name, package=self.__package__)
        raise AttributeError(name)

    def __dir__(self):
        entries = set(self.__dict__)
        entries.update(_lazy_submodules)
        return sorted(entries)

sys.modules[__name__].__class__ = MyModule
_lazy_submodules = {"foo", "bar"}

##### customizable __getattr__ and __dir__ #####

import importlib

def __getattr__(name):
    if name in _lazy_submodules:
        # implicitly assigns the submodule to globals()[name]
        return importlib.import_module("." + name, package=__package__)
    raise AttributeError(name)

def __dir__():
    entries = set(globals())
    entries.update(_lazy_submodules)
    return sorted(entries)

_lazy_submodules = {"foo", "bar"}

> FWIW I believe all proposals here have a big limitation: the module *itself* > cannot benefit much from all these shenanigans, because references to > globals from within the module's own code are just dictionary accesses, and > we don't want to change that. I think that's fine -- IMHO the main use cases here are about controlling the public API.
And a module that really wants to can always import itself if it wants to pull more shenanigans :-) (i.e., foo/__init__.py can do "import foo; foo.blahblah" instead of just "blahblah".) -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From donald at stufft.io Mon Dec 1 02:40:12 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Nov 2014 20:40:12 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: <68082201-601A-4A17-9CE4-EC1E07B5545C@stufft.io> > On Nov 30, 2014, at 8:24 PM, Guido van Rossum wrote: > > Can we please stop the hg-vs-git discussion? We've established earlier that the capabilities of the DVCS itself (hg or git) are not a differentiator, and further he-said-she-said isn't going to change anybody's opinion. > > What's left is preferences of core developers, possibly capabilities of the popular websites (though BitBucket vs. GitHub seems to be a wash as well), and preferences of contributors who aren't core developers (using popularity as a proxy). It seems the preferences of the core developers are mixed, while the preferences of non-core contributors are pretty clear, so we have a problem weighing these two appropriately. > > Also, let's not get distracted by the needs of the CPython repo, issue tracker, and code review tool. Arguments about core developers vs. contributors for CPython shouldn't affect the current discussion. 
> > Next, two of the three repos mentioned in Donald's PEP 481 are owned by Brett Cannon, according to the Contact column listed on hg.python.org . I propose to let Brett choose whether to keep these on hg.python.org , move to BitBucket, or move to GitHub. @Brett, what say you? (Apart from "I'm tired of the whole thread." :-) > > The third one is the peps repo, which has python-dev at python.org as Contact. It turns out that Nick is by far the largest contributor (he committed 215 of the most recent 1000 changes) so I'll let him choose. > > Finally, I'd like to get a few more volunteers for the PEP editors list, preferably non-core devs: the core devs are already spread too thinly, and I really shouldn't be the one who picks new PEP numbers and checks that PEPs are well-formed according to the rules of PEP 1. A PEP editor shouldn't have to pass judgment on the contents of a PEP (though they may choose to correct spelling and grammar). Knowledge of Mercurial is a plus. :-) > I'm not sure if it got lost in the discussion or if it was purposely left out. However I did come up with another idea, where we enable people to make PRs against these repositories with PR integration within roundup. Using the fact that it's trivial to turn a PR into a patch, core contributors (and the "single source of truth") for the repositories can remain Mercurial, with core contributors needing to download a .patch file from Github instead of a .patch file from Roundup. This could allow non-committers to use git if they want, including PRs, but without moving things around. The obvious cost is that since the committer side of things is still using the existing tooling there's no "Merge button" or the other committer benefits of Github; it would strictly be enabling people who aren't committing directly to the repository to use git and Github. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ericsnowcurrently at gmail.com Mon Dec 1 02:41:58 2014 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sun, 30 Nov 2014 18:41:58 -0700 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547BC057.7080107@ens-lyon.org> Message-ID: On Sun, Nov 30, 2014 at 6:25 PM, Donald Stufft wrote: >The technical benefits mostly come from Github generally being a higher > quality product than its competitors, both FOSS and not. Here's a solution to allow contribution via PR while not requiring anything to switch VCS or hosting: 1. Set up mirrors of a desired repo on any hosting providers we choose. 2. Set up a webhook for PRs that automatically creates/re-uses a tracker ticket with the diff from the PR. The workflow does not change for the committer, but it gets easier to contribute. I did something like this for juju (https://github.com/juju/juju) when we switched to github, weren't satisfied with their code review tool, and switched to something else. We have a web hook that automatically creates a review request for new PRs and updates the review request when the PR gets updated. -eric From njs at pobox.com Mon Dec 1 02:42:13 2014 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 1 Dec 2014 01:42:13 +0000 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: References: <547B96D8.7050700@hotpy.org> Message-ID: On Mon, Dec 1, 2014 at 1:27 AM, Guido van Rossum wrote: > Nathaniel, did you look at Brett's LazyLoader? It overcomes the subclass > issue by using a module loader that makes all modules instances of a > (trivial) Module subclass. I'm sure this approach can be backported as far > as you need to go. The problem is that by the time your package's code starts running, it's too late to install such a loader.
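For reference, Brett's LazyLoader later landed in the stdlib as importlib.util.LazyLoader (Python 3.5+). The documented recipe looks roughly like this — and, as noted above, it has to be applied by the code doing the import, not retroactively by the imported package itself:

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose body only executes on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")
# The module object exists, but json's code hasn't actually run yet;
# the first attribute access below triggers the real import.
print(json.dumps({"a": 1}))
```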
Brett's strategy works well for lazy-loading submodules (e.g., making it so 'import numpy' makes 'numpy.testing' available, but without the speed hit of importing it immediately), but it doesn't help if you want to actually hook attribute access on your top-level package (e.g., making 'numpy.foo' trigger a DeprecationWarning -- we have a lot of stupid exported constants that we can never get rid of because our rules say that we have to deprecate things before removing them). Or maybe you're suggesting that we define a trivial heap-allocated subclass of PyModule_Type and use that everywhere, as a quick-and-dirty way to enable __class__ assignment? (E.g., return it from PyModule_New?) I considered this before but hesitated b/c it could potentially break backwards compatibility -- e.g. if code A creates a PyModule_Type object directly without going through PyModule_New, and then code B checks whether the resulting object is a module by doing isinstance(x, type(sys)), this will break. (type(sys) is a pretty common way to get a handle to ModuleType -- in fact both types.py and importlib use it.) So in my mind I sorta lumped it in with my Option 2, "minor compatibility break". OTOH maybe anyone who creates a module object without going through PyModule_New deserves whatever they get. -n > On Sun, Nov 30, 2014 at 5:02 PM, Nathaniel Smith wrote: >> >> On Mon, Dec 1, 2014 at 12:59 AM, Nathaniel Smith wrote: >> > On Sun, Nov 30, 2014 at 10:14 PM, Mark Shannon wrote: >> >> Hi, >> >> >> >> This discussion has been going on for a while, but no one has >> >> questioned the >> >> basic premise. Does this needs any change to the language or >> >> interpreter? >> >> >> >> I believe it does not. I'm modified your original metamodule.py to not >> >> use >> >> ctypes and support reloading: >> >> https://gist.github.com/markshannon/1868e7e6115d70ce6e76 >> > >> > Interesting approach! 
>> > >> > As written, your code will blow up on any python < 3.4, because when >> > old_module gets deallocated it'll wipe the module dict clean. And I >> > guess even on >=3.4, this might still happen if old_module somehow >> > manages to get itself into a reference loop before getting >> > deallocated. (Hopefully not, but what a nightmare to debug if it did.) >> > However, both of these issues can be fixed by stashing a reference to >> > old_module somewhere in new_module. >> > >> > The __class__ = ModuleType trick is super-clever but makes me >> > irrationally uncomfortable. I know that this is documented as a valid >> > method of fooling isinstance(), but I didn't know that until your >> > yesterday, and the idea of objects where type(foo) is not >> > foo.__class__ strikes me as somewhat blasphemous. Maybe this is all >> > fine though. >> > >> > The pseudo-module objects generated this way will still won't pass >> > PyModule_Check, so in theory this could produce behavioural >> > differences. I can't name any specific places where this will break >> > things, though. From a quick skim of the CPython source, a few >> > observations: It means the PyModule_* API functions won't work (e.g. >> > PyModule_GetDict); maybe these aren't used enough to matter. It looks >> > like the __reduce__ methods on "method objects" >> > (Objects/methodobject.c) have a special check for ->m_self being a >> > module object, and won't pickle correctly if ->m_self ends up pointing >> > to one of these pseudo-modules. I have no idea how one ends up with a >> > method whose ->m_self points to a module object, though -- maybe it >> > never actually happens. PyImport_Cleanup treats module objects >> > differently from non-module objects during shutdown. >> >> Actually, there is one showstopper here -- in the first version where >> reload() uses isinstance() is actually 3.4. Before that you need a >> real module subtype for reload to work. 
But this is in principle >> workaroundable by using subclassing + ctypes on old versions of python >> and the __class__ = hack on new versions. >> >> > I guess it also has the mild limitation that it doesn't work with >> > extension modules, but eh. Mostly I'd be nervous about the two points >> > above. >> > >> > -n >> > >> > -- >> > Nathaniel J. Smith >> > Postdoctoral researcher - Informatics - University of Edinburgh >> > http://vorpus.org >> >> >> >> -- >> Nathaniel J. Smith >> Postdoctoral researcher - Informatics - University of Edinburgh >> http://vorpus.org >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From donald at stufft.io Mon Dec 1 02:44:02 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Nov 2014 20:44:02 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547BC057.7080107@ens-lyon.org> Message-ID: <0F8A87D2-67A0-47E7-8022-3DE23BE405B6@stufft.io> > On Nov 30, 2014, at 8:41 PM, Eric Snow wrote: > > On Sun, Nov 30, 2014 at 6:25 PM, Donald Stufft wrote: >> The technical benefits mostly come from Github generally being a higher >> quality product than its competitors, both FOSS and not. > > Here's a solution to allow contribution via PR while not requiring > anything to switch VCS or hosting: > > 1. Set up mirrors of a desired repo on any hosting providers we choose. > 2. Set up a webhook for PRs that automatically creates/re-uses a > tracker ticket with the diff from the PR. > > The workflow does not change for the committer, but it gets easier to > contribute.
> > I did something like this for juju (https://github.com/juju/juju) when > we switched to github, weren't satisfied with their code review tool, > and switched to something else. We have a web hook that automatically > creates a review request for new PRs and updates the review request > when the PR gets updated. > > -eric Yea, this is essentially what I meant. We already have "unofficial" mirrors for PEPs and CPython itself on Github that are updated a few times a day. It wouldn't be very difficult I think to make them official mirrors and update them immediately after a push. Then just some integration with Roundup would enable people to send PRs on Github. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ericsnowcurrently at gmail.com Mon Dec 1 02:44:48 2014 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sun, 30 Nov 2014 18:44:48 -0700 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <68082201-601A-4A17-9CE4-EC1E07B5545C@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <68082201-601A-4A17-9CE4-EC1E07B5545C@stufft.io> Message-ID: On Sun, Nov 30, 2014 at 6:40 PM, Donald Stufft wrote: > I'm not sure if it got lost in the discussion or if it was purposely left > out. However I did come up with another idea, where we enable people to make > PRs against these repositories with PR integration within roundup. Using the > fact that it's trivial to turn a PR into a patch, core contributors (and the > "single source of truth")
for the repositories can remain Mercurial, with > core contributors needing to download a .patch file from Github instead of a > .patch file from Roundup. This could allow non-committers to use git if they > want, including PRs, but without moving things around. Hah. I just had a similar idea. > > The obvious cost is that since the committer side of things is still using > the existing tooling there's no "Merge button" or the other committer > benefits of Github; it would strictly be enabling people who aren't > committing directly to the repository to use git and Github. This is not an added cost. It's just the status quo and something that can be addressed separately. -eric From ericsnowcurrently at gmail.com Mon Dec 1 02:53:52 2014 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sun, 30 Nov 2014 18:53:52 -0700 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <0F8A87D2-67A0-47E7-8022-3DE23BE405B6@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547BC057.7080107@ens-lyon.org> <0F8A87D2-67A0-47E7-8022-3DE23BE405B6@stufft.io> Message-ID: On Sun, Nov 30, 2014 at 6:44 PM, Donald Stufft wrote: > Yea, this is essentially what I meant. We already have "unofficial" mirrors > for PEPs and CPython itself on Github that are updated a few times a day. > It wouldn't be very difficult I think to make them official mirrors and > update them immediately after a push. > > Then just some integration with Roundup would enable people to send PRs > on Github. Exactly. In my mind this eliminates almost all the controversy in this discussion. The question of switching hosting provider or DVCS becomes something to discuss separately (and probably just drop since at that point it doesn't improve much over the status quo). Of course, it does not address the more pressing concern of how to get contributions landed more quickly/steadily. :( However, that is largely what Nick addresses in PEP 462.
:) -eric From pierre-yves.david at ens-lyon.org Mon Dec 1 02:55:44 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 17:55:44 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <50A56F56-E636-43BB-BF5D-8DC30920BDB1@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <4ADD805B-B707-4244-9290-77AB0DAE396A@stufft.io> <547AC258.5090803@hastings.org> <50A56F56-E636-43BB-BF5D-8DC30920BDB1@stufft.io> Message-ID: <547BCAA0.8070403@ens-lyon.org> On 11/30/2014 08:30 AM, Donald Stufft wrote: > >> On Nov 30, 2014, at 2:08 AM, Larry Hastings > > wrote: >> >> >> On 11/29/2014 04:37 PM, Donald Stufft wrote: >>> On Nov 29, 2014, at 7:15 PM, Alex Gaynor wrote: >>>> Despite being a regular hg >>>> user for years, I have no idea how to create a local-only branch, or a branch >>>> which is pushed to a remote (to use the git term). >>> I also don't know how to do this. >> >> Instead of collectively scratching your heads, could one of you guys >> do the research and figure out whether or not hg supports this >> workflow? One of the following two things must be true: >> >> 1. hg supports this workflow (or a reasonable facsimile), which may >> lessen the need for this PEP. >> 2. hg doesn't support this workflow, which may strengthen the need >> for this PEP. >> >> Saying "I've been using hg for years and I don't know whether it >> supports this IMPORTANT THING" is not a particularly compelling argument. >> > > Comments like this make me feel like I didn't explain myself very well > in the > PEP. > > While I do think that supporting this workflow via an extension is worse > than > supporting it in core, I was about to point out that bookmarks have not been an extension for more than 3 years, but someone was faster than me. But I would like to reply a bit more on this extension FUD.
There are three kinds of Mercurial features: 1) the ones in Mercurial core 2) the ones in official extensions 3) the ones in third-party extensions (1) and (2) have the -same- level of support and stability. All of them are part of the same repo and same test suite, offer the same backward compatibility promise and are installed as part of the same package. Official extensions are usually not in core for various reasons: 1) It is an exotic feature (eg: the bugzilla communication extension) 2) Nobody did the work to move it into core (unfortunately) (eg: the progress bar extension) (similar situation: how long did it take to get pip shipped with python?) 3) We think it is an important feature but we are not happy with the current UX and would like something better before moving it into core (eg: histedit) Not using official extensions because they are extensions is similar to not using the python standard library because "They are modules, not part of the core language" > this isn't why this PEP exists. The current > workflow for > contributing is painful: for the repositories this is talking about, if I'm a > non-committer I have to email patches to a particular closed mailing list and > then sit around and wait. Your workflow issue does not seem to be particularly tied to the tool (Mercurial) itself but more to (1) very partial usage of the tool (2) the current project workflow. It would not be hard to (1) use the tools in a less painful way (2) rethink the project workflow to reduce the pain. This seems orthogonal to changing the tool. > The Pull Request based workflow is *massively* better than > uploading/emailing > patches around. So the question then becomes, if we're going to move to a PR > based workflow how do we do it? PEP 474 says that we should install some > software that works with Mercurial and supports Pull Requests. Another > thread > suggested that we should just use bitbucket which also supports > Mercurial > and use that.
(note: yes, manual upload of patches is terrible, and tracking patches by email is terrible too. But github is fairly bad at doing pull requests: it emphasizes the final result instead of the actual pull request content, encouraging massive patches and making the history muddier.) > This PEP says that git and Github have the popular vote, which is extremely > compelling as a feature because: Sure, Git//Github is more popular than Mercurial nowadays. But can I point out the irony of Python using the "more popular" argument? If most people here had found this argument definitive, they would currently be writing Java code instead of discussing Python on this list. -- Pierre-Yves David From pierre-yves.david at ens-lyon.org Mon Dec 1 03:03:42 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 18:03:42 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <4ADD805B-B707-4244-9290-77AB0DAE396A@stufft.io> Message-ID: <547BCC7E.1050208@ens-lyon.org> On 11/29/2014 05:15 PM, Chris Angelico wrote: > On Sun, Nov 30, 2014 at 11:37 AM, Donald Stufft wrote: >> I also don't know how to do this. When I'm doing multiple things for CPython >> my "branching" strategy is essentially using hg diff to create a patch file >> with my "branch" name (``hg diff > my-branch.patch``), then revert all of my >> changes (``hg revert --all --no-backup``), then either work on a new "branch" >> or switch to an old "branch" by applying the corresponding patch >> (``patch -p1 < other-branch.patch``). > > IMO, this is missing out on part of the benefit of a DVCS. When your > patches are always done purely on the basis of files, and have to be > managed separately, everything will be manual; and your edits won't > (normally) contain commit messages, authorship headers, date/time > stamps, and all the other things that a commit will normally have.
> Using GitHub automatically makes all that available; when someone > forks the project and adds a commit, that commit will exist and have > its full identity, metadata, etc, and if/when it gets merged into > trunk, all that will be carried through automatically. There is no reason to make this `hg diff` dance (other than ignorance). - You can make plain commits with your changes. - You can export commit content using `hg export` - You can change your patch content with all kinds of tools (amend, rebase, etc) - You can have multiple branches to handle concurrent workflows without any issue. We (Mercurial developers) will be sprinting again at PyCon 2015. We can probably arrange some workflow discussion/training there. -- Pierre-Yves David From pierre-yves.david at ens-lyon.org Mon Dec 1 03:09:05 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 18:09:05 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> Message-ID: <547BCDC1.8000601@ens-lyon.org> On 11/30/2014 08:44 AM, Brett Cannon wrote: > For me personally, if I knew a simple patch integrated cleanly and > passed on at least one buildbot -- when it wasn't a platform-specific > fix -- then I could easily push a "Commit" button and be done with it > (although this assumes single branch committing; doing this across > branches makes all of this difficult unless we finally resolve our > Misc/NEWS conflict issues so that in some instances it can be > automated). Instead I have to wait until I have a clone I can push from, > download a patch, apply it, run the unit tests myself, do the commit, > and then repeat a subset of that to whatever branches make sense. It's a > lot of work for which some things could be automated. The Misc/NEWS issue could be easily solved.
Mercurial allows specifying a custom merge tool for specific files, and I have already successfully written dedicated merge tools for files with similar issues. I've already discussed that with various people (Larry, Nick, etc.), and what is needed now is someone actually doing the work. Once you have such a tool, you can have automatic pull request merging/rebasing through a web UI. -However-, you can only do that if you actually own said interface, because proprietary platforms are not going to let you run arbitrary code on their machines. -- Pierre-Yves David From pierre-yves.david at ens-lyon.org Mon Dec 1 03:12:18 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 18:12:18 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> Message-ID: <547BCE82.9020204@ens-lyon.org> On 11/30/2014 09:09 AM, Donald Stufft wrote: >> Even converting between two FLOSS tools is an amazing amount of work. Look at >> >what Eric Raymond did with reposurgeon to convert from Bazaar to git. > I fail to see how this is a reasonable argument to make at all since, as you > mentioned, converting between two FLOSS tools can be an amazing amount of work. > Realistically the amount of work is going to be predicated on whether or not > there is a tool that already handles the conversion for you. Assuming of course > that the data is available to you at all. The statement that switching a whole infrastructure from one tool to another is "cheap" and predictable sounds extremely naive to me.
> As a particularly relevant example, switching from Mercurial to Git is as easy > as installing hg-git, creating a bookmark for master that tracks default, and > then pushing to a git repository. Migrating the DVCS content is usually easy. The hard part is then to find all the scripts, tools, and docs that rely on the previous tool and upgrade them to the new one, sometimes struggling to find feature parity. This is rarely cheap and never "predictable". -- Pierre-Yves David From wes.turner at gmail.com Mon Dec 1 03:49:18 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 30 Nov 2014 20:49:18 -0600 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <547BCDC1.8000601@ens-lyon.org> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <547BCDC1.8000601@ens-lyon.org> Message-ID: On Sun, Nov 30, 2014 at 8:09 PM, Pierre-Yves David < pierre-yves.david at ens-lyon.org> wrote: > > > On 11/30/2014 08:44 AM, Brett Cannon wrote: > >> For me personally, if I knew a simple patch integrated cleanly and >> passed on at least one buildbot -- when it wasn't a platform-specific >> fix -- then I could easily push a "Commit" button and be done with it >> (although this assumes single branch committing; doing this across >> branches makes all of this difficult unless we finally resolve our >> Misc/NEWS conflict issues so that in some instances it can be >> automated). Instead I have to wait until I have a clone I can push from, >> download a patch, apply it, run the unit tests myself, do the commit, >> and then repeat a subset of that to whatever branches make sense. It's a >> lot of work for which some things could be automated. >> > > The Misc/NEWS issue could be easily solved. Mercurial allows specifying a > custom merge tool for specific files. And I already successfully wrote > dedicated merge tools for files with similar issues. 
You might take a look at the hubflow/gitflow branching workflow diagrams? https://datasift.github.io/gitflow/IntroducingGitFlow.html (GitFlow -> Hubflow) feature/name, develop, hotfix/name, releases/v0.0.1, master > I've already discussed that with various people (Larry, Nick, etc.), and what > is needed now is someone actually doing the work. > > Once you have such a tool, you can have automatic pull request > merging/rebasing through a web UI. However, you can only do that if you > actually own the said interface, because proprietary platforms are not > going to let you run arbitrary code on their machines. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Mon Dec 1 04:06:03 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 30 Nov 2014 22:06:03 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <27ABC536-C712-4109-9ED7-3B15C7818143@stufft.io> <3AC4141B-354B-47BA-8CFD-3CAAAC852ECC@stufft.io> Message-ID: On 11/30/2014 4:45 PM, Donald Stufft wrote: I think you are stimulating more heated discussion than is necessary by trying to do too much, both in terms of physical changes and in terms of opinion persuasion. I am reminded of the integer division change. The discussion was initially over-heated partly because the actual change to Python was coupled, quite unnecessarily, with a metaphysical opinion as to the ontological status of natural numbers versus integers -- an opinion that many, including me, considered to be wrong. Once the metaphysical question was dropped, discussion went more smoothly. > Here's another idea for an experiment that might be more generally useful. I think a true experiment with one repository is easily justified. 
The PEP repository is an obvious choice because a) the main editor is in favor, b) many but not all core committers interact with it, and c) it is not tied to the tracker. The easier and more controversial option would be to move it completely to GitHub. I expect part of the result would be pull requests from committers who would otherwise commit directly to hg. The harder and, I think, more useful (generalizable) option would be to set up a mirror (or keep the hg version as a mirror ;-) and experiment with coordinating the two mirrors. Such an experiment should not preclude other experiments. If Brett wants to similarly experiment with devinabox on another site, let him. If the horrible-to-Nick prospect of possibly moving CPython to GitHub, if nothing else is done, provokes Nick to improve the workflow otherwise, great. If the mirror experiment is successful, the devguide might be the next experiment. It does not have any one maintainer, and *is* tied to the tracker. But herein lies the problem with the devguide. There are 22 issues, down just 1 from about a year ago. All but 2 are more than a year old. Many (most?) have patches, but enough consensus for anyone to push is hard. As with other doc issues, there is no 'test' for when a non-trivial patch is 'good enough' and hence, in my opinion, too much bikeshedding and pursuit of the perfect. > As we've said there are two sides to the coin here, non-comitters and > comitters, a lot of the benefit of moving to Github is focused at > non-comitters although there are benefits for comitters themselves. For maintaining Idle, I do not see the benefit. Downloading patches from the tracker to my dev directory is trivial. I then apply to the current 3.x maintenance version, possibly with some hand editing, revise (always, that I can remember), and test. Once satisfied, I backport to 2.7. > What if we focused an experiment on the benefits to non-comitters? Users benefit by more patches being applied. 
How do non-committers benefit, really, by making it easier for them to submit patches that sit for years? Ignore that. Guido says that working with PEPs on GitHub would benefit him as a committer. > It's possible to maintain a git mirror of a Mercurial repository, in fact > we already have that at github.com/python/cpython. What if we permit people > to make PRs against that repository, and then take those PRs and paste them > into roundup? Sort of like the "Remote hg repo" field. Then we can > create some integration that would post a comment to the ticket whenever > that PR is updated > (sort of like the notification that happens when a new patch is uploaded). > The canonical repository would still be hg.python.org and in order to actually > commit the PR committers would need to turn the PR into a patch > (trivially easy, just add .diff or .patch to the PR URL). This would be the focus of an experiment with the devguide, even if we have to generate some somewhat artificial pull requests for testing. I really hope you try to make the above work. The 3rd stage would be to expand on the above for doc patches. This is one area where we would get small ready-to-commit patches -- that do not need to be reported to the tracker. Would it be possible to automate the following? Turn a doc PR into a patch, apply the patch to all 3 branches (perhaps guided by the PR message), and generate a report, along with, currently, a 2.7 and 3.4 PR. (I am thinking about how to do some of these doc patches with hg on Windows.) [snip premature discussion of moving cpython 'whole hog' to github.] Summary research plan: 3 experiments, each depending on the preceding. 1. Link 2 repositories, one with pull requests 2. Link the PRs with the tracker 3. Make PRs work better with our multibranch, 2-head monster. Report after each experiment (ending with 'success' or 'give-up'). 
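The "add .diff or .patch to the PR URL" step quoted above is mechanical enough to script. A sketch (the helper names are made up; the PR URL in the comment is illustrative):

```python
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

def pr_patch_url(pr_url, fmt="patch"):
    """Return the raw patch (or diff) URL for a GitHub pull request.

    GitHub serves the patch by appending '.patch' or '.diff' to the
    PR URL, as noted in the quoted proposal.
    """
    return "%s.%s" % (pr_url.rstrip("/"), fmt)

def fetch_patch(pr_url):
    # Network call; this is the part a tracker integration would use,
    # e.g. fetch_patch("https://github.com/python/cpython/pull/42").
    return urlopen(pr_patch_url(pr_url)).read()
```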
-- Terry Jan Reedy From pierre-yves.david at ens-lyon.org Mon Dec 1 04:08:03 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 19:08:03 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <9BC1BE36-8A24-4AEA-A399-B2FE61A8BBF3@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <9BC1BE36-8A24-4AEA-A399-B2FE61A8BBF3@stufft.io> Message-ID: <547BDB93.3010003@ens-lyon.org> On 11/29/2014 06:01 PM, Donald Stufft wrote: > The reason the PEP primarily focuses on the popularity of the tool is > because as you mentioned, issues like poor documentation, bad support for a > particular platform, a particular workflow not being very good can be > solved by working with the tool authors to solve that particular problem. I wouldn't > consider those issues in a vacuum to be a good reason to migrate away from that > tool. As I understand it[1], my current employer (Facebook) picked Mercurial over git because these very reasons ended up being important. And this analysis has been validated by another big company[2]. Git's implementation is very tied to the Linux world, and this slowed down its gaining of Windows support. This is not something that will change by discussing with the authors: "btw, can you rewrite your tool with a different technology and concepts?" Mercurial is extensible in Python, very extensible. In my previous job one of our clients switched to Mercurial and was able to get an extension adding commands to match its exact previous code-review workflow in a couple hundred lines of Python. (You could have the same for Python.) Mercurial developers are already connected to the Python community. They are invited to language summits, and are regular PyCon speakers and attendees, etc. 
All these things contradict "bah, any project would not make a difference". > However there's very little that CPython can do to get more people using > Mercurial, and presumably the authors of Mercurial are already doing what they > can to get people to use them. Mercurial is an open source project. We have no communication department, and no communication budget actually. Over the years, more and more contributors are actually paid to contribute, but they usually focus on making their employer's users happy, something that rarely involves getting more outside-world users. We mostly rely on the network effect to gain more users (yes, we are losing to git on this, but still growing anyway). Part of this network effect is having big projects like CPython using Mercurial. It also implies that CPython devs are willing to look at how the tool works and that the project tries to take advantage of the tool's strengths. This would turn the situation into one of mutual benefit. You are happy with Mercurial and we are happy with Python. However, moving to git and github sends a very different signal: if you want to be a successful command line tool, use C and bash. If you want to be a successful website, use Ruby on Rails. -- Pierre-Yves David [1] This is a personal statement and is not to be linked to the opinion of my employer's PR department. [2] Which I'm not naming, for fear of their PR assassins. 
From donald at stufft.io Mon Dec 1 04:43:53 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Nov 2014 22:43:53 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <547BDB93.3010003@ens-lyon.org> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <9BC1BE36-8A24-4AEA-A399-B2FE61A8BBF3@stufft.io> <547BDB93.3010003@ens-lyon.org> Message-ID: <5413B236-F415-4F86-BDD8-58B8E388B857@stufft.io> > On Nov 30, 2014, at 10:08 PM, Pierre-Yves David wrote: > > > > On 11/29/2014 06:01 PM, Donald Stufft wrote: >> The reason the PEP primarily focuses on the popularity of the tool is >> because as you mentioned, issues like poor documentation, bad support for a >> particular platform, a particular workflow not being very good can be >> solved by working with the tool authors to solve that particular problem. I wouldn't >> consider those issues in a vacuum to be a good reason to migrate away from that >> tool. > > As I understand it[1], my current employer (Facebook) picked Mercurial over git because these very reasons ended up being important. And this analysis has been validated by another big company[2]. > > Git's implementation is very tied to the Linux world, and this slowed down its gaining of Windows support. This is not something that will change by discussing with the authors: "btw, can you rewrite your tool with a different technology and concepts?" > Mercurial is extensible in Python, very extensible. In my previous job one of our clients switched to Mercurial and was able to get an extension adding commands to match its exact previous code-review workflow in a couple hundred lines of Python. (You could have the same for Python.) > Mercurial developers are already connected to the Python community. They are invited to language summits, and are regular PyCon speakers and attendees, etc. 
> > All these things contradict "bah, any project would not make a difference". > >> However there's very little that CPython can do to get more people using >> Mercurial, and presumably the authors of Mercurial are already doing what they >> can to get people to use them. > > Mercurial is an open source project. We have no communication department, and no communication budget actually. Over the years, more and more contributors are actually paid to contribute, but they usually focus on making their employer's users happy, something that rarely involves getting more outside-world users. We mostly rely on the network effect to gain more users (yes, we are losing to git on this, but still growing anyway). Part of this network effect is having big projects like CPython using Mercurial. It also implies that CPython devs are willing to look at how the tool works and that the project tries to take advantage of the tool's strengths. This would turn the situation into one of mutual benefit. You are happy with Mercurial and we are happy with Python. > > However, moving to git and github sends a very different signal: if you want to be a successful command line tool, use C and bash. If you want to be a successful website, use Ruby on Rails. I want to address this point specifically, because it's not particularly related to the hg vs git discussion that Guido has asked people to stop having. The idea that unless Python as a project always picks something written in Python over something written in something else we're somehow signaling to the world that if you want to write X kind of tool you should do it in some other language is laughable. It completely ignores everything about the tools except what language they are written in. 
From tjreedy at udel.edu Mon Dec 1 04:48:04 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 30 Nov 2014 22:48:04 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <547BC232.4040008@ens-lyon.org> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547BC232.4040008@ens-lyon.org> Message-ID: On 11/30/2014 8:19 PM, Pierre-Yves David wrote: > Mercurial has had robust Windows support for a long time. This support is > native (not using Cygwin) and properly handles all kinds of strange corner > cases. We have a large scale ecosystem (http://unity3d.com/) using > Mercurial on Windows. > > We also have a full featured GUI client, http://tortoisehg.bitbucket.org/. TortoiseHg comes with Hg Workbench, which I use for everything I can. The only thing I do not use it for is file (versus repository/changeset) commands, like file annotation. I do that with a right click in Windows Explorer. In my current physical state, having to do everything at a Windows command prompt would be more onerous. There exists a TortoiseGit, but it lacks the unifying Workbench GUI. There exists at least one 3rd party workbench that purports to work with both hg and git. I have not looked at it yet. > It is actively developed by people who stay in touch with Mercurial > upstream, so new features tend to land in the GUI really fast. 
-- Terry Jan Reedy From pierre-yves.david at ens-lyon.org Mon Dec 1 04:56:20 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Sun, 30 Nov 2014 19:56:20 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <5413B236-F415-4F86-BDD8-58B8E388B857@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <9BC1BE36-8A24-4AEA-A399-B2FE61A8BBF3@stufft.io> <547BDB93.3010003@ens-lyon.org> <5413B236-F415-4F86-BDD8-58B8E388B857@stufft.io> Message-ID: <547BE6E4.3010701@ens-lyon.org> On 11/30/2014 07:43 PM, Donald Stufft wrote: > The idea that unless Python as a project always picks something written in Python over something written in something else we're somehow signaling to the world that if you want to write X kind of tool you should do it in some other language is laughable. It completely ignores anything at all about the tools except what language they are written in. My point is not that "Python should always" pick Python over any other language. My point is: a tool written in Python, with developers already involved with the community, is something to take into account. And if we simplify the technical debate (and shut it down, as requested) by saying both tools are equivalent, this is part of the other considerations to take into account. -- Pierre-Yves David From guido at python.org Mon Dec 1 05:06:06 2014 From: guido at python.org (Guido van Rossum) Date: Sun, 30 Nov 2014 20:06:06 -0800 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: References: <547B96D8.7050700@hotpy.org> Message-ID: On Sun, Nov 30, 2014 at 5:42 PM, Nathaniel Smith wrote: > On Mon, Dec 1, 2014 at 1:27 AM, Guido van Rossum wrote: > > Nathaniel, did you look at Brett's LazyLoader? It overcomes the subclass > > issue by using a module loader that makes all modules instances of a > > (trivial) Module subclass. I'm sure this approach can be backported as far > > as you need to go. 
> > The problem is that by the time your package's code starts running, > it's too late to install such a loader. Brett's strategy works well > for lazy-loading submodules (e.g., making it so 'import numpy' makes > 'numpy.testing' available, but without the speed hit of importing it > immediately), but it doesn't help if you want to actually hook > attribute access on your top-level package (e.g., making 'numpy.foo' > trigger a DeprecationWarning -- we have a lot of stupid exported > constants that we can never get rid of because our rules say that we > have to deprecate things before removing them). > > Or maybe you're suggesting that we define a trivial heap-allocated > subclass of PyModule_Type and use that everywhere, as a > quick-and-dirty way to enable __class__ assignment? (E.g., return it > from PyModule_New?) I considered this before but hesitated b/c it > could potentially break backwards compatibility -- e.g. if code A > creates a PyModule_Type object directly without going through > PyModule_New, and then code B checks whether the resulting object is a > module by doing isinstance(x, type(sys)), this will break. (type(sys) > is a pretty common way to get a handle to ModuleType -- in fact both > types.py and importlib use it.) So in my mind I sorta lumped it in > with my Option 2, "minor compatibility break". OTOH maybe anyone who > creates a module object without going through PyModule_New deserves > whatever they get. > Couldn't you install a package loader using some install-time hook? Anyway, I still think that the issues with heap types can be overcome. Hm, didn't you bring that up before here? Was the conclusion that it's impossible? -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Mon Dec 1 05:30:15 2014 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Mon, 01 Dec 2014 13:30:15 +0900 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <85sih0urmp.fsf@benfinney.id.au> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> Message-ID: <87sih0m1pk.fsf@uwakimon.sk.tsukuba.ac.jp> Ben Finney writes: > Whether that is a *problem* is a matter of debate, but the fact that > Git's common workflow commonly discards information that some consider > valuable, is a simple fact. It *was* a simple fact in git 0.99. Since the advent of reflogs (years ago), it is simply false. Worse, common hg workflows (as developed at the same time as your impression of "Git's common workflow") *also* discard information that some (namely, me) consider valuable, because it *never gets recorded*. Exploring the git reflog has taught me things about my workflow and skills (and lack thereof ;-) that I'd never learn from an hg or bzr branch. In the end, the logs look very similar. I can only conclude that the rebasing that I do in git is implicit in the process of composing a "coherent changeset" in hg or bzr. I also typically have a bunch of "loose commits" lying around in Mercurial queues or bzr stashes, which amount to rebases when reapplied. > If Ethan chooses to make that a factor in his decisions about Git, > the facts are on his side. Hardly. All he needs to do is pretend git is hg, and avoid rebase. The only thing that should matter is the annoyance of learning a new tool. 
From rosuav at gmail.com Mon Dec 1 05:54:24 2014 From: rosuav at gmail.com (Chris Angelico) Date: Mon, 1 Dec 2014 15:54:24 +1100 Subject: [Python-Dev] Joining the PEP Editors team Message-ID: In response to Guido's call for volunteers, I'm offering myself as a PEP editor. Who is in charge of this kind of thing? Who manages public key lists etc? ChrisA From ethan at stoneleaf.us Mon Dec 1 07:25:21 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 30 Nov 2014 22:25:21 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> Message-ID: <547C09D1.2090201@stoneleaf.us> One argument that keeps coming up is transferability of knowledge: knowing git and/or GitHub, as many seem to, it therefore becomes easier to commit to the Python ecosystem. What about the transferability of Python knowledge? Because I know Python, I can customize hg; because I know Python I can customize Roundup. I do not choose tools simply because they are written in Python -- I choose them because, being written in Python, I can work on them if I need to: I can enhance them, I can fix them, I can learn from them. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From ethan at stoneleaf.us Mon Dec 1 07:27:34 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 30 Nov 2014 22:27:34 -0800 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? 
In-Reply-To: References: <547B6FBE.8040205@stoneleaf.us> Message-ID: <547C0A56.20205@stoneleaf.us> On 11/30/2014 03:41 PM, Terry Reedy wrote: > On 11/30/2014 2:27 PM, Ethan Furman wrote: >> On 11/30/2014 11:15 AM, Guido van Rossum wrote: >>> On Sun, Nov 30, 2014 at 6:15 AM, Brett Cannon wrote: >>>> On Sat, Nov 29, 2014, 21:55 Guido van Rossum wrote: >>>>> >>>>> All the use cases seem to be about adding some kind of getattr hook >>>>> to modules. They all seem to involve modifying the CPython C code >>>>> anyway. So why not tackle that problem head-on and modify module_getattro() >>>>> to look for a global named __getattr__ and if it exists, call that instead >>>>> of raising AttributeError? >>>> >>>> Not sure if anyone thought of it. :) Seems like a reasonable solution to me. >>>> Be curious to know what the benchmark suite said the impact was. >>> >>> Why would there be any impact? The __getattr__ hook would be similar to the >>> one on classes -- it's only invoked at the point where otherwise AttributeError >>> would be raised. >> >> I think the bigger question is how do we support it back on 2.7? > > I do not understand this question. We don't add new features to 2.7 and this definitely is one. My understanding of one of the use-cases was being able to issue warnings about deprecated attributes, which would be most effective if a backport could be written for current versions. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From donald at stufft.io Mon Dec 1 08:43:11 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 1 Dec 2014 02:43:11 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <547C09D1.2090201@stoneleaf.us> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> Message-ID: <8798604C-2E43-4F89-BD6B-C12C9D572FDE@stufft.io> > On Dec 1, 2014, at 1:25 AM, Ethan Furman wrote: > > One argument that keeps coming up is transferability of knowledge: knowing git and/or GitHub, as many seem to, it > therefore becomes easier to commit to the Python ecosystem. > > What about the transferability of Python knowledge? Because I know Python, I can customize hg; because I know Python I > can customize Roundup. > > I do not choose tools simply because they are written in Python -- I choose them because, being written in Python, I can > work on them if I need to: I can enhance them, I can fix them, I can learn from them. > > -- > ~Ethan~ > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/donald%40stufft.io Git uses the idea of small individual commands that do small things, so you can write your own commands that work on text streams to extend git and you can even write those in Python. 
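Concretely: git resolves `git foo` to any executable named `git-foo` on $PATH, so a custom subcommand can indeed be a short Python script. A hypothetical sketch (the `git count` command name and output format are made up):

```python
#!/usr/bin/env python
"""Hypothetical `git count` subcommand: install this file as
`git-count` on $PATH and git will run it for `git count`."""
import subprocess

def count_commits(rev_list_output):
    # `git rev-list` prints one commit hash per line.
    return sum(1 for line in rev_list_output.splitlines() if line.strip())

def main():
    out = subprocess.check_output(
        ["git", "rev-list", "HEAD"], universal_newlines=True)
    print("%d commits reachable from HEAD" % count_commits(out))

if __name__ == "__main__":
    try:
        main()
    except (OSError, subprocess.CalledProcessError):
        # Not inside a git repository (or git not installed); as a
        # git subcommand this would only ever run inside one.
        pass
```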
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From solipsis at pitrou.net Mon Dec 1 10:53:24 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 1 Dec 2014 10:53:24 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <27ABC536-C712-4109-9ED7-3B15C7818143@stufft.io> <3AC4141B-354B-47BA-8CFD-3CAAAC852ECC@stufft.io> Message-ID: <20141201105324.4ecdf37d@fsol> On Sun, 30 Nov 2014 22:06:03 -0500 Terry Reedy wrote: > > If the mirror experiment is successful, the devguide might be the next > experiment. It does not have any one maintainer, and *is* tied to the > tracker. But herein lies the problem with the devguide. There are 22 > issues, down just 1 from about a year ago. All but 2 are more than a > year old. Many (most?) have patches, but enough consensus for anyone to > push is hard. As with other doc issues, there is no 'test' for when a > non-trivial patch is 'good enough' and hence, in my opinion, too much > bikeshedding and pursuit of the perfect. Speaking as someone who contributed to the devguide, I think it has become good enough and have therefore largely stopped caring. Also, many requests seem to be of the "please add this thing" kind, which is a slippery slope. Regards Antoine. 
From mcepl at cepl.eu Mon Dec 1 08:40:27 2014 From: mcepl at cepl.eu (=?UTF-8?Q?Mat=C4=9Bj?= Cepl) Date: Mon, 1 Dec 2014 08:40:27 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <7F8618A9-3167-4BF0-8F98-CCF9DA8539BC@stufft.io> <854mthvsvr.fsf@benfinney.id.au> Message-ID: On 2014-11-30, 11:18 GMT, Ben Finney wrote: > Donald Stufft writes: > >> I think there is a big difference here between using a closed source >> VCS or compiler and using a closed source code host. Namely in that >> the protocol is defined by git so switching from one host to another >> is easy. > > GitHub deliberately encourages proprietary features that create valuable > data that cannot be exported -- the proprietary GitHub-specific pull > requests being a prime example. What I really don't understand is why this discussion is hg v. GitHub, when it should be hg v. git. Particular hosting is a secondary issue and it could be GitHub or git.python.org (with some FLOSS git hosting package ... cgit/gitolite, gitorious, gitlab, etc.) or python.gitorious.org (I believe Gitorious people might be happy to host you) or whatever else. Best, Matěj From mcepl at cepl.eu Mon Dec 1 08:46:46 2014 From: mcepl at cepl.eu (=?UTF-8?Q?Mat=C4=9Bj?= Cepl) Date: Mon, 1 Dec 2014 08:46:46 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547BCE82.9020204@ens-lyon.org> Message-ID: On 2014-12-01, 02:12 GMT, Pierre-Yves David wrote: > Migrating the DVCS content is usually easy. This is a lovely mantra, but do you speak from your own experience? 
I did move rope from Bitbucket to https://github.com/python-rope and it was A LOT of work (particularly issues, existing pull requests, and other related stuff like the many websites the project holds). And rope is a particularly simple (and, being almost dead, inactive) project. Best, Matěj From mcepl at cepl.eu Mon Dec 1 09:01:11 2014 From: mcepl at cepl.eu (=?UTF-8?Q?Mat=C4=9Bj?= Cepl) Date: Mon, 1 Dec 2014 09:01:11 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On 2014-12-01, 00:50 GMT, Donald Stufft wrote: > The only thing that is true is that git users are more likely to use the > ability to rewrite history than Mercurial users are, but you'll typically > find that people generally don't do this on public branches, only on private > branches. And I would add that any reasonable git repository manager (why are we talking only about GitHub as if there were no cgit, gitorious, gitlab, gitblit, etc.?) can forbid force-pushes, so the history can be as sacrosanct as with Mercurial. 
Matěj From solipsis at pitrou.net Mon Dec 1 11:07:32 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 1 Dec 2014 11:07:32 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547BCE82.9020204@ens-lyon.org> Message-ID: <20141201110732.161022d7@fsol> On Mon, 1 Dec 2014 08:46:46 +0100 Matěj Cepl wrote: > On 2014-12-01, 02:12 GMT, Pierre-Yves David wrote: > > Migrating the DVCS content is usually easy. > > This is lovely mantra, but do you speak from your own > experience? I did move rope from Bitbucket to > https://github.com/python-rope and it was A LOT of work > (particularly issues, existing pull requests, and other related > stuff like many websites the projects holds). He did say "DVCS content" (as in: stuff that's stored in git or hg), not ancillary data such as pull requests, issues, wikis and Web sites. But you're making his point: migrating a source code repository from hg to git or vice-versa is relatively easy, it's the surrounding stuff that's hard. Regards Antoine. 
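For reference, the "easy half" -- mirroring the DVCS content itself -- is roughly the hg-git recipe quoted earlier in the thread. A sketch of the configuration (repository URL illustrative; assumes the hg-git extension and its dependencies are installed):

```ini
# ~/.hgrc
[extensions]
hggit =

[paths]
; push target for the git mirror (illustrative URL)
github = git+ssh://git@github.com/example/mirror.git
```

With that in place, `hg bookmark -r default master` makes git's `master` track hg's `default`, and `hg push github` publishes it -- which is exactly the part Antoine concedes is easy; none of the surrounding tooling comes along.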
From steve at pearwood.info Mon Dec 1 14:37:22 2014 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 2 Dec 2014 00:37:22 +1100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> References: <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> Message-ID: <20141201133722.GG11424@ando.pearwood.info> On Sun, Nov 30, 2014 at 02:56:22PM -0500, Donald Stufft wrote: > As I mentioned in my other email, we're already supporting two > different tools, and it's a hope of mine to use this as a sort of > testbed for moving the other repositories as well. If we go down this path, can we have some *concrete* and *objective* measures of success? If moving to git truly does improve things, then the move can be said to be a success. But if it makes no concrete difference, then we've wasted our time. In six months' time, how will we know which it is? Can we have some concrete and objective measures of what would count as success, and some Before and After measurements? Just off the top of my head... if the number of documentation patches increases significantly (say, by 30%) after six months, that's a sign the move was successful. It's one thing to say that using hg is discouraging contributors, and that hg is much more popular. It's another thing to say that moving to git will *actually make a difference*. Maybe all the would-be contributors using git are too busy writing kernel patches for Linus or using Node.js and wouldn't be caught dead with Python :-) With concrete and objective measures of success, you will have ammunition to suggest moving the rest of Python to git in a few years time.
And without it, we'll also have good evidence that any further migration to git may be a waste of time and effort and we should focus our energy elsewhere rather than on git vs hg holy wars. [...] > I also think it's hard to look at a company like bitbucket, for > example, and say they are *better* than Github just because they > didn't have a public and inflammatory event. We can't judge companies on what they might be doing behind closed doors, only on what we can actually see of them. Anybody might be rotten bounders and cads in private, but how would we know? It's an imperfect world and we have imperfect knowledge but still have to make a decision as best we can. > Attempting to reduce the cognitive burden for contributing and aligning ourselves > with the most popular tools allows us to take advantage of the network effects > of these tools' popularity. This can be the difference between someone with a limited > amount of time being able to contribute or not, which can make real inroads towards > making it easier for underprivileged people to contribute much more than refusing > to use a product of one group of people over another just because the other group > hasn't had a public and inflammatory event. In other contexts, that could be a pretty awful excuse for inaction against the most egregiously bad behaviour. "Sure, Acme Inc might have adulterated baby food with arsenic, but other companies might have done worse things that we haven't found out about. So we should keep buying Acme's products, because they're cheaper and that's good for the poor." Not that I'm comparing GitHub's actions with poisoning babies. What GitHub did was much worse.
*wink* -- Steven
From steve at pearwood.info Mon Dec 1 14:48:50 2014 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 2 Dec 2014 00:48:50 +1100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <20141201133722.GG11424@ando.pearwood.info> References: <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <20141201133722.GG11424@ando.pearwood.info> Message-ID: <20141201134850.GI11424@ando.pearwood.info> On Tue, Dec 02, 2014 at 12:37:22AM +1100, Steven D'Aprano wrote: [...] > It's one thing to say that using hg is discouraging contributors, and > that hg is much more popular. /s/more/less/ -- Steven
From barry at python.org Mon Dec 1 15:33:33 2014 From: barry at python.org (Barry Warsaw) Date: Mon, 1 Dec 2014 09:33:33 -0500 Subject: [Python-Dev] Joining the PEP Editors team In-Reply-To: References: Message-ID: <20141201093333.5eee9fd7@limelight.wooz.org> On Dec 01, 2014, at 03:54 PM, Chris Angelico wrote: >In response to Guido's call for volunteers, I'm offering myself as a >PEP editor. Who is in charge of this kind of thing? Who manages public >key lists etc? I can add you to the pep editors mailing list. Please send me an off-list message with your preferred email address. I'd prefer it if you GPG signed that message. See here for getting your SSH key registered such that you can make commits to the PEP repo.
https://docs.python.org/devguide/faq.html#ssh Cheers, -Barry
From wes.turner at gmail.com Mon Dec 1 15:37:16 2014 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 1 Dec 2014 08:37:16 -0600 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <547C09D1.2090201@stoneleaf.us> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> Message-ID: On Mon, Dec 1, 2014 at 12:25 AM, Ethan Furman wrote: > One argument that keeps coming up is transferability of knowledge: > knowing git and/or GitHub, as many seem to, it > therefore becomes easier to commit to the Python ecosystem. > > What about the transferability of Python knowledge? Because I know > Python, I can customize hg; because I know Python I > can customize Roundup. > > I do not choose tools simply because they are written in Python -- I > choose them because, being written in Python, I can > work on them if I need to: I can enhance them, I can fix them, I can > learn from them. > > There are lots of Python tools for working with Git: * https://pypi.python.org/pypi/vcs * https://pypi.python.org/pypi/dulwich * https://pypi.python.org/pypi/hg-git * http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html ("GitFS") * https://github.com/libgit2/pygit2 (C) * https://pypi.python.org/pypi/GitPython (Python) * https://pypi.python.org/pypi/pyrpo (subprocess wrapper for git, hg, bzr, svn) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brett at python.org Mon Dec 1 16:07:47 2014 From: brett at python.org (Brett Cannon) Date: Mon, 01 Dec 2014 15:07:47 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On Sun Nov 30 2014 at 8:25:25 PM Guido van Rossum wrote: > Can we please stop the hg-vs-git discussion? We've established earlier > that the capabilities of the DVCS itself (hg or git) are not a > differentiator, and further he-said-she-said isn't going to change > anybody's opinion. > +1 from me as well. I view this as a discussion of platforms and not DVCSs. > > What's left is preferences of core developers, possibly capabilities of > the popular websites (though BitBucket vs. GitHub seems to be a wash as > well), and preferences of contributors who aren't core developers (using > popularity as a proxy). It seems the preferences of the core developers are > mixed, while the preferences of non-core contributors are pretty clear, so > we have a problem weighing these two appropriately. > > Also, let's not get distracted by the needs of the CPython repo, issue > tracker, and code review tool. Arguments about core developers vs. > contributors for CPython shouldn't affect the current discussion. > > Next, two of the three repos mentioned in Donald's PEP 481 are owned by > Brett Cannon, according to the Contact column listed on hg.python.org. I > propose to let Brett choose whether to keep these on hg.python.org, move > to BitBucket, or move to GitHub. @Brett, what say you? (Apart from "I'm > tired of the whole thread." 
:-) > You do one or two nice things for python-dev and you end up being saddled with them for life. ;) Sure, I can handle the devguide and devinabox decisions since someone has to and it isn't going to be more "fun" for someone else compared to me. > > The third one is the peps repo, which has python-dev at python.org as > Contact. It turns out that Nick is by far the largest contributor (he > committed 215 of the most recent 1000 changes) so I'll let him choose. > "Perk" of all those packaging PEPs. > > Finally, I'd like to get a few more volunteers for the PEP editors list, > preferably non-core devs: the core devs are already spread too thinly, and > I really shouldn't be the one who picks new PEP numbers and checks that > PEPs are well-formed according to the rules of PEP 1. A PEP editor > shouldn't have to pass judgment on the contents of a PEP (though they may > choose to correct spelling and grammar). Knowledge of Mercurial is a plus. > :-) > And based on how Nick has been talking, will continue to be at least in the medium term. =) -Brett > > On Sun, Nov 30, 2014 at 4:50 PM, Donald Stufft wrote: > >> >> > On Nov 30, 2014, at 7:43 PM, Ben Finney >> wrote: >> > >> > Donald Stufft writes: >> > >> >> It's not lost, [... a long, presumably-accurate discourse of the many >> >> conditions that must be met before ...] you can restore it. >> > >> > This isn't the place to discuss the details of Git's internals, I think. >> > I'm merely pointing out that: >> > >> >> The important thing to realize is that a "branch" isn't anything >> >> special in git. >> > >> > Because of that, Ethan's impression -- that Git's default behaviour >> > encourages losing history (by re-writing the history of commits to be >> > other than what they were) is true, and "Git never loses history" simply >> > isn't true.
>> > >> > Whether that is a *problem* is a matter of debate, but the fact that >> > Git's common workflow commonly discards information that some consider >> > valuable, is a simple fact. >> > >> > If Ethan chooses to make that a factor in his decisions about Git, the >> > facts are on his side. >> >> Except it's not true at all. >> >> That data is all still there if you want it to exist and it's not a real >> differentiator between Mercurial and git because Mercurial has the ability >> to do the same thing. Never mind the fact that "lose" your history makes >> it >> sound accidental instead of on purpose. It's like saying that ``rm >> foo.txt`` >> will "lose" the data in foo.txt. So either it was a misunderstanding in >> which case I wanted to point out that those operations don't magically >> lose >> information or it's a purposely FUDish statement in which case I want to >> point out that the statement is inaccurate. >> >> The only thing that is true is that git users are more likely to use the >> ability to rewrite history than Mercurial users are, but you'll typically >> find that people generally don't do this on public branches, only on >> private >> branches. Which again doesn't make much sense in this context since >> generally >> currently the way people are using Mercurial with CPython you're using >> patches to transfer the changes from the contributor to the committer so >> you're >> "losing" that history regardless. >> >> --- >> Donald Stufft >> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From guido at python.org Mon Dec 1 16:38:42 2014 From: guido at python.org (Guido van Rossum) Date: Mon, 1 Dec 2014 07:38:42 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: As far as I'm concerned I'm just waiting for your decision now. On Mon, Dec 1, 2014 at 7:07 AM, Brett Cannon wrote: > [...]
-- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mcepl at cepl.eu Mon Dec 1 13:05:01 2014 From: mcepl at cepl.eu (=?UTF-8?Q?Mat=C4=9Bj?= Cepl) Date: Mon, 1 Dec 2014 13:05:01 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> <8798604C-2E43-4F89-BD6B-C12C9D572FDE@stufft.io> Message-ID: On 2014-12-01, 07:43 GMT, Donald Stufft wrote: >> I do not choose tools simply because they are written in >> Python -- I choose them because, being written in Python, >> I can work on them if I need to: I can enhance them, I can >> fix them, I can learn from them. > > Git uses the idea of small individual commands that do small things, > so you can write your own commands that work on text streams to > extend git and you can even write those in Python. I really, really dislike this Mercurial propaganda for two reasons: a) obviously you are right ... git is a complete tool box for building your own tools in the best UNIX™ traditions. Each of us has a ton of third-party (or our own) tools using git plumbing. (Is there a Mercurial equivalent of git-filter-branch? Can http://mercurial.selenic.com/wiki/ConvertExtension do the same as git-filter-branch?) b) it completely ignores the existence of three (3) independent implementations of the git format/protocol (also jgit and libgit2). How does VisualStudio/Eclipse/NetBeans/etc. support for hg work?
Does it use a library, or does it just run the hg binary in a subprocess (a thing which, according to the hg authors, Mercurial is not designed for)? Best, Matěj
From wes.turner at gmail.com Mon Dec 1 17:42:16 2014 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 1 Dec 2014 10:42:16 -0600 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> Message-ID: Here's a roundup of tools links, to make sure we're all on the same page: Git HG Rosetta Stone =================== https://github.com/sympy/sympy/wiki/Git-hg-rosetta-stone#rosetta-stone BugWarrior =========== BugWarrior works with many issue tracker APIs https://warehouse.python.org/project/bugwarrior/ bugwarrior is a command line utility for updating your local taskwarrior > database from your forge issue trackers. > It currently supports the following remote resources: > > - github (api v3) > - bitbucket > - trac > - bugzilla > - megaplan > - teamlab > - redmine > - jira > - activecollab (2.x and 4.x) > - phabricator > > [...]
DVCS Interaction ================ Hg <-> Git ---------------- * https://warehouse.python.org/project/hg-git/ (dulwich) * hg-github https://github.com/stephenmcd/hg-github Git <-> Hg ------------------ * https://pypi.python.org/pypi/git-remote-hg/ * https://github.com/felipec/git-remote-hg Python <-> Hg ----------------------- | Wikipedia: https://en.wikipedia.org/wiki/Mercurial | Homepage: http://hg.selenic.org/ | Docs: http://mercurial.selenic.com/guide | Docs: http://hgbook.red-bean.com/ | Source: hg http://selenic.com/hg | Source: hg http://hg.intevation.org/mercurial/crew * http://evolution.experimentalworks.net/doc/user-guide.html * (growing list of included extensions) Python <-> Git ---------------------- * GitPython, pygit2 (libgit2), dulwich * https://github.com/libgit2/pygit2 (libgit2) * https://pythonhosted.org/GitPython/ (Python) * https://github.com/jelmer/dulwich (Python) * http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#installing-dependencies GitHub -> BitBucket ----------------------------- * https://bitbucket.org/ZyX_I/gibiexport Sphinx Documentation ==================== * http://read-the-docs.readthedocs.org/en/latest/webhooks.html * https://github.com/yoloseem/awesome-sphinxdoc * changelogs, charts, csv, ipython, %doctest_mode Is there an issue ticket or a wiki page that supports Markdown/ReStructuredText, where I could put this? Which URI do we assign to this artifact? On Mon, Dec 1, 2014 at 8:37 AM, Wes Turner wrote: > > > On Mon, Dec 1, 2014 at 12:25 AM, Ethan Furman wrote: > >> One argument that keeps coming up is transferability of knowledge: >> knowing git and/or GitHub, as many seem to, it >> therefore becomes easier to commit to the Python ecosystem. >> >> What about the transferability of Python knowledge? Because I know >> Python, I can customize hg; because I know Python I >> can customize Roundup. 
>> >> I do not choose tools simply because they are written in Python -- I >> choose them because, being written in Python, I can >> work on them if I need to: I can enhance them, I can fix them, I can >> learn from them. >> >> > There are lots of Python tools for working with Git: > > * https://pypi.python.org/pypi/vcs > * https://pypi.python.org/pypi/dulwich > * https://pypi.python.org/pypi/hg-git > * http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html > ("GitFS") > * https://github.com/libgit2/pygit2 (C) > * https://pypi.python.org/pypi/GitPython (Python) > * https://pypi.python.org/pypi/pyrpo (subprocess wrapper for git, hg, > bzr, svn) > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From g.brandl at gmx.net Mon Dec 1 17:57:05 2014 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 01 Dec 2014 17:57:05 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> <8798604C-2E43-4F89-BD6B-C12C9D572FDE@stufft.io> Message-ID: On 12/01/2014 01:05 PM, Matěj Cepl wrote: > On 2014-12-01, 07:43 GMT, Donald Stufft wrote: >>> I do not choose tools simply because they are written in >>> Python -- I choose them because, being written in Python, >>> I can work on them if I need to: I can enhance them, I can >>> fix them, I can learn from them. >> >> Git uses the idea of small individual commands that do small things, >> so you can write your own commands that work on text streams to >> extend git and you can even write those in Python. > > I really, really dislike this Mercurial propaganda for two > reasons: > > a) obviously you are right ... git is a complete tool box for > building your own tools in the best UNIX™ traditions. Each of us > has a ton of third-party (or our own) tools using git > plumbing. (Is there a Mercurial equivalent of > git-filter-branch?
Can > http://mercurial.selenic.com/wiki/ConvertExtension do the > same as git-filter-branch?) > b) it completely ignores the existence of three (3) independent > implementations of the git format/protocol (also jgit and > libgit2). How does VisualStudio/Eclipse/NetBeans/etc. support > for hg work? Does it use a library, or does it just run the hg binary in > a subprocess (a thing which, according to the hg authors, > Mercurial is not designed for)? Please at least try to get your facts right. """ For the vast majority of third party code, the best approach is to use Mercurial's published, documented, and stable API: the command line interface. """ http://mercurial.selenic.com/wiki/MercurialApi Georg
From ethan at stoneleaf.us Mon Dec 1 17:46:20 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 01 Dec 2014 08:46:20 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> Message-ID: <547C9B5C.4080105@stoneleaf.us> On 12/01/2014 08:42 AM, Wes Turner wrote: > > Here's a roundup of tools links, to make sure we're all on the same page: Thanks! -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL:
From jimjjewett at gmail.com Mon Dec 1 18:37:21 2014 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Mon, 01 Dec 2014 09:37:21 -0800 (PST) Subject: [Python-Dev] hg vs Github [was: PEP 481 - Migrate Some Supporting Repositories to Git and Github] In-Reply-To: Message-ID: <547ca751.8524e00a.5a37.ffff8c58@mx.google.com> M. Cepl asked: > What I really don't understand is why this discussion is hg v. > GitHub, when it should be hg v. git. Particular hosting is > a secondary issue I think even the proponents concede that git isn't better enough to justify a switch in repositories.
They do claim that GitHub (the whole environment; not just the hosting) is so much better that a switch to GitHub is justified. Github + hg offers far fewer benefits than Github + git, so also switching to git is part of the price. Whether that is an intolerable markup or a discount is disputed, as are the value of several other costs and benefits. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From fred at fdrake.net Mon Dec 1 18:56:53 2014 From: fred at fdrake.net (Fred Drake) Date: Mon, 1 Dec 2014 12:56:53 -0500 Subject: [Python-Dev] hg vs Github [was: PEP 481 - Migrate Some Supporting Repositories to Git and Github] In-Reply-To: <547ca751.8524e00a.5a37.ffff8c58@mx.google.com> References: <547ca751.8524e00a.5a37.ffff8c58@mx.google.com> Message-ID: On Mon, Dec 1, 2014 at 12:37 PM, Jim J. Jewett wrote: > I think even the proponents concede that git isn't better enough > to justify a switch in repositories. There are also many who find the Bitbucket tools more usable than the Github tools. I'm not aware of any functional differences (though I don't often use Github myself), but the Bitbucket UIs have a much cleaner feel to them. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From demianbrecht at gmail.com Mon Dec 1 19:27:34 2014 From: demianbrecht at gmail.com (Demian Brecht) Date: Mon, 1 Dec 2014 10:27:34 -0800 Subject: [Python-Dev] hg vs Github [was: PEP 481 - Migrate Some Supporting Repositories to Git and Github] In-Reply-To: References: <547ca751.8524e00a.5a37.ffff8c58@mx.google.com> Message-ID: > hg vs Github At best, this is apples to oranges in comparing a DVCS to a platform, or was the intention to change the subject to "hg vs git"? If so, then it's promoting a developer tool war in the same vein as the never ending vim vs emacs and will likely only result in continued dissension. 
IMHO, there's really no point in continuing this discussion past decisions to be made by a select few in the thread discussing PEP 481. On Mon, Dec 1, 2014 at 9:56 AM, Fred Drake wrote: > On Mon, Dec 1, 2014 at 12:37 PM, Jim J. Jewett wrote: >> I think even the proponents concede that git isn't better enough >> to justify a switch in repositories. > > There are also many who find the Bitbucket tools more usable than the > Github tools. > > I'm not aware of any functional differences (though I don't often use > Github myself), but the Bitbucket UIs have a much cleaner feel to > them. > > > -Fred > > -- > Fred L. Drake, Jr. > "A storm broke loose in my mind." --Albert Einstein > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/demianbrecht%40gmail.com -- Demian Brecht https://demianbrecht.github.io https://github.com/demianbrecht From phd at phdru.name Mon Dec 1 21:36:51 2014 From: phd at phdru.name (Oleg Broytman) Date: Mon, 1 Dec 2014 21:36:51 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> Message-ID: <20141201203651.GA15882@phdru.name> Hi! On Mon, Dec 01, 2014 at 10:42:16AM -0600, Wes Turner wrote: > Here's a roundup of tools links, to make sure we're all on the same page: Very nice! > Is there an issue ticket or a wiki page that supports > Markdown/ReStructuredText, > where I could put this? Which URI do we assign to this artifact? There are already pages https://wiki.python.org/moin/Git and https://wiki.python.org/moin/Mercurial . You can create an additional page and reference it on that pages. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. 
From tjreedy at udel.edu Mon Dec 1 21:52:21 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 01 Dec 2014 15:52:21 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> Message-ID: On 12/1/2014 11:42 AM, Wes Turner wrote: > Here's a roundup of tools links, to make sure we're all on the same page: > [...] > Is there an issue ticket or a wiki page that supports https://wiki.python.org/moin/ > Markdown/ReStructuredText, whoops, I am not sure what moin uses. > where I could put this? Which URI do we assign to this artifact?
-- Terry Jan Reedy From phd at phdru.name Mon Dec 1 22:02:00 2014 From: phd at phdru.name (Oleg Broytman) Date: Mon, 1 Dec 2014 22:02:00 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547C09D1.2090201@stoneleaf.us> Message-ID: <20141201210200.GA21023@phdru.name> On Mon, Dec 01, 2014 at 03:52:21PM -0500, Terry Reedy wrote: > On 12/1/2014 11:42 AM, Wes Turner wrote: > >Is there an issue ticket or a wiki page that supports > > https://wiki.python.org/moin/ > > >Markdown/ReStructuredText, > > whoops, I am not sure what moin uses. Let's see... https://wiki.python.org/moin/?action=raw Seems like reST. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From njs at pobox.com Mon Dec 1 22:38:45 2014 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 1 Dec 2014 21:38:45 +0000 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: References: <547B96D8.7050700@hotpy.org> Message-ID: On Mon, Dec 1, 2014 at 4:06 AM, Guido van Rossum wrote: > On Sun, Nov 30, 2014 at 5:42 PM, Nathaniel Smith wrote: >> >> On Mon, Dec 1, 2014 at 1:27 AM, Guido van Rossum wrote: >> > Nathaniel, did you look at Brett's LazyLoader? It overcomes the subclass >> > issue by using a module loader that makes all modules instances of a >> > (trivial) Module subclass. I'm sure this approach can be backported as >> > far >> > as you need to go. >> >> The problem is that by the time your package's code starts running, >> it's too late to install such a loader. 
Brett's strategy works well >> for lazy-loading submodules (e.g., making it so 'import numpy' makes >> 'numpy.testing' available, but without the speed hit of importing it >> immediately), but it doesn't help if you want to actually hook >> attribute access on your top-level package (e.g., making 'numpy.foo' >> trigger a DeprecationWarning -- we have a lot of stupid exported >> constants that we can never get rid of because our rules say that we >> have to deprecate things before removing them). >> >> Or maybe you're suggesting that we define a trivial heap-allocated >> subclass of PyModule_Type and use that everywhere, as a >> quick-and-dirty way to enable __class__ assignment? (E.g., return it >> from PyModule_New?) I considered this before but hesitated b/c it >> could potentially break backwards compatibility -- e.g. if code A >> creates a PyModule_Type object directly without going through >> PyModule_New, and then code B checks whether the resulting object is a >> module by doing isinstance(x, type(sys)), this will break. (type(sys) >> is a pretty common way to get a handle to ModuleType -- in fact both >> types.py and importlib use it.) So in my mind I sorta lumped it in >> with my Option 2, "minor compatibility break". OTOH maybe anyone who >> creates a module object without going through PyModule_New deserves >> whatever they get. > > > Couldn't you install a package loader using some install-time hook? > > Anyway, I still think that the issues with heap types can be overcome. Hm, > didn't you bring that up before here? Was the conclusion that it's > impossible? I've brought it up several times but no-one's really discussed it :-). I finally attempted a deep dive into typeobject.c today myself. I'm not at all sure I understand the intricacies correctly here, but I *think* __class__ assignment could be relatively easily extended to handle non-heap types, and in fact the current restriction to heap types is actually buggy (IIUC). 
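As background for the attribute-hooking use case described above: the workaround available without any interpreter changes is for a package to replace its own entry in sys.modules with an instance of a ModuleType subclass. A rough, runnable sketch of that trick (the package name "fakepkg" and the deprecated constant are hypothetical, not from any real project):

```python
import sys
import types
import warnings


class _DeprecatingModule(types.ModuleType):
    """Module subclass that warns when deprecated attributes are read."""

    # Hypothetical deprecated constants kept only for backwards compatibility.
    _deprecated = {"foo": 42}

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for names
        # not stored in the module's __dict__.
        if name in self._deprecated:
            warnings.warn("{}.{} is deprecated".format(self.__name__, name),
                          DeprecationWarning, stacklevel=2)
            return self._deprecated[name]
        raise AttributeError(name)


# Inside a real package's __init__ this would be
#     sys.modules[__name__] = _DeprecatingModule(__name__)
# Here a standalone fake package is registered to keep the sketch runnable.
sys.modules["fakepkg"] = _DeprecatingModule("fakepkg")
```

Accessing fakepkg.foo then emits a DeprecationWarning while ordinary attributes behave as usual. Note this only hooks *failed* lookups via __getattr__; intercepting all attribute access (or supporting module-level properties) is exactly what requires the __class__ assignment being discussed in this thread.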
object_set_class is responsible for checking whether it's okay to take an object of class "oldto" and convert it to an object of class "newto". Basically its goal is just to avoid crashing the interpreter (as would quickly happen if you e.g. allowed "[].__class__ = dict"). Currently the rules (spread across object_set_class and compatible_for_assignment) are: (1) both oldto and newto have to be heap types (2) they have to have the same tp_dealloc (3) they have to have the same tp_free (4) if you walk up the ->tp_base chain for both types until you find the most-ancestral type that has a compatible struct layout (as checked by equiv_structs), then either (4a) these ancestral types have to be the same, OR (4b) these ancestral types have to have the same tp_base, AND they have to have added the same slots on top of that tp_base (e.g. if you have class A(object): pass and class B(object): pass then they'll both have added a __dict__ slot at the same point in the instance struct, so that's fine; this is checked in same_slots_added). The only place the code assumes that it is dealing with heap types is in (4b) -- same_slots_added unconditionally casts the ancestral types to (PyHeapTypeObject*). AFAICT that's why step (1) is there, to protect this code. But I don't think the check actually works -- step (1) checks that the types we're trying to assign are heap types, but this is no guarantee that the *ancestral* types will be heap types. [Also, the code for __bases__ assignment appears to also call into this code with no heap type checks at all.] E.g., I think if you do

class MyList(list):
    __slots__ = ()

class MyDict(dict):
    __slots__ = ()

MyList().__class__ = MyDict

then you'll end up in same_slots_added casting PyDict_Type and PyList_Type to PyHeapTypeObjects and then following invalid pointers into la-la land.
(The __slots__ = () is to maintain layout compatibility with the base types; if you find builtin types that already have __dict__ and weaklist and HAVE_GC then this example should still work even with perfectly empty subclasses.) Okay, so suppose we move the heap type check (step 1) down into same_slots_added (step 4b), since AFAICT this is actually more correct anyway. This is almost enough to enable __class__ assignment on modules, because the cases we care about will go through the (4a) branch rather than (4b), so the heap type thing is irrelevant. The remaining problem is the requirement that both types have the same tp_dealloc (step 2). ModuleType itself has tp_dealloc == module_dealloc, while all(?) heap types have tp_dealloc == subtype_dealloc. Here again, though, I'm not sure what purpose this check serves. subtype_dealloc basically cleans up extra slots, and then calls the base class tp_dealloc. So AFAICT it's totally fine if oldto->tp_dealloc == module_dealloc, and newto->tp_dealloc == subtype_dealloc, so long as newto is a subtype of oldto -- b/c this means newto->tp_dealloc will end up calling oldto->tp_dealloc anyway. OTOH it's not actually a guarantee of anything useful to see that oldto->tp_dealloc == newto->tp_dealloc == subtype_dealloc, because subtype_dealloc does totally different things depending on the ancestry tree -- MyList and MyDict above pass the tp_dealloc check, even though list.tp_dealloc and dict.tp_dealloc are definitely *not* interchangeable. So I suspect that a more correct way to do this check would be something like

PyTypeObject *old_real_deallocer = oldto, *new_real_deallocer = newto;
while (old_real_deallocer->tp_dealloc == subtype_dealloc)
    old_real_deallocer = old_real_deallocer->tp_base;
while (new_real_deallocer->tp_dealloc == subtype_dealloc)
    new_real_deallocer = new_real_deallocer->tp_base;
if (old_real_deallocer->tp_dealloc != new_real_deallocer->tp_dealloc)
    error out;

Module subclasses would pass this check.
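With hindsight: the restriction being analyzed here was indeed relaxed in CPython 3.5 (issue #22986), so __class__ assignment now works on plain module objects. A minimal sketch, assuming a 3.5-or-later interpreter:

```python
import types


class VerboseModule(types.ModuleType):
    """ModuleType subclass whose instances have a custom repr."""

    def __repr__(self):
        return "<verbose module {!r}>".format(self.__name__)


mod = types.ModuleType("demo")   # an ordinary, non-subclassed module
# On CPython 3.5+ this succeeds; on 3.4 (current when this thread was
# written) it raises TypeError, since ModuleType is not a heap type.
mod.__class__ = VerboseModule
print(repr(mod))   # prints: <verbose module 'demo'>
```

In a real package the same move is done from inside __init__, roughly as sys.modules[__name__].__class__ = VerboseModule, which is what makes properties and full attribute hooking on a package possible.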
Alternatively it might make more sense to add a check in equiv_structs that (child_type->tp_dealloc == subtype_dealloc || child_type->tp_dealloc == parent_type->tp_dealloc); I think that would accomplish the same thing in a somewhat cleaner way. Obviously this code is really subtle though, so don't trust any of the above without review from someone who knows typeobject.c better than me! (Antoine?) -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From francis.giraldeau at gmail.com Mon Dec 1 23:48:24 2014 From: francis.giraldeau at gmail.com (Francis Giraldeau) Date: Mon, 1 Dec 2014 17:48:24 -0500 Subject: [Python-Dev] LTTng-UST support for CPython Message-ID: Here is a working prototype for CPython to record all function calls/returns using LTTng-UST, a fast tracer. https://github.com/giraldeau/python-profile-ust However, there are a few issues and questions: - I was not able to get PyTrace_EXCEPTION using "raise" or other error conditions. How can we trigger this event in Python code (PyTrace_C_EXCEPTION works)? - What would be the best way to get the full name of an object (such as package, module, class, and function)? Maybe it's too Java-ish, and it is better to record file/lineno instead? - On the C-API side: I wrote a horrible and silly function show_type() that runs every Py*_Check() to determine the type of a PyObject *. What would be the sane way to do that? Your comments are very valuable. Thanks! Francis -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Dec 2 01:00:50 2014 From: guido at python.org (Guido van Rossum) Date: Mon, 1 Dec 2014 16:00:50 -0800 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"?
In-Reply-To: References: <547B96D8.7050700@hotpy.org> Message-ID: On Mon, Dec 1, 2014 at 1:38 PM, Nathaniel Smith wrote: > On Mon, Dec 1, 2014 at 4:06 AM, Guido van Rossum wrote: > > On Sun, Nov 30, 2014 at 5:42 PM, Nathaniel Smith wrote: > >> > >> On Mon, Dec 1, 2014 at 1:27 AM, Guido van Rossum > wrote: > >> > Nathaniel, did you look at Brett's LazyLoader? It overcomes the > subclass > >> > issue by using a module loader that makes all modules instances of a > >> > (trivial) Module subclass. I'm sure this approach can be backported as > >> > far > >> > as you need to go. > >> > >> The problem is that by the time your package's code starts running, > >> it's too late to install such a loader. Brett's strategy works well > >> for lazy-loading submodules (e.g., making it so 'import numpy' makes > >> 'numpy.testing' available, but without the speed hit of importing it > >> immediately), but it doesn't help if you want to actually hook > >> attribute access on your top-level package (e.g., making 'numpy.foo' > >> trigger a DeprecationWarning -- we have a lot of stupid exported > >> constants that we can never get rid of because our rules say that we > >> have to deprecate things before removing them). > >> > >> Or maybe you're suggesting that we define a trivial heap-allocated > >> subclass of PyModule_Type and use that everywhere, as a > >> quick-and-dirty way to enable __class__ assignment? (E.g., return it > >> from PyModule_New?) I considered this before but hesitated b/c it > >> could potentially break backwards compatibility -- e.g. if code A > >> creates a PyModule_Type object directly without going through > >> PyModule_New, and then code B checks whether the resulting object is a > >> module by doing isinstance(x, type(sys)), this will break. (type(sys) > >> is a pretty common way to get a handle to ModuleType -- in fact both > >> types.py and importlib use it.) So in my mind I sorta lumped it in > >> with my Option 2, "minor compatibility break". 
OTOH maybe anyone who > >> creates a module object without going through PyModule_New deserves > >> whatever they get. > > > > > > Couldn't you install a package loader using some install-time hook? > > > > Anyway, I still think that the issues with heap types can be overcome. > Hm, > > didn't you bring that up before here? Was the conclusion that it's > > impossible? > > I've brought it up several times but no-one's really discussed it :-). > That's because nobody dares to touch it. (Myself included -- I increased the size of typeobject.c from ~50 to ~5000 lines in a single intense editing session more than a decade ago, and since then it's been basically unmaintainable. :-( > I finally attempted a deep dive into typeobject.c today myself. I'm > not at all sure I understand the intricacies correctly here, but I > *think* __class__ assignment could be relatively easily extended to > handle non-heap types, and in fact the current restriction to heap > types is actually buggy (IIUC). > > object_set_class is responsible for checking whether it's okay to take > an object of class "oldto" and convert it to an object of class > "newto". Basically it's goal is just to avoid crashing the interpreter > (as would quickly happen if you e.g. allowed "[].__class__ = dict"). > Currently the rules (spread across object_set_class and > compatible_for_assignment) are: > > (1) both oldto and newto have to be heap types > (2) they have to have the same tp_dealloc > (3) they have to have the same tp_free > (4) if you walk up the ->tp_base chain for both types until you find > the most-ancestral type that has a compatible struct layout (as > checked by equiv_structs), then either > (4a) these ancestral types have to be the same, OR > (4b) these ancestral types have to have the same tp_base, AND they > have to have added the same slots on top of that tp_base (e.g. 
if you > have class A(object): pass and class B(object): pass then they'll both > have added a __dict__ slot at the same point in the instance struct, > so that's fine; this is checked in same_slots_added). > > The only place the code assumes that it is dealing with heap types is > in (4b) -- same_slots_added unconditionally casts the ancestral types > to (PyHeapTypeObject*). AFAICT that's why step (1) is there, to > protect this code. But I don't think the check actually works -- step > (1) checks that the types we're trying to assign are heap types, but > this is no guarantee that the *ancestral* types will be heap types. > [Also, the code for __bases__ assignment appears to also call into > this code with no heap type checks at all.] E.g., I think if you do > > class MyList(list): > __slots__ = () > > class MyDict(dict): > __slots__ = () > > MyList().__class__ = MyDict() > > then you'll end up in same_slots_added casting PyDict_Type and > PyList_Type to PyHeapTypeObjects and then following invalid pointers > into la-la land. (The __slots__ = () is to maintain layout > compatibility with the base types; if you find builtin types that > already have __dict__ and weaklist and HAVE_GC then this example > should still work even with perfectly empty subclasses.) > Have you filed this as a bug? I believe nobody has discovered this problem before. I've confirmed it as far back as 2.5 (I don't have anything older installed). > Okay, so suppose we move the heap type check (step 1) down into > same_slots_added (step 4b), since AFAICT this is actually more correct > anyway. This is almost enough to enable __class__ assignment on > modules, because the cases we care about will go through the (4a) > branch rather than (4b), so the heap type thing is irrelevant. > > The remaining problem is the requirement that both types have the same > tp_dealloc (step 2). ModuleType itself has tp_dealloc == > module_dealloc, while all(?) heap types have tp_dealloc == > subtype_dealloc. 
Yeah, I can't see a way that type_new() can create a type whose tp_dealloc isn't subtype_dealloc. > Here again, though, I'm not sure what purpose this > check serves. subtype_dealloc basically cleans up extra slots, and > then calls the base class tp_dealloc. So AFAICT it's totally fine if > oldto->tp_dealloc == module_dealloc, and newto->tp_dealloc == > subtype_dealloc, so long as newto is a subtype of oldto -- b/c this > means newto->tp_dealloc will end up calling oldto->tp_dealloc anyway. > I guess the simple check is an upper bound (or whatever that's called -- my math-speak is rusty ;-) for the necessary-and-sufficient check that you're describing. > OTOH it's not actually a guarantee of anything useful to see that > oldto->tp_dealloc == newto->tp_dealloc == subtype_dealloc, because > subtype_dealloc does totally different things depending on the > ancestry tree -- MyList and MyDict above pass the tp_dealloc check, > even though list.tp_dealloc and dict.tp_dealloc are definitely *not* > interchangeable. > > So I suspect that a more correct way to do this check would be something > like > > PyTypeObject *old__real_deallocer = oldto, *new_real_deallocer = newto; > while (old_real_deallocer->tp_dealloc == subtype_dealloc) > old_real_deallocer = old_real_deallocer->tp_base; > while (new_real_deallocer->tp_dealloc == subtype_dealloc) > new_real_deallocer = new_real_deallocer->tp_base; > if (old_real_deallocer->tp_dealloc != new_real_deallocer) > error out; > I'm not set up to disagree with you on this any more... > Module subclasses would pass this check. Alternatively it might make > more sense to add a check in equiv_structs that > (child_type->tp_dealloc == subtype_dealloc || child_type->tp_dealloc > == parent_type->tp_dealloc); I think that would accomplish the same > thing in a somewhat cleaner way. > > Obviously this code is really subtle though, so don't trust any of the > above without review from someone who knows typeobject.c better than > me! (Antoine?) 
> Or Benjamin? -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.belopolsky at gmail.com Tue Dec 2 02:33:53 2014 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 1 Dec 2014 20:33:53 -0500 Subject: [Python-Dev] LTTng-UST support for CPython In-Reply-To: References: Message-ID: On Mon, Dec 1, 2014 at 5:48 PM, Francis Giraldeau < francis.giraldeau at gmail.com> wrote: > - On the C-API side: I did a horrible and silly function show_type() to > run every Py*_Check() to determine the type of a PyObject *. What would be > the sane way to do that? Questions like this are better asked on a users' forum, but you can get the type name from a Python object as follows: Py_TYPE(obj)->tp_name -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Dec 2 03:32:44 2014 From: guido at python.org (Guido van Rossum) Date: Mon, 1 Dec 2014 18:32:44 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <17CA895B-DDCA-44B7-A94E-0B74765CA7AD@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <17CA895B-DDCA-44B7-A94E-0B74765CA7AD@stufft.io> Message-ID: On Sun, Nov 30, 2014 at 10:25 AM, Donald Stufft wrote: > > On Nov 30, 2014, at 1:05 PM, Guido van Rossum wrote: > > I don't feel it's my job to accept or reject this PEP, but I do have an > opinion. > > > So here's a question. If it's not your job to accept or reject this PEP, > whose is it? This is probably an issue we're never going to get actual > consensus on so unless there is an arbitrator of who gets to decide I feel > it's probably a waste of my time to try and convince absolutely *everyone*. > I saved this question.
I still don't know who should accept or reject the PEP. I tried to get out of it by asking Brett for the two repos he "owns", but he hasn't stated his preference (though he did acknowledge the responsibility). If it were really up to me I'd switch all "minor" repos to GitHub, but I feel I've run into sufficient opposition (most vocally from Nick) that I think "status quo wins" applies. I think Nick previously wanted to switch to BitBucket -- if he hasn't hardened his position I say we should do that. But if he no longer wants that, I have stopped caring after the 200th message. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Dec 2 10:19:27 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 2 Dec 2014 10:19:27 +0100 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? References: <547B96D8.7050700@hotpy.org> Message-ID: <20141202101927.1b4a5bf8@fsol> On Mon, 1 Dec 2014 21:38:45 +0000 Nathaniel Smith wrote: > > object_set_class is responsible for checking whether it's okay to take > an object of class "oldto" and convert it to an object of class > "newto". Basically it's goal is just to avoid crashing the interpreter > (as would quickly happen if you e.g. allowed "[].__class__ = dict"). > Currently the rules (spread across object_set_class and > compatible_for_assignment) are: > > (1) both oldto and newto have to be heap types > (2) they have to have the same tp_dealloc > (3) they have to have the same tp_free > (4) if you walk up the ->tp_base chain for both types until you find > the most-ancestral type that has a compatible struct layout (as > checked by equiv_structs), then either > (4a) these ancestral types have to be the same, OR > (4b) these ancestral types have to have the same tp_base, AND they > have to have added the same slots on top of that tp_base (e.g. 
if you > have class A(object): pass and class B(object): pass then they'll both > have added a __dict__ slot at the same point in the instance struct, > so that's fine; this is checked in same_slots_added). > > The only place the code assumes that it is dealing with heap types is > in (4b) I'm not sure. Many operations are standardized on heap types that can have arbitrary definitions on static types (I'm talking about the tp_ methods). You'd have to review them to double check. For example, a heap type's tp_new increments the type's refcount, so you have to adjust the instance refcount if you cast it from a non-heap type to a heap type, and vice-versa (see slot_tp_new()). (this raises the interesting question "what happens if you assign to __class__ from a __del__ method?") > -- same_slots_added unconditionally casts the ancestral types > to (PyHeapTypeObject*). AFAICT that's why step (1) is there, to > protect this code. But I don't think the check actually works -- step > (1) checks that the types we're trying to assign are heap types, but > this is no guarantee that the *ancestral* types will be heap types. > [Also, the code for __bases__ assignment appears to also call into > this code with no heap type checks at all.] E.g., I think if you do > > class MyList(list): > __slots__ = () > > class MyDict(dict): > __slots__ = () > > MyList().__class__ = MyDict() > > then you'll end up in same_slots_added casting PyDict_Type and > PyList_Type to PyHeapTypeObjects and then following invalid pointers > into la-la land. (The __slots__ = () is to maintain layout > compatibility with the base types; if you find builtin types that > already have __dict__ and weaklist and HAVE_GC then this example > should still work even with perfectly empty subclasses.) > > Okay, so suppose we move the heap type check (step 1) down into > same_slots_added (step 4b), since AFAICT this is actually more correct > anyway. 
This is almost enough to enable __class__ assignment on > modules, because the cases we care about will go through the (4a) > branch rather than (4b), so the heap type thing is irrelevant. > > The remaining problem is the requirement that both types have the same > tp_dealloc (step 2). ModuleType itself has tp_dealloc == > module_dealloc, while all(?) heap types have tp_dealloc == > subtype_dealloc. Here again, though, I'm not sure what purpose this > check serves. subtype_dealloc basically cleans up extra slots, and > then calls the base class tp_dealloc. So AFAICT it's totally fine if > oldto->tp_dealloc == module_dealloc, and newto->tp_dealloc == > subtype_dealloc, so long as newto is a subtype of oldto -- b/c this > means newto->tp_dealloc will end up calling oldto->tp_dealloc anyway. > OTOH it's not actually a guarantee of anything useful to see that > oldto->tp_dealloc == newto->tp_dealloc == subtype_dealloc, because > subtype_dealloc does totally different things depending on the > ancestry tree -- MyList and MyDict above pass the tp_dealloc check, > even though list.tp_dealloc and dict.tp_dealloc are definitely *not* > interchangeable. > > So I suspect that a more correct way to do this check would be something like > > PyTypeObject *old__real_deallocer = oldto, *new_real_deallocer = newto; > while (old_real_deallocer->tp_dealloc == subtype_dealloc) > old_real_deallocer = old_real_deallocer->tp_base; > while (new_real_deallocer->tp_dealloc == subtype_dealloc) > new_real_deallocer = new_real_deallocer->tp_base; > if (old_real_deallocer->tp_dealloc != new_real_deallocer) > error out; Sounds good. > Module subclasses would pass this check. Alternatively it might make > more sense to add a check in equiv_structs that > (child_type->tp_dealloc == subtype_dealloc || child_type->tp_dealloc > == parent_type->tp_dealloc); I think that would accomplish the same > thing in a somewhat cleaner way. There's no "child" and "parent" types in equiv_structs(). 
Regards Antoine. From ncoghlan at gmail.com Tue Dec 2 14:24:13 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 2 Dec 2014 23:24:13 +1000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On 2 December 2014 at 01:38, Guido van Rossum wrote: > As far as I'm concerned I'm just waiting for your decision now. The RhodeCode team got in touch with me offline to suggest the possibility of using RhodeCode Enterprise as a self-hosted solution rather than a volunteer-supported installation of Kallithea. I'll be talking to them tomorrow, and if that discussion goes well, will update PEP 474 (and potentially PEP 462) accordingly. Given that that would take away the "volunteer supported" vs "commercially supported" distinction between self-hosting and using GitHub (as well as potentially building a useful relationship that may help us resolve other workflow issues in the future), I'd like us to hold off on any significant decisions regarding the fate of any of the repos until I've had a chance to incorporate the results of that discussion into my proposals. As described in PEP 474, I'm aware of the Mercurial team's concerns with RhodeCode's current licensing, but still consider it a superior alternative to an outright proprietary solution that doesn't get us any closer to solving the workflow problems with the main CPython repo. Regards, Nick. P.S. 
I'll also bring up some of the RFEs raised in this discussion around making it possible for folks to submit pull requests via GitHub/BitBucket, even if the master repositories are hosted on PSF infrastructure. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jeremy.kloth at gmail.com Tue Dec 2 14:44:01 2014 From: jeremy.kloth at gmail.com (Jeremy Kloth) Date: Tue, 2 Dec 2014 06:44:01 -0700 Subject: [Python-Dev] [Python-checkins] cpython (3.4): - Issue #22966: Fix __pycache__ pyc file name clobber when pyc_compile is In-Reply-To: <20141201231739.116310.85158@psf.io> References: <20141201231739.116310.85158@psf.io> Message-ID: On Mon, Dec 1, 2014 at 4:17 PM, barry.warsaw wrote: > summary: > - Issue #22966: Fix __pycache__ pyc file name clobber when pyc_compile is > asked to compile a source file containing multiple dots in the source file > name. > > diff --git a/Lib/test/test_py_compile.py b/Lib/test/test_py_compile.py > --- a/Lib/test/test_py_compile.py > +++ b/Lib/test/test_py_compile.py > @@ -99,5 +99,21 @@ > self.assertFalse(os.path.exists( > importlib.util.cache_from_source(bad_coding))) > > + def test_double_dot_no_clobber(self): > + # http://bugs.python.org/issue22966 > + # py_compile foo.bar.py -> __pycache__/foo.cpython-34.pyc > + weird_path = os.path.join(self.directory, 'foo.bar.py') > + cache_path = importlib.util.cache_from_source(weird_path) > + pyc_path = weird_path + 'c' > + self.assertEqual( > + '/'.join(cache_path.split('/')[-2:]), > + '__pycache__/foo.bar.cpython-34.pyc') > + with open(weird_path, 'w') as file: > + file.write('x = 123\n') > + py_compile.compile(weird_path) > + self.assertTrue(os.path.exists(cache_path)) > + self.assertFalse(os.path.exists(pyc_path)) > + > + This test is failing on the Windows buildbots due to the hard-coded path separator. Using `os.pathsep` should work assuming that importlib returns normalized paths. 
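For the record, the portable spelling of that comparison uses os.path functions (or os.sep) rather than a hard-coded '/'; os.pathsep is actually the separator for PATH-like lists (':' or ';'), not for path components. A small sketch of the platform-safe check (the "pkg" directory is hypothetical; cache_from_source does not need the file to exist):

```python
import importlib.util
import os
import sys

# Hypothetical source file with a dot in its stem, as in the test case.
weird_path = os.path.join("pkg", "foo.bar.py")
cache_path = importlib.util.cache_from_source(weird_path)

# Compare only the last two path components, using os.path functions
# so the assertion holds on Windows as well as POSIX.
head, tail = os.path.split(cache_path)
expected_tail = "foo.bar.{}.pyc".format(sys.implementation.cache_tag)
assert os.path.basename(head) == "__pycache__"
assert tail == expected_tail
```

This mirrors the os.path.split/os.path.basename approach in the follow-up patch below.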
-- Jeremy Kloth From barry at python.org Tue Dec 2 17:28:49 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 2 Dec 2014 11:28:49 -0500 Subject: [Python-Dev] [Python-checkins] cpython (3.4): - Issue #22966: Fix __pycache__ pyc file name clobber when pyc_compile is In-Reply-To: References: <20141201231739.116310.85158@psf.io> Message-ID: <20141202112849.247f9bc6@anarchist.wooz.org> On Dec 02, 2014, at 06:44 AM, Jeremy Kloth wrote: >This test is failing on the Windows buildbots due to the hard-coded >path separator. Using `os.pathsep` should work assuming that >importlib returns normalized paths. Good catch, thanks, however os.path would be the one to use. Here's the patch that should fix it. This passes for me on Ubuntu, but I don't have a Windows machine to do a test build on atm, so I'll just commit this and see how the buildbots handle it. diff -r 8badbd65840e Lib/test/test_py_compile.py --- a/Lib/test/test_py_compile.py Tue Dec 02 09:24:06 2014 +0200 +++ b/Lib/test/test_py_compile.py Tue Dec 02 11:27:16 2014 -0500 @@ -106,9 +106,13 @@ weird_path = os.path.join(self.directory, 'foo.bar.py') cache_path = importlib.util.cache_from_source(weird_path) pyc_path = weird_path + 'c' + head, tail = os.path.split(cache_path) + penultimate_tail = os.path.basename(head) self.assertEqual( - '/'.join(cache_path.split('/')[-2:]), - '__pycache__/foo.bar.{}.pyc'.format(sys.implementation.cache_tag)) + os.path.join(penultimate_tail, tail), + os.path.join( + '__pycache__', + 'foo.bar.{}.pyc'.format(sys.implementation.cache_tag))) with open(weird_path, 'w') as file: file.write('x = 123\n') py_compile.compile(weird_path) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From brett at python.org Tue Dec 2 17:50:29 2014 From: brett at python.org (Brett Cannon) Date: Tue, 02 Dec 2014 16:50:29 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: So I was waiting for Nick to say what he wanted to do for the peps repo since I view it as I get 2/3 of the choices and he gets the other third. The way I view it, the options are: 1. Move to GitHub 2. Move to Bitbucket 3. Improve our current tooling (either through a new hosting setup and/or adding first-class support for downloading PRs from GitHub/Bitbucket) Regardless of what we do, I think we should graduate the mirrors on GitHub and Bitbucket to "official" -- for the proposed repos and cpython -- and get their repos updating per-push instead of as a cron job. I also think we should flip on any CI we can (e.g. turn on Travis for GitHub along with coveralls support using coverage.py's encodings trick). This will get us the most accessible repo backups as well as the widest tool coverage for contributors to assist them in their contributions (heck, even if we just get regular coverage reports for Python that would be a great win out of all of this). Now as for whether we should move the repos, I see two possibilities to help make that decision.
One is we end up with 3 PEPs corresponding to the 3 proposals outlined above, get them done before PyCon, and then we have a discussion at the language summit where we can either make a decision or see what the pulse is at the conference and sprints and then make a decision shortly thereafter (I can moderate the summit discussion to keep this on-task and minimize the rambling; if Guido wants I can even make the final call since I have already played the role of "villain" for our issue tracker and hg decisions). The other option is we take each one of the 3 proposed repos and pilot/experiment with them on a different platform. I would put peps on GitHub (as per Guido's comment about getting PRs from there already), the devguide on Bitbucket, and leave devinabox on hg.python.org but with the motivation of getting better tooling in place to contribute to it. We can then see if anything changes between now and PyCon and then discuss what occurred there (if we can't get the word out about this experiment and get new tooling up and going on the issue tracker in the next 4 months then that's another data point about how much people do/don't care about any of this). Obviously if we end up needing more time we don't *have* to make a decision at PyCon, but it's a good goal to have. I don't think we can cleanly replicate a single repo on all three solutions as I sure don't want to deal with that merging fun (unless someone comes forward to be basically a "release manager" for one of the repos to make that experiment happen). So do people want PEPs or experimentation first? On Tue Dec 02 2014 at 8:24:16 AM Nick Coghlan wrote: > On 2 December 2014 at 01:38, Guido van Rossum wrote: > > As far as I'm concerned I'm just waiting for your decision now. > > The RhodeCode team got in touch with me offline to suggest the > possibility of using RhodeCode Enterprise as a self-hosted solution > rather than a volunteer-supported installation of Kallithea.
I'll be > talking to them tomorrow, and if that discussion goes well, will > update PEP 474 (and potentially PEP 462) accordingly. > > Given that that would take away the "volunteer supported" vs > "commercially supported" distinction between self-hosting and using > GitHub (as well as potentially building a useful relationship that may > help us resolve other workflow issues in the future), I'd like us to > hold off on any significant decisions regarding the fate of any of the > repos until I've had a chance to incorporate the results of that > discussion into my proposals. > > As described in PEP 474, I'm aware of the Mercurial team's concerns > with RhodeCode's current licensing, but still consider it a superior > alternative to an outright proprietary solution that doesn't get us > any closer to solving the workflow problems with the main CPython > repo. > > Regards, > Nick. > > P.S. I'll also bring up some of the RFEs raised in this discussion > around making it possible for folks to submit pull requests via > GitHub/BitBucket, even if the master repositories are hosted on PSF > infrastructure. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tseaver at palladion.com Tue Dec 2 18:23:11 2014 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 02 Dec 2014 12:23:11 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 12/02/2014 11:50 AM, Brett Cannon wrote: > So do people want PEPs or experimentation first? I'd vote for experimentation, to ground the discussion in actual practice. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iEYEARECAAYFAlR99X8ACgkQ+gerLs4ltQ7dpACgsGq7Rii7seJXHCOVMUymbOdL 2KQAn3qcOGWynKU4rd/H39hpBxwSsbk9 =93kJ -----END PGP SIGNATURE----- From demianbrecht at gmail.com Tue Dec 2 18:39:58 2014 From: demianbrecht at gmail.com (Demian Brecht) Date: Tue, 2 Dec 2014 09:39:58 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On Tue, Dec 2, 2014 at 9:23 AM, Tres Seaver wrote: > I'd vote for experimentation, to ground the discussion in actual 
practice. +1. There may be a number of practical gotchas that very well might not surface in PEPs and should be documented and planned for. Likewise with benefits. From guido at python.org Tue Dec 2 19:04:54 2014 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Dec 2014 10:04:54 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: Thanks for taking charge, Brett. I personally think this shouldn't be brought up at the summit -- it's likely to just cause lots of heat about git vs. hg, free vs. not-free, "loyalty" to free or open tools, the weighing of core committers' preferences vs. outside contributors' preferences, GitHub's diversity track record, with no new information added. Even if we *just* had a vote by show-of-hands at the summit that would just upset those who couldn't be present. But I'll leave that up to you. The only thing I ask you is not to give me the last word. I might just do something you regret. :-) --Guido On Tue, Dec 2, 2014 at 8:50 AM, Brett Cannon wrote: > So I was waiting for Nick to say what he wanted to do for the peps repo > since I view it as I get 2/3 of the choices and he gets the other third. > > The way I view it, the options are: > > 1. Move to GitHub > 2. Move to Bitbucket > 3. 
Improve our current tooling (either through new hosting setup > and/or adding first-world support for downloading PRs from GitHub/Bitbucket) > > Regardless of what we do, I think we should graduate the mirrors on GitHub > and Bitbucket to "official" -- for the proposed repos and cpython -- and > get their repos updating per-push instead of as a cron job. I also think we > should also flip on any CI we can (e.g. turn on Travis for GitHub along > with coveralls support using coverage.py's encodings trick > ). This > will get us the most accessible repo backups as well as the widest tool > coverage for contributors to assist them in their contributions (heck, even > if we just get regular coverage reports for Python that would be a great > win out of all of this). > > Now as for whether we should move the repos, I see two possibilities to > help make that decision. One is we end up with 3 PEPs corresponding to the > 3 proposals outlined above, get them done before PyCon, and then we have a > discussion at the language summit where we can either make a decision or > see what the pulse at the conference and sprints then make a decision > shortly thereafter (I can moderate the summit discussion to keep this > on-task and minimize the rambling; if Guido wants I can even make the final > call since I have already played the role of "villain" for our issue > tracker and hg decisions). > > The other option is we take each one of the 3 proposed repos and > pilot/experiment with them on a different platform. I would put peps on > GitHub (as per Guido's comment of getting PRs from there already), the > devguide on Bitbucket, and leave devinabox on hg.python.org but with the > motivation of getting better tooling in place to contribute to it. 
We can > then see if anything changes between now and PyCon and then discuss what > occurred there (if we can't get the word out about this experiment and get > new tooling up and going on the issue tracker in the next 4 months then > that's another data point about how much people do/don't care about any of > this). Obviously if we end up needing more time we don't *have* to make a > decision at PyCon, but it's a good goal to have. I don't think we can > cleanly replicate a single repo on all three solutions as I sure don't want > to deal with that merging fun (unless someone comes forward to be basically > a "release manager" for one of the repos to make that experiment happen). > > So do people want PEPs or experimentation first? > > On Tue Dec 02 2014 at 8:24:16 AM Nick Coghlan wrote: > >> On 2 December 2014 at 01:38, Guido van Rossum wrote: >> > As far as I'm concerned I'm just waiting for your decision now. >> >> The RhodeCode team got in touch with me offline to suggest the >> possibility of using RhodeCode Enterprise as a self-hosted solution >> rather than a volunteer-supported installation of Kallithea. I'll be >> talking to them tomorrow, and if that discussion goes well, will >> update PEP 474 (and potentially PEP 462) accordingly. >> >> Given that that would take away the "volunteer supported" vs >> "commercially supported" distinction between self-hosting and using >> GitHub (as well as potentially building a useful relationship that may >> help us resolve other workflow issues in the future), I'd like us to >> hold off on any significant decisions regarding the fate of any of the >> repos until I've had a chance to incorporate the results of that >> discussion into my proposals. 
>> >> As described in PEP 474, I'm aware of the Mercurial team's concerns >> with RhodeCode's current licensing, but still consider it a superior >> alternative to an outright proprietary solution that doesn't get us >> any closer to solving the workflow problems with the main CPython >> repo. >> >> Regards, >> Nick. >> >> P.S. I'll also bring up some of the RFEs raised in this discussion >> around making it possible for folks to submit pull requests via >> GitHub/BitBucket, even if the master repositories are hosted on PSF >> infrastructure. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Dec 2 19:21:39 2014 From: brett at python.org (Brett Cannon) Date: Tue, 02 Dec 2014 18:21:39 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On Tue Dec 02 2014 at 1:05:22 PM Guido van Rossum wrote: > Thanks for taking charge, Brett. > > I personally think this shouldn't be brought up at the summit -- it's > likely to just cause lots of heat about git vs. hg, free vs. not-free, > "loyalty" to free or open tools, the weighing of core committers' > preferences vs. outside contributors' preferences, GitHub's diversity track > record, with no new information added. Even if we *just* had a vote by > show-of-hands at the summit that would just upset those who couldn't be > present. 
> Well, if I'm going to be the Great Decider on this then I can say upfront I'm taking a pragmatic view of preferring open but not mandating it, preferring hg over git but not ruling out a switch, preferring Python-based tools but not viewing it as a negative to not use Python, etc. I would like to think I have earned somewhat of a reputation of being level-headed and so none of this should really be a surprise to anyone. So if we did have a discussion at the summit and someone decided to argue for FLOSS vs. not as a key factor then I would politely cut them off and say that doesn't matter to me and move on. As I said, I would moderate the conversation to keep it on-task and not waste my time with points that have already been made and flagged by me and you as not deal-breakers. And any votes would be to gauge the feeling of the room and not as a binding decision; I assume either me or someone else is going to be the dictator on this and this won't be a majority decision. > > But I'll leave that up to you. The only thing I ask you is not to give me > the last word. I might just do something you regret. :-) > What about me doing something that *I* regret like taking this on? =) -Brett > > --Guido > > On Tue, Dec 2, 2014 at 8:50 AM, Brett Cannon wrote: > >> So I was waiting for Nick to say what he wanted to do for the peps repo >> since I view it as I get 2/3 of the choices and he gets the other third. >> >> The way I view it, the options are: >> >> 1. Move to GitHub >> 2. Move to Bitbucket >> 3. Improve our current tooling (either through new hosting setup >> and/or adding first-world support for downloading PRs from GitHub/Bitbucket) >> >> Regardless of what we do, I think we should graduate the mirrors on >> GitHub and Bitbucket to "official" -- for the proposed repos and cpython -- >> and get their repos updating per-push instead of as a cron job. I also >> think we should also flip on any CI we can (e.g. 
turn on Travis for GitHub >> along with coveralls support using coverage.py's encodings trick >> ). This >> will get us the most accessible repo backups as well as the widest tool >> coverage for contributors to assist them in their contributions (heck, even >> if we just get regular coverage reports for Python that would be a great >> win out of all of this). >> >> Now as for whether we should move the repos, I see two possibilities to >> help make that decision. One is we end up with 3 PEPs corresponding to the >> 3 proposals outlined above, get them done before PyCon, and then we have a >> discussion at the language summit where we can either make a decision or >> see what the pulse at the conference and sprints then make a decision >> shortly thereafter (I can moderate the summit discussion to keep this >> on-task and minimize the rambling; if Guido wants I can even make the final >> call since I have already played the role of "villain" for our issue >> tracker and hg decisions). >> >> The other option is we take each one of the 3 proposed repos and >> pilot/experiment with them on a different platform. I would put peps on >> GitHub (as per Guido's comment of getting PRs from there already), the >> devguide on Bitbucket, and leave devinabox on hg.python.org but with the >> motivation of getting better tooling in place to contribute to it. We can >> then see if anything changes between now and PyCon and then discuss what >> occurred there (if we can't get the word out about this experiment and get >> new tooling up and going on the issue tracker in the next 4 months then >> that's another data point about how much people do/don't care about any of >> this). Obviously if we end up needing more time we don't *have* to make >> a decision at PyCon, but it's a good goal to have. 
I don't think we can >> cleanly replicate a single repo on all three solutions as I sure don't want >> to deal with that merging fun (unless someone comes forward to be basically >> a "release manager" for one of the repos to make that experiment happen). >> >> So do people want PEPs or experimentation first? >> >> On Tue Dec 02 2014 at 8:24:16 AM Nick Coghlan wrote: >> >>> On 2 December 2014 at 01:38, Guido van Rossum wrote: >>> > As far as I'm concerned I'm just waiting for your decision now. >>> >>> The RhodeCode team got in touch with me offline to suggest the >>> possibility of using RhodeCode Enterprise as a self-hosted solution >>> rather than a volunteer-supported installation of Kallithea. I'll be >>> talking to them tomorrow, and if that discussion goes well, will >>> update PEP 474 (and potentially PEP 462) accordingly. >>> >>> Given that that would take away the "volunteer supported" vs >>> "commercially supported" distinction between self-hosting and using >>> GitHub (as well as potentially building a useful relationship that may >>> help us resolve other workflow issues in the future), I'd like us to >>> hold off on any significant decisions regarding the fate of any of the >>> repos until I've had a chance to incorporate the results of that >>> discussion into my proposals. >>> >>> As described in PEP 474, I'm aware of the Mercurial team's concerns >>> with RhodeCode's current licensing, but still consider it a superior >>> alternative to an outright proprietary solution that doesn't get us >>> any closer to solving the workflow problems with the main CPython >>> repo. >>> >>> Regards, >>> Nick. >>> >>> P.S. I'll also bring up some of the RFEs raised in this discussion >>> around making it possible for folks to submit pull requests via >>> GitHub/BitBucket, even if the master repositories are hosted on PSF >>> infrastructure. 
>>> >>> -- >>> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >>> >> > > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Dec 2 19:26:35 2014 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Dec 2014 10:26:35 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On Tue, Dec 2, 2014 at 10:21 AM, Brett Cannon wrote: > > > On Tue Dec 02 2014 at 1:05:22 PM Guido van Rossum > wrote: > >> Thanks for taking charge, Brett. >> >> I personally think this shouldn't be brought up at the summit -- it's >> likely to just cause lots of heat about git vs. hg, free vs. not-free, >> "loyalty" to free or open tools, the weighing of core committers' >> preferences vs. outside contributors' preferences, GitHub's diversity track >> record, with no new information added. Even if we *just* had a vote by >> show-of-hands at the summit that would just upset those who couldn't be >> present. >> > > Well, if I'm going to be the Great Decider on this then I can say upfront > I'm taking a pragmatic view of preferring open but not mandating it, > preferring hg over git but not ruling out a switch, preferring Python-based > tools but not viewing it as a negative to not use Python, etc. I would like > to think I have earned somewhat of a reputation of being level-headed and > so none of this should really be a surprise to anyone. 
> > So if we did have a discussion at the summit and someone decided to argue > for FLOSS vs. not as a key factor then I would politely cut them off and > say that doesn't matter to me and move on. As I said, I would moderate the > conversation to keep it on-task and not waste my time with points that have > already been made and flagged by me and you as not deal-breakers. And any > votes would be to gauge the feeling of the room and not as a binding > decision; I assume either me or someone else is going to be the dictator on > this and this won't be a majority decision. > > >> >> But I'll leave that up to you. The only thing I ask you is not to give me >> the last word. I might just do something you regret. :-) >> > > What about me doing something that *I* regret like taking this on? =) > I trust you more than myself in this issue, Brett. You'll do fine. I may just leave the room while it's being discussed. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Dec 2 19:52:11 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 2 Dec 2014 19:52:11 +0100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: <20141202195211.77be1d74@fsol> On Tue, 02 Dec 2014 18:21:39 +0000 Brett Cannon wrote: > > So if we did have a discussion at the summit and someone decided to argue > for FLOSS vs. not as a key factor then I would politely cut them off and > say that doesn't matter to me and move on. 
As I said, I would moderate the > conversation to keep it on-task and not waste my time with points that have > already been made and flagged by me and you as not deal-breakers. And any > votes would be to gauge the feeling of the room and not as a binding > decision; I assume either me or someone else is going to be the dictator on > this and this won't be a majority decision. Can we stop making decisions at summits where it's always the same people being present? Thanks Antoine. From barry at python.org Tue Dec 2 19:58:44 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 2 Dec 2014 13:58:44 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: <20141202135844.3216b148@anarchist.wooz.org> On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote: >Well, if I'm going to be the Great Decider on this then I can say upfront >I'm taking a pragmatic view of preferring open but not mandating it, >preferring hg over git but not ruling out a switch, preferring Python-based >tools but not viewing it as a negative to not use Python, etc. I would like >to think I have earned somewhat of a reputation of being level-headed and >so none of this should really be a surprise to anyone. I think it's equally important to describe what criteria you will use to make this decision. E.g. are you saying all these above points will be completely ignored, or all else being equal, they will help tip the balance? 
Cheers, -Barry From brett at python.org Tue Dec 2 19:59:54 2014 From: brett at python.org (Brett Cannon) Date: Tue, 02 Dec 2014 18:59:54 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202195211.77be1d74@fsol> Message-ID: On Tue Dec 02 2014 at 1:52:49 PM Antoine Pitrou wrote: > On Tue, 02 Dec 2014 18:21:39 +0000 > Brett Cannon wrote: > > > > So if we did have a discussion at the summit and someone decided to argue > > for FLOSS vs. not as a key factor then I would politely cut them off and > > say that doesn't matter to me and move on. As I said, I would moderate > the > > conversation to keep it on-task and not waste my time with points that > have > > already been made and flagged by me and you as not deal-breakers. And any > > votes would be to gauge the feeling of the room and not as a binding > > decision; I assume either me or someone else is going to be the dictator > on > > this and this won't be a majority decision. > > Can we stop making decisions at summits where it's always the same > people being present? > I already said I'm not going to make a decision there, but you have to admit having an in-person discussion is a heck of a lot easier than going back and forth in email and so I'm not willing to rule out at least talking about the topic at PyCon. I wouldn't hold it against a BDFAP talking about something at EuroPython and happening to make a decision while there and so I would expect the same courtesy. -Brett > > Thanks > > Antoine. 
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Dec 2 20:09:21 2014 From: brett at python.org (Brett Cannon) Date: Tue, 02 Dec 2014 19:09:21 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202135844.3216b148@anarchist.wooz.org> Message-ID: On Tue Dec 02 2014 at 1:59:20 PM Barry Warsaw wrote: > On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote: > > >Well, if I'm going to be the Great Decider on this then I can say upfront > >I'm taking a pragmatic view of preferring open but not mandating it, > >preferring hg over git but not ruling out a switch, preferring > Python-based > >tools but not viewing it as a negative to not use Python, etc. I would > like > >to think I have earned somewhat of a reputation of being level-headed and > >so none of this should really be a surprise to anyone. > > I think it's equally important to describe what criteria you will use to > make > this decision. E.g. are you saying all these above points will be > completely > ignored, or all else being equal, they will help tip the balance? > Considering Guido just gave me this position I have not exactly had a ton of time to think the intricacies out, but they are all positives and can help tip the balance or break ties (I purposely worded all of that with "prefer", etc.). 
For instance, if a FLOSS solution came forward that looked to be good and close enough to what would be a good workflow along with support commitments from the infrastructure team and folks to maintain the code -- and this will have to be several people, as experience with the issue tracker has shown -- then that can help tip over the closed-source, hosted solution which might have some perks. As for Python over something else, that comes into play in open source more from a maintenance perspective, but for closed source it would be a tie-breaker only since it doesn't exactly influence the usability of the closed-source solution like it does an open-source one. Basically I'm willing to give brownie points for open source and Python stuff, but it is just that: points and not deal-breakers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Dec 2 20:15:07 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 2 Dec 2014 14:15:07 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202135844.3216b148@anarchist.wooz.org> Message-ID: <5E0BD252-A743-4C94-BA34-48D60032B7FE@stufft.io> > On Dec 2, 2014, at 2:09 PM, Brett Cannon wrote: > > > > On Tue Dec 02 2014 at 1:59:20 PM Barry Warsaw > wrote: > On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote: > > >Well, if I'm going to be the Great Decider on this then I can say upfront > >I'm taking a pragmatic view of preferring open but not mandating it, > >preferring hg over git but not ruling out a switch, preferring Python-based > >tools but not viewing it as a negative to not use Python, etc.
I would like > >to think I have earned somewhat of a reputation of being level-headed and > >so none of this should really be a surprise to anyone. > > I think it's equally important to describe what criteria you will use to make > this decision. E.g. are you saying all these above points will be completely > ignored, or all else being equal, they will help tip the balance? > > Considering Guido just gave me this position I have not exactly had a ton of time to think the intricacies out, but they are all positives and can help tip the balance or break ties (I purposely worded all of that with "prefer", etc.). For instance, if a FLOSS solution came forward that looked to be good and close enough to what would be a good workflow along with support commitments from the infrastructure team and folks to maintain the code -- and this will have to people several people as experience with the issue tracker has shown -- then that can help tip over the closed-source, hosted solution which might have some perks. As for Python over something else, that comes into play in open source more from a maintenance perspective, but for closed source it would be a tie-breaker only since it doesn't exactly influence the usability of the closed-source solution like it does an open-source one. > > Basically I'm willing to give brownie points for open source and Python stuff, but it is just that: points and not deal-breakers. This sounds like a pretty reasonable attitude to take towards this. If we?re going to be experimenting/talking things over, should I withdraw my PEP? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Tue Dec 2 20:20:16 2014 From: brett at python.org (Brett Cannon) Date: Tue, 02 Dec 2014 19:20:16 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202135844.3216b148@anarchist.wooz.org> <5E0BD252-A743-4C94-BA34-48D60032B7FE@stufft.io> Message-ID: On Tue Dec 02 2014 at 2:15:09 PM Donald Stufft wrote: > > On Dec 2, 2014, at 2:09 PM, Brett Cannon wrote: > > > > On Tue Dec 02 2014 at 1:59:20 PM Barry Warsaw wrote: > >> On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote: >> >> >Well, if I'm going to be the Great Decider on this then I can say upfront >> >I'm taking a pragmatic view of preferring open but not mandating it, >> >preferring hg over git but not ruling out a switch, preferring >> Python-based >> >tools but not viewing it as a negative to not use Python, etc. I would >> like >> >to think I have earned somewhat of a reputation of being level-headed and >> >so none of this should really be a surprise to anyone. >> >> I think it's equally important to describe what criteria you will use to >> make >> this decision. E.g. are you saying all these above points will be >> completely >> ignored, or all else being equal, they will help tip the balance? >> > > Considering Guido just gave me this position I have not exactly had a ton > of time to think the intricacies out, but they are all positives and can > help tip the balance or break ties (I purposely worded all of that with > "prefer", etc.). 
For instance, if a FLOSS solution came forward that looked > to be good and close enough to what would be a good workflow along with > support commitments from the infrastructure team and folks to maintain the > code -- and this will have to be several people, as experience with the > issue tracker has shown -- then that can help tip over the closed-source, > hosted solution which might have some perks. As for Python over something > else, that comes into play in open source more from a maintenance > perspective, but for closed source it would be a tie-breaker only since it > doesn't exactly influence the usability of the closed-source solution like > it does an open-source one. > > Basically I'm willing to give brownie points for open source and Python > stuff, but it is just that: points and not deal-breakers. > > > This sounds like a pretty reasonable attitude to take towards this. > > If we're going to be experimenting/talking things over, should I withdraw > my PEP? > No because only two people have said they like the experiment idea so that's not exactly enough to say it's worth the effort. =) Plus GitHub could be chosen in the end. Basically a PEP staying in draft is no big deal. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Tue Dec 2 20:21:29 2014 From: brett at python.org (Brett Cannon) Date: Tue, 02 Dec 2014 19:21:29 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202135844.3216b148@anarchist.wooz.org> <5E0BD252-A743-4C94-BA34-48D60032B7FE@stufft.io> Message-ID: I should say I will take a few days to think about this and then I will start a new thread outlining what I think we should be aiming for to help frame the whole discussion and to give proponents something to target. On Tue Dec 02 2014 at 2:20:16 PM Brett Cannon wrote: > On Tue Dec 02 2014 at 2:15:09 PM Donald Stufft wrote: > >> >> On Dec 2, 2014, at 2:09 PM, Brett Cannon wrote: >> >> >> >> On Tue Dec 02 2014 at 1:59:20 PM Barry Warsaw wrote: >> >>> On Dec 02, 2014, at 06:21 PM, Brett Cannon wrote: >>> >>> >Well, if I'm going to be the Great Decider on this then I can say >>> upfront >>> >I'm taking a pragmatic view of preferring open but not mandating it, >>> >preferring hg over git but not ruling out a switch, preferring >>> Python-based >>> >tools but not viewing it as a negative to not use Python, etc. I would >>> like >>> >to think I have earned somewhat of a reputation of being level-headed >>> and >>> >so none of this should really be a surprise to anyone. >>> >>> I think it's equally important to describe what criteria you will use to >>> make >>> this decision. E.g. are you saying all these above points will be >>> completely >>> ignored, or all else being equal, they will help tip the balance? 
>>> >> >> Considering Guido just gave me this position I have not exactly had a ton >> of time to think the intricacies out, but they are all positives and can >> help tip the balance or break ties (I purposely worded all of that with >> "prefer", etc.). For instance, if a FLOSS solution came forward that looked >> to be good and close enough to what would be a good workflow along with >> support commitments from the infrastructure team and folks to maintain the >> code -- and this will have to be several people, as experience with the >> issue tracker has shown -- then that can help tip over the closed-source, >> hosted solution which might have some perks. As for Python over something >> else, that comes into play in open source more from a maintenance >> perspective, but for closed source it would be a tie-breaker only since it >> doesn't exactly influence the usability of the closed-source solution like >> it does an open-source one. >> >> Basically I'm willing to give brownie points for open source and Python >> stuff, but it is just that: points and not deal-breakers. >> >> >> This sounds like a pretty reasonable attitude to take towards this. >> >> If we're going to be experimenting/talking things over, should I withdraw >> my PEP? >> > > No because only two people have said they like the experiment idea so > that's not exactly enough to say it's worth the effort. =) Plus GitHub > could be chosen in the end. > > Basically a PEP staying in draft is no big deal. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ethan at stoneleaf.us Tue Dec 2 20:30:33 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 02 Dec 2014 11:30:33 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202135844.3216b148@anarchist.wooz.org> <5E0BD252-A743-4C94-BA34-48D60032B7FE@stufft.io> Message-ID: <547E1359.10900@stoneleaf.us> On 12/02/2014 11:21 AM, Brett Cannon wrote: > > I should say I will take a few days to think about this and then I will start > a new thread outlining what I think we should be aiming for to help frame the > whole discussion and to give proponents something to target. Thanks for taking this on, Brett. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From barry at python.org Tue Dec 2 21:14:07 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 2 Dec 2014 15:14:07 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202135844.3216b148@anarchist.wooz.org> <5E0BD252-A743-4C94-BA34-48D60032B7FE@stufft.io> Message-ID: <20141202151407.4f260010@anarchist.wooz.org> On Dec 02, 2014, at 07:20 PM, Brett Cannon wrote: >No because only two people have said they like the experiment idea so >that's not exactly enough to say it's worth the effort. =) Plus GitHub >could be chosen in the end. Experimenting could be useful, although if the traffic is disproportionate (e.g. 
peps are updated way more often than devinabox) or folks don't interact with each of the repos, it might not be very representative. Still, I think it's better to get a visceral sense of how things actually work than to speculate about how they *might* work. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ben+python at benfinney.id.au Tue Dec 2 21:15:49 2014 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 03 Dec 2014 07:15:49 +1100 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: <85tx1du7t6.fsf@benfinney.id.au> Brett Cannon writes: > Well, if I'm going to be the Great Decider on this then I can say > upfront I'm taking a pragmatic view of preferring open but not > mandating it, preferring hg over git but not ruling out a switch, > preferring Python-based tools but not viewing it as a negative to not > use Python, etc. (and you've later clarified that these will all be factors weighing in favour of a candidate.) Thanks for expressing your thoughts. Big thanks for taking on the role of consulting, evaluating, and deciding on this issue. -- \ "I think Western civilization is more enlightened precisely | `\ because we have learned how to ignore our religious leaders." 
| _o__) --Bill Maher, 2003 | Ben Finney From ethan at stoneleaf.us Tue Dec 2 21:29:56 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 02 Dec 2014 12:29:56 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: <547E2144.6000707@stoneleaf.us> On 12/02/2014 08:50 AM, Brett Cannon wrote: > > So do people want PEPs or experimentation first? Experiments are good -- then we'll have real (if limited) data... which is better than no data. ;) -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From brett at python.org Tue Dec 2 21:35:08 2014 From: brett at python.org (Brett Cannon) Date: Tue, 02 Dec 2014 20:35:08 +0000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <20141202135844.3216b148@anarchist.wooz.org> <5E0BD252-A743-4C94-BA34-48D60032B7FE@stufft.io> <20141202151407.4f260010@anarchist.wooz.org> Message-ID: On Tue Dec 02 2014 at 3:14:20 PM Barry Warsaw wrote: > On Dec 02, 2014, at 07:20 PM, Brett Cannon wrote: > > >No because only two people have said they like the experiment idea so > >that's not exactly enough to say it's worth the effort. =) Plus GitHub > >could be chosen in the end. 
> > Experimenting could be useful, although if the traffic is disproportionate > (e.g. peps are updated way more often than devinabox) or folks don't > interact > with each of the repos, it might not be very representative. Still, I > think > it's better to get a visceral sense of how things actually work than to > speculate about how they *might* work. > That's my thinking. It's more about the workflow than measuring engagement on GitHub vs. Bitbucket (we already know how that skews). If I had my wish we would put the same repo in all three scenarios, but that is just asking for merge headaches. But I think if we go to the community and say, "help us test dev workflows by submitting spelling and grammar fixes" then we should quickly get an idea of the workflows (and I purposefully left devinabox out of a move since it is never touched after it essentially became a build script and a README and so represents our existing workflow; any benefit on our own infrastructure can go straight to cpython anyway which we can all experience). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Tue Dec 2 23:07:03 2014 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 2 Dec 2014 15:07:03 -0700 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On Tue, Dec 2, 2014 at 6:24 AM, Nick Coghlan wrote: > P.S. 
I'll also bring up some of the RFEs raised in this discussion > around making it possible for folks to submit pull requests via > GitHub/BitBucket, even if the master repositories are hosted on PSF > infrastructure. In case it helps with any GH/BB-to-roundup/reitveld integration we might do, I've already done something similar for GH-to-reviewboard at work. All the code is on-line: https://bitbucket.org/ericsnowcurrently/rb_webhooks_extension -eric From ericsnowcurrently at gmail.com Tue Dec 2 23:33:16 2014 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 2 Dec 2014 15:33:16 -0700 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: On Tue, Dec 2, 2014 at 9:50 AM, Brett Cannon wrote: > So I was waiting for Nick to say what he wanted to do for the peps repo > since I view it as I get 2/3 of the choices and he gets the other third. > > The way I view it, the options are: > > Move to GitHub > Move to Bitbucket > Improve our current tooling (either through new hosting setup and/or adding > first-world support for downloading PRs from GitHub/Bitbucket) I'd argue that option #3 here is somewhat orthogonal to switching hosting. It makes sense regardless unless we plan on ditching roundup and reitveld (to which I'd be opposed). > > Regardless of what we do, I think we should graduate the mirrors on GitHub > and Bitbucket to "official" -- for the proposed repos and cpython -- and get > their repos updating per-push instead of as a cron job. I also think we > should also flip on any CI we can (e.g. 
turn on Travis for GitHub along with > coveralls support using coverage.py's encodings trick). This will get us the > most accessible repo backups as well as the widest tool coverage for > contributors to assist them in their contributions (heck, even if we just > get regular coverage reports for Python that would be a great win out of all > of this). +1 to all of this. Doing this would allow us to move forward with GH/BB-roundup/reitveld integration (option #3) sooner rather than later, regardless of moving to other hosting. > So do people want PEPs or experimentation first? +1 to PEPs. It's basically already happening. I'd like to see where 474/481/etc. end up, particularly with what Nick brought up earlier. Furthermore, I'm not sure how effectively we can experiment when it comes to moving hosting. There's overhead involved that biases the outcome and in part contributes to the momentum of the initial experimental conditions. I doubt any external solution is going to prove drastically better than another, making it harder to justify the effort to move yet again. 
-eric From guido at python.org Tue Dec 2 23:42:25 2014 From: guido at python.org (Guido van Rossum) Date: Tue, 2 Dec 2014 14:42:25 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: Before anyone gets too excited about Rietveld (which I originally wrote as an App Engine demo), AFAIK we're using a fork that only Martin von Loewis can maintain -- and it's a dead-end fork because the Rietveld project itself only supports App Engine, but Martin's fork runs on our own server infrastructure. These environments are *very* different (App Engine has its own unique noSQL API) and it took a major hack (not by MvL) to get it to work outside App Engine. That fork is not supported, and hence our Rietveld installation still has various bugs that have long been squashed in the main Rietveld repo. (And no, I don't have time to help with this -- my recommendation is to move off Rietveld to something supported.) On Tue, Dec 2, 2014 at 2:33 PM, Eric Snow wrote: > On Tue, Dec 2, 2014 at 9:50 AM, Brett Cannon wrote: > > So I was waiting for Nick to say what he wanted to do for the peps repo > > since I view it as I get 2/3 of the choices and he gets the other third. > > > > The way I view it, the options are: > > > > Move to GitHub > > Move to Bitbucket > > Improve our current tooling (either through new hosting setup and/or > adding > > first-world support for downloading PRs from GitHub/Bitbucket) > > I'd argue that option #3 here is somewhat orthogonal to switching > hosting. 
It makes sense regardless unless we plan on ditching roundup > and reitveld (to which I'd be opposed). > > > > > Regardless of what we do, I think we should graduate the mirrors on > GitHub > > and Bitbucket to "official" -- for the proposed repos and cpython -- and > get > > their repos updating per-push instead of as a cron job. I also think we > > should also flip on any CI we can (e.g. turn on Travis for GitHub along > with > > coveralls support using coverage.py's encodings trick). This will get us > the > > most accessible repo backups as well as the widest tool coverage for > > contributors to assist them in their contributions (heck, even if we just > > get regular coverage reports for Python that would be a great win out of > all > > of this). > > +1 to all of this. Doing this would allow us to move forward with > GH/BB-roundup/reitveld integration (option #3) sooner rather than > later, regardless of moving to other hosting. > > > So do people want PEPs or experimentation first? > > +1 to PEPs. It's basically already happening. I'd like to see where > 474/481/etc. end up, particularly with what Nick brought up earlier. > > Furthermore, I'm not sure how effectively we can experiment when it > comes to moving hosting. There's overhead involved that biases the > outcome and in part contributes to the momentum of the initial > experimental conditions. I doubt any external solution is going to > prove drastically better than another, making it harder to justify the > effort to move yet again. > > -eric > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Tue Dec 2 23:47:31 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 2 Dec 2014 17:47:31 -0500 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> Message-ID: <14B6B312-FF42-46E8-B614-84B8E1784ED6@stufft.io> > On Dec 2, 2014, at 5:42 PM, Guido van Rossum wrote: > > Before anyone gets too excited about Rietveld (which I originally wrote as an App Engine demo), AFAIK we're using a fork that only Martin von Loewis can maintain -- and it's a dead-end fork because the Rietveld project itself only supports App Engine, but Martin's fork runs on our own server infrastructure. These environments are *very* different (App Engine has its own unique noSQL API) and it took a major hack (not by MvL) to get it to work outside App Engine. That fork is not supported, and hence our Rietveld installation still has various bugs that have long been squashed in the main Rietveld repo. (And no, I don't have time to help with this -- my recommendation is to move off Rietveld to something supported.) It probably makes sense to include code reviews in the matrix of what tools we're going to use then yea? Like Github/Bitbucket/etc have review built in. Other tools like Phabricator do as well but are self hosted instead. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre-yves.david at ens-lyon.org Tue Dec 2 23:59:01 2014 From: pierre-yves.david at ens-lyon.org (Pierre-Yves David) Date: Tue, 02 Dec 2014 14:59:01 -0800 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <14B6B312-FF42-46E8-B614-84B8E1784ED6@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <14B6B312-FF42-46E8-B614-84B8E1784ED6@stufft.io> Message-ID: <547E4435.2040206@ens-lyon.org> On 12/02/2014 02:47 PM, Donald Stufft wrote: > >> On Dec 2, 2014, at 5:42 PM, Guido van Rossum > > wrote: >> >> Before anyone gets too excited about Rietveld (which I originally >> wrote as an App Engine demo), AFAIK we're using a fork that only >> Martin von Loewis can maintain -- and it's a dead-end fork because the >> Rietveld project itself only supports App Engine, but Martin's fork >> runs on our own server infrastructure. These environments are *very* >> different (App Engine has its own unique noSQL API) and it took a >> major hack (not by MvL) to get it to work outside App Engine. That >> fork is not supported, and hence our Rietveld installation still has >> various bugs that have long been squashed in the main Rietveld repo. >> (And no, I don't have time to help with this -- my recommendation is >> to move off Rietveld to something supported.) > > It probably makes sense to include code reviews in the matrix of what > tools we're going to use then yea? > > Like Github/Bitbucket/etc have review built in. Other tools like > Phabricator do as well but are self hosted instead. I think the people/company behind phabricator are planning to offer a hosting solution. Could be worth poking at them to get an idea of what the status of it is. 
-- Pierre-Yves David From drsalists at gmail.com Wed Dec 3 00:37:17 2014 From: drsalists at gmail.com (Dan Stromberg) Date: Tue, 2 Dec 2014 15:37:17 -0800 Subject: [Python-Dev] Python 2.x vs 3.x survey - new owner? Message-ID: Last year in late December, I did a brief, 9 question survey of 2.x vs 3.x usage. I like to think the results were interesting, but I don't have the spare cash to do it again this year. I probably shouldn't have done it last year. ^_^ Is anyone interested in taking over the survey? It's on SurveyMonkey. It was mentioned last year that it might be interesting to see how things change, year to year. It was also reported that some people felt that late December wasn't necessarily the best time of year to do the survey, as a lot of people were on vacation. The Python wiki has last year's results: https://wiki.python.org/moin/2.x-vs-3.x-survey From njs at pobox.com Wed Dec 3 00:54:29 2014 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 2 Dec 2014 23:54:29 +0000 Subject: [Python-Dev] advice needed: best approach to enabling "metamodules"? In-Reply-To: <20141202101927.1b4a5bf8@fsol> References: <547B96D8.7050700@hotpy.org> <20141202101927.1b4a5bf8@fsol> Message-ID: On Tue, Dec 2, 2014 at 9:19 AM, Antoine Pitrou wrote: > On Mon, 1 Dec 2014 21:38:45 +0000 > Nathaniel Smith wrote: >> >> object_set_class is responsible for checking whether it's okay to take >> an object of class "oldto" and convert it to an object of class >> "newto". Basically its goal is just to avoid crashing the interpreter >> (as would quickly happen if you e.g. allowed "[].__class__ = dict"). 
>> Currently the rules (spread across object_set_class and >> compatible_for_assignment) are: >> >> (1) both oldto and newto have to be heap types >> (2) they have to have the same tp_dealloc >> (3) they have to have the same tp_free >> (4) if you walk up the ->tp_base chain for both types until you find >> the most-ancestral type that has a compatible struct layout (as >> checked by equiv_structs), then either >> (4a) these ancestral types have to be the same, OR >> (4b) these ancestral types have to have the same tp_base, AND they >> have to have added the same slots on top of that tp_base (e.g. if you >> have class A(object): pass and class B(object): pass then they'll both >> have added a __dict__ slot at the same point in the instance struct, >> so that's fine; this is checked in same_slots_added). >> >> The only place the code assumes that it is dealing with heap types is >> in (4b) > > I'm not sure. Many operations are standardized on heap types that can > have arbitrary definitions on static types (I'm talking about the tp_ > methods). You'd have to review them to double check. Reading through the list of tp_ methods I can't see any other that look problematic. The finalizers are kinda intimate, but I think people would expect that if you swap an instance's type to something that has a different __del__ method then it's the new __del__ method that'll be called. If we wanted to be really careful we should perhaps do something cleverer with tp_is_gc, but so long as type objects are the only objects that have a non-trivial tp_is_gc, and the tp_is_gc call depends only on their tp_flags (which are unmodified by __class__ assignment), then we should still be safe (and anyway this is orthogonal to the current issues). > For example, a heap type's tp_new increments the type's refcount, so > you have to adjust the instance refcount if you cast it from a non-heap > type to a heap type, and vice-versa (see slot_tp_new()). Right, fortunately this is easy :-). 
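(As an aside for anyone following along from the Python level: the rules above are easy to poke at interactively. Here's a quick sketch, checked against a stock CPython 3 interpreter -- the class names are invented purely for illustration:)

```python
# Two heap types built on the same base with the same slots added:
# rule (4a)/(4b) territory, so __class__ assignment is permitted.
class A:
    pass

class B:
    pass

a = A()
a.__class__ = B          # allowed: identical layout on top of object
assert type(a) is B

# Incompatible layouts are rejected with TypeError rather than
# being allowed to crash the interpreter.
try:
    [].__class__ = dict
except TypeError as exc:
    print("rejected:", exc)
```

The second assignment trips exactly the kind of layout/heap-type check being discussed here.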
> (this raises the interesting question "what happens if you assign to > __class__ from a __del__ method?") subtype_dealloc actually attempts to take this possibility into account -- see the comment "Extract the type again; tp_del may have changed it". I'm not at all sure that its handling is *correct* -- there's a bunch of code that references 'type' between the call to tp_del and this comment, and there's code after the comment that references 'base' without recalculating it. But it is there :-) >> -- same_slots_added unconditionally casts the ancestral types >> to (PyHeapTypeObject*). AFAICT that's why step (1) is there, to >> protect this code. But I don't think the check actually works -- step >> (1) checks that the types we're trying to assign are heap types, but >> this is no guarantee that the *ancestral* types will be heap types. >> [Also, the code for __bases__ assignment appears to also call into >> this code with no heap type checks at all.] E.g., I think if you do >> >> class MyList(list): >> __slots__ = () >> >> class MyDict(dict): >> __slots__ = () >> >> MyList().__class__ = MyDict >> >> then you'll end up in same_slots_added casting PyDict_Type and >> PyList_Type to PyHeapTypeObjects and then following invalid pointers >> into la-la land. (The __slots__ = () is to maintain layout >> compatibility with the base types; if you find builtin types that >> already have __dict__ and weaklist and HAVE_GC then this example >> should still work even with perfectly empty subclasses.) >> >> Okay, so suppose we move the heap type check (step 1) down into >> same_slots_added (step 4b), since AFAICT this is actually more correct >> anyway. This is almost enough to enable __class__ assignment on >> modules, because the cases we care about will go through the (4a) >> branch rather than (4b), so the heap type thing is irrelevant. >> >> The remaining problem is the requirement that both types have the same >> tp_dealloc (step 2). 
>> ModuleType itself has tp_dealloc == >> module_dealloc, while all(?) heap types have tp_dealloc == >> subtype_dealloc. Here again, though, I'm not sure what purpose this >> check serves. subtype_dealloc basically cleans up extra slots, and >> then calls the base class tp_dealloc. So AFAICT it's totally fine if >> oldto->tp_dealloc == module_dealloc, and newto->tp_dealloc == >> subtype_dealloc, so long as newto is a subtype of oldto -- b/c this >> means newto->tp_dealloc will end up calling oldto->tp_dealloc anyway. >> OTOH it's not actually a guarantee of anything useful to see that >> oldto->tp_dealloc == newto->tp_dealloc == subtype_dealloc, because >> subtype_dealloc does totally different things depending on the >> ancestry tree -- MyList and MyDict above pass the tp_dealloc check, >> even though list.tp_dealloc and dict.tp_dealloc are definitely *not* >> interchangeable. >> >> So I suspect that a more correct way to do this check would be something like >> >> PyTypeObject *old_real_deallocer = oldto, *new_real_deallocer = newto; >> while (old_real_deallocer->tp_dealloc == subtype_dealloc) >> old_real_deallocer = old_real_deallocer->tp_base; >> while (new_real_deallocer->tp_dealloc == subtype_dealloc) >> new_real_deallocer = new_real_deallocer->tp_base; >> if (old_real_deallocer->tp_dealloc != new_real_deallocer->tp_dealloc) >> error out; > > Sounds good. > >> Module subclasses would pass this check. Alternatively it might make >> more sense to add a check in equiv_structs that >> (child_type->tp_dealloc == subtype_dealloc || child_type->tp_dealloc >> == parent_type->tp_dealloc); I think that would accomplish the same >> thing in a somewhat cleaner way. > > There's no "child" and "parent" types in equiv_structs(). Not as currently written, but every single call site is of the form equiv_structs(x, x->tp_base). 
And equiv_structs takes advantage of this -- e.g., checking that two types have the same tp_basicsize is pretty uninformative if they're unrelated types, but if they're parent and child then it tells you that they have exactly the same slots. I wrote a patch incorporating the above ideas: http://bugs.python.org/issue22986 -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From pydev at rebertia.com Wed Dec 3 01:07:47 2014 From: pydev at rebertia.com (Chris Rebert) Date: Tue, 2 Dec 2014 16:07:47 -0800 Subject: [Python-Dev] WebM MIME type in mimetypes module Message-ID: Hi all, I'm seeking to move http://bugs.python.org/issue16329 towards conclusion. Since the discussion on the issue itself seems to have petered out, I thought I'd bring it up here. To summarize the issue, it proposes adding an entry for WebM ( http://www.webmproject.org/docs/container/#naming ) to the mimetypes standard library module's file-extension to MIME-type database. (Specifically: .webm => video/webm ) Mozilla, Microsoft, Opera, and freedesktop.org (the de facto standard *nix MIME type database package) all acknowledge the existence of a video/webm MIME type (see the issue for relevant links), and this MIME type is in WebM's documentation. However, there is no official IANA registration for WebM's MIME type, and none seems to be forthcoming/planned. As R.D.M. said in the issue: > So we have two choices: > leave it to the platform mime types file to define because it is not even on track to be an official IANA standard, > or include it with a comment that it is a de-facto standard. [...] > I guess I'd be OK with adding it as a de-facto standard, though I'm not entirely comfortable with it. But that would represent a change in policy, so others may want to weigh in. Nobody has weighed in during the subsequent ~2 years, so I'm hoping a few of y'all could weigh in one way or the other, and thus bring the issue to a definitive conclusion. 
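For reference, the change being asked for is tiny at the code level. Here's a sketch of what applications can already do today with the public `mimetypes` API (stdlib only), which is exactly the mapping the issue proposes shipping in the default database:

```python
import mimetypes

# Register the de-facto mapping at runtime; this is the same
# .webm => video/webm entry the issue proposes adding by default.
mimetypes.add_type("video/webm", ".webm")

# guess_type returns a (type, encoding) pair.
mime, encoding = mimetypes.guess_type("clip.webm")
print(mime)  # video/webm
```

So the question is really only whether the stdlib should ship the entry itself, not whether the API can express it.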
Cheers, Chris -- https://github.com/cvrebert From ncoghlan at gmail.com Wed Dec 3 01:14:24 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 3 Dec 2014 10:14:24 +1000 Subject: [Python-Dev] PEP 481 - Migrate Some Supporting Repositories to Git and Github In-Reply-To: <14B6B312-FF42-46E8-B614-84B8E1784ED6@stufft.io> References: <30725AC9-3DCF-416A-BCCD-2D64D489898C@stufft.io> <85d285wa3c.fsf@benfinney.id.au> <20141130140137.46803df1@fsol> <20141130115557.427918a2@limelight.wooz.org> <31195B20-A4BE-441C-ADC5-31236E2C4E1B@stufft.io> <547B6FD4.5000408@stoneleaf.us> <609E6EF0-2A6C-4F1E-B25F-DC43588B04C4@stufft.io> <85zjb8usu9.fsf@benfinney.id.au> <85sih0urmp.fsf@benfinney.id.au> <3D0E2AB4-ABD1-451D-975F-55B83920E57D@stufft.io> <14B6B312-FF42-46E8-B614-84B8E1784ED6@stufft.io> Message-ID: On 3 Dec 2014 08:47, "Donald Stufft" wrote: > > >> On Dec 2, 2014, at 5:42 PM, Guido van Rossum wrote: >> >> Before anyone gets too excited about Rietveld (which I originally wrote as an App Engine demo), AFAIK we're using a fork that only Martin von Loewis can maintain -- and it's a dead-end fork because the Rietveld project itself only supports App Engine, but Martin's fork runs on our own server infrastructure. These environments are *very* different (App Engine has its own unique noSQL API) and it took a major hack (not by MvL) to get it to work outside App Engine. That fork is not supported, and hence our Rietveld installation still has various bugs that have long been squashed in the main Rietveld repo. (And no, I don't have time to help with this -- my recommendation is to move off Rietveld to something supported.) Thanks Guido - I'd started thinking in that direction for PEP 462 (in terms of potentially using Kallithea/RhodeCode for the review component rather than Rietveld), so it's good to know you'd be OK with such a change. > It probably makes sense to include code reviews in the matrix of what tools we're going to use then yea? 
I'd suggest asking for discussion of a more general path forward for CPython workflow improvements. Not a "this must be included in the proposal", but rather answering the question, "if we choose this option for the support repos, how will it impact the future direction of CPython maintenance itself?". Cheers, Nick. > > Like Github/Bitbucket/etc have review built in. Other tools like Phabricator do as well but are self hosted instead. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Dec 3 03:16:52 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 02 Dec 2014 21:16:52 -0500 Subject: [Python-Dev] WebM MIME type in mimetypes module In-Reply-To: References: Message-ID: On 12/2/2014 7:07 PM, Chris Rebert wrote: > Hi all, > > I'm seeking to move http://bugs.python.org/issue16329 towards conclusion. > Since the discussion on the issue itself seems to have petered out, I > thought I'd bring it up here. > > To summarize the issue, it proposes adding an entry for WebM ( > http://www.webmproject.org/docs/container/#naming ) to the mimetypes > standard library module's file-extension to MIME-type database. > (Specifically: .webm => video/webm ) > Mozilla, Microsoft, Opera, and freedesktop.org (the de facto standard > *nix MIME type database package) all acknowledge the existence of a > video/webm MIME type (see the issue for relevant links), and this MIME > type is in WebM's documentation. > However, there is no official IANA registration for WebM's MIME type, > and none seems to be forthcoming/planned. > > As R.D.M. said in the issue: >> So we have two choices: >> leave it to the platform mime types file to define because it is not even on track to be an official IANA standard, >> or include it with a comment that it is a de-facto standard. > [...] 
>> I guess I'd be OK with adding it as a de-facto standard, though I'm not entirely comfortable with it. But that would represent a change in policy, so others may want to weigh in. > > > Nobody has weighed in during the subsequent ~2 years, so I'm hoping a > few of y'all could weigh in one way or the other, and thus bring the > issue to a definitive conclusion. If it has remained a de facto standard for the two years since you made that list, that would be a point in favor of recognizing it. Have .webm files become more common in actual use? -- Terry Jan Reedy From cs at zip.com.au Wed Dec 3 05:30:36 2014 From: cs at zip.com.au (Cameron Simpson) Date: Wed, 3 Dec 2014 15:30:36 +1100 Subject: [Python-Dev] WebM MIME type in mimetypes module In-Reply-To: References: Message-ID: <20141203043036.GA62072@cskk.homeip.net> On 02Dec2014 21:16, Terry Reedy wrote: >On 12/2/2014 7:07 PM, Chris Rebert wrote: >>To summarize the issue, it proposes adding an entry for WebM ( >>http://www.webmproject.org/docs/container/#naming ) to the mimetypes >>standard library module's file-extension to MIME-type database. >>(Specifically: .webm => video/webm ) [...] > >If it has remained a de facto standard for the two years since you >made that list, that would be a point in favor of recognizing it. >Have .webm files become more common in actual use? Subjectively I've seen a few more about than I think I used to. And there are definitely some .webm files on some websites I support. Can't say if they're more common in terms of hard data though. But if most browsers expect them, arguably we should recognise their existence. Usual disclaimer: I am not a python-dev. Cheers, Cameron Simpson The nice thing about standards is that you have so many to choose from; furthermore, if you do not like any of them, you can just wait for next year's model. - Andrew S.
Tanenbaum From kaiser.yann at gmail.com Wed Dec 3 06:49:28 2014 From: kaiser.yann at gmail.com (Yann Kaiser) Date: Wed, 03 Dec 2014 05:49:28 +0000 Subject: [Python-Dev] WebM MIME type in mimetypes module References: <20141203043036.GA62072@cskk.homeip.net> Message-ID: Apologies if it has already been mentioned in the issue, but could the webm project be nudged towards officializing their mimetype? On Wed, Dec 3, 2014, 05:56 Cameron Simpson wrote: > On 02Dec2014 21:16, Terry Reedy wrote: > >On 12/2/2014 7:07 PM, Chris Rebert wrote: > >>To summarize the issue, it proposes adding an entry for WebM ( > >>http://www.webmproject.org/docs/container/#naming ) to the mimetypes > >>standard library module's file-extension to MIME-type database. > >>(Specifically: .webm => video/webm ) [...] > > > >If it has remained a defacto standard for the two years since your > >made that list, that would be a point in favor of recognizing it. > >Have .webm files become more common in actual use? > > Subjectively I've seen a few more about that I think I used to. > And there are definitely some .webm files on some websites I support. > > Can't say if they're more common in terms of hard data though. But if most > browsers expect them, arguably we should recognise their existence. > > Usual disclaimer: I am not a python-dev. > > Cheers, > Cameron Simpson > > The nice thing about standards is that you have so many to choose from; > furthermore, if you do not like any of them, you can just wait for next > year's model. - Andrew S. Tanenbaum > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > kaiser.yann%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Wed Dec 3 10:36:44 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 3 Dec 2014 10:36:44 +0100 Subject: [Python-Dev] WebM MIME type in mimetypes module References: Message-ID: <20141203103644.206346f3@fsol> On Tue, 2 Dec 2014 16:07:47 -0800 Chris Rebert wrote: > Hi all, > > I'm seeking to move http://bugs.python.org/issue16329 towards conclusion. > Since the discussion on the issue itself seems to have petered out, I > thought I'd bring it up here. > > To summarize the issue, it proposes adding an entry for WebM ( > http://www.webmproject.org/docs/container/#naming ) to the mimetypes > standard library module's file-extension to MIME-type database. > (Specifically: .webm => video/webm ) > Mozilla, Microsoft, Opera, and freedesktop.org (the de facto standard > *nix MIME type database package) all acknowledge the existence of a > video/webm MIME type (see the issue for relevant links), and this MIME > type is in WebM's documentation. > However, there is no official IANA registration for WebM's MIME type, > and none seems to be forthcoming/planned. I don't think we have to wait for IANA. There certainly won't be a competition around the "video/webm" MIME type, so no harm would be done by adding it to the module. Regards Antoine. > > As R.D.M. said in the issue: > > So we have two choices: > > leave it to the platform mime types file to define because it is not even on track to be an official IANA standard, > > or include it with a comment that it is a de-facto standard. > [...] > > I guess I'd be OK with adding it as a de-facto standard, though I'm not entirely comfortable with it. But that would represent a change in policy, so others may want to weigh in. > > > Nobody has weighed in during the subsequent ~2 years, so I'm hoping a > few of y'all could weigh in one way or the other, and thus bring the > issue to a definitive conclusion. 
> > Cheers, > Chris > -- > https://github.com/cvrebert From pydev at rebertia.com Wed Dec 3 21:16:41 2014 From: pydev at rebertia.com (Chris Rebert) Date: Wed, 3 Dec 2014 12:16:41 -0800 Subject: [Python-Dev] WebM MIME type in mimetypes module In-Reply-To: References: Message-ID: On Tue, Dec 2, 2014 at 6:16 PM, Terry Reedy wrote: > On 12/2/2014 7:07 PM, Chris Rebert wrote: >> >> Hi all, >> >> I'm seeking to move http://bugs.python.org/issue16329 towards conclusion. >> Since the discussion on the issue itself seems to have petered out, I >> thought I'd bring it up here. >> >> To summarize the issue, it proposes adding an entry for WebM ( >> http://www.webmproject.org/docs/container/#naming ) to the mimetypes >> standard library module's file-extension to MIME-type database. >> (Specifically: .webm => video/webm ) >> Mozilla, Microsoft, Opera, and freedesktop.org (the de facto standard >> *nix MIME type database package) all acknowledge the existence of a >> video/webm MIME type (see the issue for relevant links), and this MIME >> type is in WebM's documentation. >> However, there is no official IANA registration for WebM's MIME type, >> and none seems to be forthcoming/planned. >> >> As R.D.M. said in the issue: >>> >>> So we have two choices: >>> leave it to the platform mime types file to define because it is not even >>> on track to be an official IANA standard, >>> or include it with a comment that it is a de-facto standard. >> [...] >>> I guess I'd be OK with adding it as a de-facto standard, though I'm not >>> entirely comfortable with it. But that would represent a change in policy, >>> so others may want to weigh in. >> >> Nobody has weighed in during the subsequent ~2 years, so I'm hoping a >> few of y'all could weigh in one way or the other, and thus bring the >> issue to a definitive conclusion. > > If it has remained a defacto standard for the two years since your made that > list, that would be a point in favor of recognizing it. 
Have .webm files > become more common in actual use? I can't really speak to that personally one way or the other, but some researching shows it's used by YouTube and Wikimedia Commons, and the format in general seems to continue to enjoy a reasonably good level of support (see http://en.wikipedia.org/wiki/WebM#Vendor_support , http://caniuse.com/#search=webm ). Cheers, Chris From ethan at stoneleaf.us Wed Dec 3 19:39:35 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 03 Dec 2014 10:39:35 -0800 Subject: [Python-Dev] WebM MIME type in mimetypes module In-Reply-To: References: Message-ID: <547F58E7.2070608@stoneleaf.us> On 12/02/2014 06:16 PM, Terry Reedy wrote: > On 12/2/2014 7:07 PM, Chris Rebert wrote: >> I'm seeking to move http://bugs.python.org/issue16329 towards conclusion. >> Since the discussion on the issue itself seems to have petered out, I >> thought I'd bring it up here. > If it has remained a defacto standard for the two years since your made that list, that would be a point in favor of > recognizing it. Have .webm files become more common in actual use? I agree -- if it's still out there, let's add it with the de facto comment. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From status at bugs.python.org Fri Dec 5 18:08:00 2014 From: status at bugs.python.org (Python tracker) Date: Fri, 5 Dec 2014 18:08:00 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20141205170800.2027456262@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2014-11-28 - 2014-12-05) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 4666 ( -2) closed 30095 (+39) total 34761 (+37) Open issues with patches: 2173 Issues opened (28) ================== #9179: Lookback with group references incorrect (two issues?) http://bugs.python.org/issue9179 reopened by serhiy.storchaka #22619: Possible implementation of negative limit for traceback functi http://bugs.python.org/issue22619 reopened by vlth #22922: asyncio: call_soon() should raise an exception if the event lo http://bugs.python.org/issue22922 reopened by haypo #22964: dbm.open(..., "x") http://bugs.python.org/issue22964 opened by Antony.Lee #22968: Lib/types.py nit: isinstance != PyType_IsSubtype http://bugs.python.org/issue22968 opened by gmt #22969: Compile fails with --without-signal-module http://bugs.python.org/issue22969 opened by KHH #22970: Cancelling wait() after notification leaves Condition in an in http://bugs.python.org/issue22970 opened by dcoles #22971: test_pickle: "Fatal Python error: Cannot recover from stack ov http://bugs.python.org/issue22971 opened by haypo #22972: Timeout making ajax calls to SimpleHTTPServer from internet ex http://bugs.python.org/issue22972 opened by Andrew.Burrows #22976: multiprocessing Queue empty() is broken on Windows http://bugs.python.org/issue22976 opened by Radosław.Szkodziński #22977: Unformatted “Windows Error 0x%X”
exception message on Wine http://bugs.python.org/issue22977 opened by vadmium #22980: C extension naming doesn't take bitness into account http://bugs.python.org/issue22980 opened by pitrou #22981: Use CFLAGS when extracting multiarch http://bugs.python.org/issue22981 opened by pitrou #22982: BOM incorrectly inserted before writing, after seeking in text http://bugs.python.org/issue22982 opened by MarkIngramUK #22983: Cookie parsing should be more permissive http://bugs.python.org/issue22983 opened by demian.brecht #22984: test_json.test_endless_recursion(): "Fatal Python error: Canno http://bugs.python.org/issue22984 opened by haypo #22985: Segfault on time.sleep http://bugs.python.org/issue22985 opened by Omer.Katz #22986: Improved handling of __class__ assignment http://bugs.python.org/issue22986 opened by njs #22988: No error when yielding from `finally` http://bugs.python.org/issue22988 opened by fov #22989: HTTPResponse.msg not as documented http://bugs.python.org/issue22989 opened by bastik #22990: bdist installation dialog http://bugs.python.org/issue22990 opened by Alan #22991: test_gdb leaves the terminal in raw mode with gdb 7.8.1 http://bugs.python.org/issue22991 opened by xdegaye #22992: Adding a git developer's guide to Mercurial to devguide http://bugs.python.org/issue22992 opened by demian.brecht #22993: Plistlib fails on certain valid plist values http://bugs.python.org/issue22993 opened by Connor.Wolf #22995: Restrict default pickleability http://bugs.python.org/issue22995 opened by serhiy.storchaka #22996: Order of _io objects finalization can lose data in reference c http://bugs.python.org/issue22996 opened by pitrou #22997: Minor improvements to "Functional API" section of Enum documen http://bugs.python.org/issue22997 opened by simeon.visser #22998: inspect.Signature and default arguments http://bugs.python.org/issue22998 opened by doerwalter Most recent 15 issues with no replies (15) ========================================== #22990: bdist 
installation dialog http://bugs.python.org/issue22990 #22989: HTTPResponse.msg not as documented http://bugs.python.org/issue22989 #22985: Segfault on time.sleep http://bugs.python.org/issue22985 #22981: Use CFLAGS when extracting multiarch http://bugs.python.org/issue22981 #22970: Cancelling wait() after notification leaves Condition in an in http://bugs.python.org/issue22970 #22969: Compile fails with --without-signal-module http://bugs.python.org/issue22969 #22964: dbm.open(..., "x") http://bugs.python.org/issue22964 #22962: ipaddress: Add optional prefixlen argument to ip_interface and http://bugs.python.org/issue22962 #22958: Constructors of weakref mapping classes don't accept "self" an http://bugs.python.org/issue22958 #22956: Improved support for prepared SQL statements http://bugs.python.org/issue22956 #22947: Enable 'imageop' - "Multimedia Srvices Feature module" for 64- http://bugs.python.org/issue22947 #22942: Language Reference - optional comma http://bugs.python.org/issue22942 #22928: HTTP header injection in urrlib2/urllib/httplib/http.client http://bugs.python.org/issue22928 #22907: Misc/python-config.sh.in: ensure sed invocations only match be http://bugs.python.org/issue22907 #22893: Idle: __future__ does not work in startup code. 
http://bugs.python.org/issue22893 Most recent 15 issues waiting for review (15) ============================================= #22998: inspect.Signature and default arguments http://bugs.python.org/issue22998 #22997: Minor improvements to "Functional API" section of Enum documen http://bugs.python.org/issue22997 #22992: Adding a git developer's guide to Mercurial to devguide http://bugs.python.org/issue22992 #22991: test_gdb leaves the terminal in raw mode with gdb 7.8.1 http://bugs.python.org/issue22991 #22986: Improved handling of __class__ assignment http://bugs.python.org/issue22986 #22984: test_json.test_endless_recursion(): "Fatal Python error: Canno http://bugs.python.org/issue22984 #22970: Cancelling wait() after notification leaves Condition in an in http://bugs.python.org/issue22970 #22968: Lib/types.py nit: isinstance != PyType_IsSubtype http://bugs.python.org/issue22968 #22955: Pickling of methodcaller, attrgetter, and itemgetter http://bugs.python.org/issue22955 #22952: multiprocessing doc introduction not in affirmative tone http://bugs.python.org/issue22952 #22947: Enable 'imageop' - "Multimedia Srvices Feature module" for 64- http://bugs.python.org/issue22947 #22946: urllib gives incorrect url after open when using HTTPS http://bugs.python.org/issue22946 #22941: IPv4Interface arithmetic changes subnet mask http://bugs.python.org/issue22941 #22935: Disabling SSLv3 support http://bugs.python.org/issue22935 #22932: email.utils.formatdate uses unreliable time.timezone constant http://bugs.python.org/issue22932 Top 10 most discussed issues (10) ================================= #22980: C extension naming doesn't take bitness into account http://bugs.python.org/issue22980 25 msgs #17852: Built-in module _io can loose data from buffered files at exit http://bugs.python.org/issue17852 23 msgs #22356: mention explicitly that stdlib assumes gmtime(0) epoch is 1970 http://bugs.python.org/issue22356 8 msgs #22906: PEP 479: Change StopIteration handling inside 
generators http://bugs.python.org/issue22906 8 msgs #22955: Pickling of methodcaller, attrgetter, and itemgetter http://bugs.python.org/issue22955 8 msgs #9179: Lookback with group references incorrect (two issues?) http://bugs.python.org/issue9179 6 msgs #16329: mimetypes does not support webm type http://bugs.python.org/issue16329 6 msgs #22922: asyncio: call_soon() should raise an exception if the event lo http://bugs.python.org/issue22922 6 msgs #22931: cookies with square brackets in value http://bugs.python.org/issue22931 6 msgs #22959: http.client.HTTPSConnection checks hostname when SSL context h http://bugs.python.org/issue22959 6 msgs Issues closed (37) ================== #3068: IDLE - Add an extension configuration dialog http://bugs.python.org/issue3068 closed by terry.reedy #12987: Demo/scripts/newslist.py has non-free licensing terms http://bugs.python.org/issue12987 closed by terry.reedy #13027: python 2.6.6 interpreter core dumps on modules command from he http://bugs.python.org/issue13027 closed by serhiy.storchaka #14099: ZipFile.open() should not reopen the underlying file http://bugs.python.org/issue14099 closed by serhiy.storchaka #16569: Preventing errors of simultaneous access in zipfile http://bugs.python.org/issue16569 closed by serhiy.storchaka #18053: Add checks for Misc/NEWS in make patchcheck http://bugs.python.org/issue18053 closed by terry.reedy #19834: Unpickling exceptions pickled by Python 2 http://bugs.python.org/issue19834 closed by doerwalter #20335: bytes constructor accepts more than one argument even if the f http://bugs.python.org/issue20335 closed by serhiy.storchaka #21032: Socket leak if HTTPConnection.getresponse() fails http://bugs.python.org/issue21032 closed by serhiy.storchaka #22389: Add contextlib.redirect_stderr() http://bugs.python.org/issue22389 closed by berker.peksag #22407: re.LOCALE is nonsensical for Unicode http://bugs.python.org/issue22407 closed by serhiy.storchaka #22429: asyncio: pending call to 
loop.stop() if a coroutine raises a B http://bugs.python.org/issue22429 closed by python-dev #22473: The gloss on asyncio "future with run_forever" example is conf http://bugs.python.org/issue22473 closed by python-dev #22475: asyncio task get_stack documentation seems to contradict itsel http://bugs.python.org/issue22475 closed by python-dev #22599: traceback: errors in the linecache module at exit http://bugs.python.org/issue22599 closed by haypo #22768: Add a way to get the peer certificate of a SSL Transport http://bugs.python.org/issue22768 closed by haypo #22838: Convert re tests to unittest http://bugs.python.org/issue22838 closed by serhiy.storchaka #22895: test failure introduced by the fix for issue #22462 http://bugs.python.org/issue22895 closed by pitrou #22902: Use 'ip' for uuid.getnode() http://bugs.python.org/issue22902 closed by serhiy.storchaka #22909: [argparse] Using parse_known_args, unknown arg with space in v http://bugs.python.org/issue22909 closed by berker.peksag #22914: Rewrite of Python 2/3 porting HOWTO http://bugs.python.org/issue22914 closed by brett.cannon #22924: Use of deprecated cgi.escape http://bugs.python.org/issue22924 closed by serhiy.storchaka #22943: bsddb: test_queue fails on Windows http://bugs.python.org/issue22943 closed by serhiy.storchaka #22951: unexpected return from float.__repr__() for inf, -inf, nan http://bugs.python.org/issue22951 closed by terry.reedy #22960: xmlrpc.client.ServerProxy() should accept a custom SSL context http://bugs.python.org/issue22960 closed by benjamin.peterson #22963: broken link in PEP 102 http://bugs.python.org/issue22963 closed by berker.peksag #22965: smtplib.py: senderrs[each] -> TypeError: unhashable instance http://bugs.python.org/issue22965 closed by r.david.murray #22966: py_compile: foo.bar.py →
__pycache__/foo.cpython-34.pyc http://bugs.python.org/issue22966 closed by barry #22967: tempfile.py does not work in windows8 http://bugs.python.org/issue22967 closed by zach.ware #22973: hash() function gives the same result for -1 and for -2 argume http://bugs.python.org/issue22973 closed by eric.smith #22974: Make traceback functions support negative limits http://bugs.python.org/issue22974 closed by vlth #22975: Crosses initialization? http://bugs.python.org/issue22975 closed by serhiy.storchaka #22978: Logical Negation of NotImplemented http://bugs.python.org/issue22978 closed by r.david.murray #22979: Use of None in min and max http://bugs.python.org/issue22979 closed by r.david.murray #22987: ssl module documentation: incorrect compatibility matrix http://bugs.python.org/issue22987 closed by pitrou #22994: datetime buggy http://bugs.python.org/issue22994 closed by r.david.murray #22999: Copying emoji to Windows clipboard corrupts string in Python 3 http://bugs.python.org/issue22999 closed by amaury.forgeotdarc From guido at python.org Fri Dec 5 19:42:06 2014 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Dec 2014 10:42:06 -0800 Subject: [Python-Dev] PEP 479 (Change StopIteration handling inside generators) -- hopefully final text Message-ID: For those who haven't followed along, here's the final text of PEP 479, with a brief Acceptance section added. The basic plan hasn't changed, but there's a lot more clarifying text and discussion of a few counter-proposals. Please send suggestions for editorial improvements to peps at python.org. The official reference version of the PEP is at https://www.python.org/dev/peps/pep-0479/; the repo is https://hg.python.org/peps/ (please check out the repo and send diffs relative to the repo if you have edits). 
PEP: 479
Title: Change StopIteration handling inside generators
Version: $Revision$
Last-Modified: $Date$
Author: Chris Angelico , Guido van Rossum <guido at python.org>
Status: Accepted
Type: Standards Track
Content-Type: text/x-rst
Created: 15-Nov-2014
Python-Version: 3.5
Post-History: 15-Nov-2014, 19-Nov-2014, 5-Dec-2014

Abstract
========

This PEP proposes a change to generators: when ``StopIteration`` is raised inside a generator, it is replaced with ``RuntimeError``. (More precisely, this happens when the exception is about to bubble out of the generator's stack frame.) Because the change is backwards incompatible, the feature is initially introduced using a ``__future__`` statement.

Acceptance
==========

This PEP was accepted by the BDFL on November 22. Because of the exceptionally short period from first draft to acceptance, the main objections brought up after acceptance were carefully considered and have been reflected in the "Alternate proposals" section below. However, none of the discussion changed the BDFL's mind and the PEP's acceptance is now final. (Suggestions for clarifying edits are still welcome -- unlike IETF RFCs, the text of a PEP is not cast in stone after its acceptance, although the core design/plan/specification should not change after acceptance.)

Rationale
=========

The interaction of generators and ``StopIteration`` is currently somewhat surprising, and can conceal obscure bugs. An unexpected exception should not result in subtly altered behaviour, but should cause a noisy and easily-debugged traceback. Currently, ``StopIteration`` can be absorbed by the generator construct.

The main goal of the proposal is to ease debugging in the situation where an unguarded ``next()`` call (perhaps several stack frames deep) raises ``StopIteration`` and causes the iteration controlled by the generator to terminate silently. (When another exception is raised, a traceback is printed pinpointing the cause of the problem.)
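The hazard described above is easy to reproduce. In the sketch below (illustrative names), a single unguarded ``next()`` inside a generator silently truncates the iteration under the old semantics; under this proposal (the default since Python 3.7) it surfaces as ``RuntimeError`` instead:

```python
def take_three(it):
    # An unguarded next(): if `it` runs dry, StopIteration escapes this
    # generator's frame and -- pre-PEP-479 -- silently ends iteration
    # for our caller as well.
    for _ in range(3):
        yield next(it)

try:
    print(list(take_three(iter([1, 2]))))   # old semantics: prints [1, 2]
except RuntimeError as e:
    print("RuntimeError:", e)               # new semantics (3.7+)
```

The ``try/except RuntimeError`` is only there to keep the demo self-contained; real code would let the traceback surface.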
This is particularly pernicious in combination with the ``yield from`` construct of PEP 380 [1]_, as it breaks the abstraction that a subgenerator may be factored out of a generator. That PEP notes this limitation, but notes that "use cases for these [are] rare to non-existent". Unfortunately while intentional use is rare, it is easy to stumble on these cases by accident::

    import contextlib

    @contextlib.contextmanager
    def transaction():
        print('begin')
        try:
            yield from do_it()
        except:
            print('rollback')
            raise
        else:
            print('commit')

    def do_it():
        print('Refactored preparations')
        yield  # Body of with-statement is executed here
        print('Refactored finalization')

    def gene():
        for i in range(2):
            with transaction():
                yield i
                # return
                raise StopIteration  # This is wrong
                print('Should not be reached')

    for i in gene():
        print('main: i =', i)

Here factoring out ``do_it`` into a subgenerator has introduced a subtle bug: if the wrapped block raises ``StopIteration``, under the current behavior this exception will be swallowed by the context manager; and, worse, the finalization is silently skipped! Similarly problematic behavior occurs when an ``asyncio`` coroutine raises ``StopIteration``, causing it to terminate silently.

Additionally, the proposal reduces the difference between list comprehensions and generator expressions, preventing surprises such as the one that started this discussion [2]_. Henceforth, the following statements will produce the same result if either produces a result at all::

    a = list(F(x) for x in xs if P(x))
    a = [F(x) for x in xs if P(x)]

With the current state of affairs, it is possible to write a function ``F(x)`` or a predicate ``P(x)`` that causes the first form to produce a (truncated) result, while the second form raises an exception (namely, ``StopIteration``). With the proposed change, both forms will raise an exception at this point (albeit ``RuntimeError`` in the first case and ``StopIteration`` in the second).
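The two forms above can be checked directly under the new semantics (a hypothetical ``check`` standing in for ``F``; on Python 3.7+ the generator expression raises ``RuntimeError`` while the list comprehension lets ``StopIteration`` through unchanged):

```python
def check(x):
    # Stands in for an F(x) that unexpectedly raises StopIteration.
    if x == 3:
        raise StopIteration
    return x

try:
    list(check(x) for x in range(5))       # generator expression
except RuntimeError:
    print("genexp: RuntimeError")

try:
    [check(x) for x in range(5)]           # list comprehension
except StopIteration:
    print("listcomp: StopIteration")
```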
Finally, the proposal also clears up the confusion about how to terminate a generator: the proper way is ``return``, not ``raise StopIteration``.

As an added bonus, the above changes bring generator functions much more in line with regular functions. If you wish to take a piece of code presented as a generator and turn it into something else, you can usually do this fairly simply, by replacing every ``yield`` with a call to ``print()`` or ``list.append()``; however, if there are any bare ``next()`` calls in the code, you have to be aware of them. If the code was originally written without relying on ``StopIteration`` terminating the function, the transformation would be that much easier.

Background information
======================

When a generator frame is (re)started as a result of a ``__next__()`` (or ``send()`` or ``throw()``) call, one of three outcomes can occur:

* A yield point is reached, and the yielded value is returned.
* The frame is returned from; ``StopIteration`` is raised.
* An exception is raised, which bubbles out.

In the latter two cases the frame is abandoned (and the generator object's ``gi_frame`` attribute is set to None).

Proposal
========

If a ``StopIteration`` is about to bubble out of a generator frame, it is replaced with ``RuntimeError``, which causes the ``next()`` call (which invoked the generator) to fail, passing that exception out. From then on it's just like any old exception. [4]_

This affects the third outcome listed above, without altering any other effects. Furthermore, it only affects this outcome when the exception raised is ``StopIteration`` (or a subclass thereof). Note that the proposed replacement happens at the point where the exception is about to bubble out of the frame, i.e. after any ``except`` or ``finally`` blocks that could affect it have been exited.
The ``StopIteration`` raised by returning from the frame is not affected (the point being that ``StopIteration`` means that the generator terminated "normally", i.e. it did not raise an exception).

A subtle issue is what will happen if the caller, having caught the ``RuntimeError``, calls the generator object's ``__next__()`` method again. The answer is that from this point on it will raise ``StopIteration`` -- the behavior is the same as when any other exception was raised by the generator.

Another logical consequence of the proposal: if someone uses ``g.throw(StopIteration)`` to throw a ``StopIteration`` exception into a generator, if the generator doesn't catch it (which it could do using a ``try/except`` around the ``yield``), it will be transformed into ``RuntimeError``.

During the transition phase, the new feature must be enabled per-module using::

    from __future__ import generator_stop

Any generator function constructed under the influence of this directive will have the ``REPLACE_STOPITERATION`` flag set on its code object, and generators with the flag set will behave according to this proposal. Once the feature becomes standard, the flag may be dropped; code should not inspect generators for it.

Consequences for existing code
==============================

This change will affect existing code that depends on ``StopIteration`` bubbling up. The pure Python reference implementation of ``groupby`` [3]_ currently has comments "Exit on ``StopIteration``" where it is expected that the exception will propagate and then be handled. This will be unusual, but not unknown, and such constructs will fail. Other examples abound, e.g. [6]_, [7]_.
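The ``g.throw(StopIteration)`` consequence noted above can be observed directly on an interpreter where the new behaviour is in effect (the default in 3.7+):

```python
def g():
    yield 1
    yield 2

gen = g()
next(gen)                      # advance to the first yield
try:
    gen.throw(StopIteration)   # thrown in at the paused yield, uncaught
except RuntimeError:
    print("transformed into RuntimeError")
```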
(Nick Coghlan comments: """If you wanted to factor out a helper function that terminated the generator you'd have to do "return yield from helper()" rather than just "helper()".""")

There are also examples of generator expressions floating around that rely on a ``StopIteration`` raised by the expression, the target or the predicate (rather than by the ``__next__()`` call implied in the ``for`` loop proper).

Writing backwards and forwards compatible code
----------------------------------------------

With the exception of hacks that raise ``StopIteration`` to exit a generator expression, it is easy to write code that works equally well under older Python versions as under the new semantics. This is done by enclosing those places in the generator body where a ``StopIteration`` is expected (e.g. bare ``next()`` calls or in some cases helper functions that are expected to raise ``StopIteration``) in a ``try/except`` construct that returns when ``StopIteration`` is raised. The ``try/except`` construct should appear directly in the generator function; doing this in a helper function that is not itself a generator does not work. If ``raise StopIteration`` occurs directly in a generator, simply replace it with ``return``.

Examples of breakage
--------------------

Generators which explicitly raise ``StopIteration`` can generally be changed to simply return instead. This will be compatible with all existing Python versions, and will not be affected by ``__future__``. Here are some illustrations from the standard library.
Lib/ipaddress.py::

    if other == self:
        raise StopIteration

Becomes::

    if other == self:
        return

In some cases, this can be combined with ``yield from`` to simplify
the code, such as Lib/difflib.py::

    if context is None:
        while True:
            yield next(line_pair_iterator)

Becomes::

    if context is None:
        yield from line_pair_iterator
        return

(The ``return`` is necessary for a strictly-equivalent translation,
though in this particular file, there is no further code, and the
``return`` can be omitted.)  For compatibility with pre-3.3 versions
of Python, this could be written with an explicit ``for`` loop::

    if context is None:
        for line in line_pair_iterator:
            yield line
        return

More complicated iteration patterns will need explicit ``try/except``
constructs.  For example, a hypothetical parser like this::

    def parser(f):
        while True:
            data = next(f)
            while True:
                line = next(f)
                if line == "- end -": break
                data += line
            yield data

would need to be rewritten as::

    def parser(f):
        while True:
            try:
                data = next(f)
                while True:
                    line = next(f)
                    if line == "- end -": break
                    data += line
                yield data
            except StopIteration:
                return

or possibly::

    def parser(f):
        for data in f:
            while True:
                line = next(f)
                if line == "- end -": break
                data += line
            yield data

The latter form obscures the iteration by purporting to iterate over
the file with a ``for`` loop, but then also fetches more data from
the same iterator during the loop body.  It does, however, clearly
differentiate between a "normal" termination (``StopIteration``
instead of the initial line) and an "abnormal" termination (failing
to find the end marker in the inner loop, which will now raise
``RuntimeError``).
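The rewritten ``parser`` above follows the backwards- and
forwards-compatible pattern described earlier.  A smaller,
self-contained sketch of the same idea (the function is invented for
illustration):

```python
def pairwise_sums(it):
    # Portable under both old and new semantics: the bare next()
    # calls that may legitimately exhaust `it` are guarded directly
    # inside the generator body, and exhaustion becomes an explicit
    # return instead of an escaping StopIteration.
    while True:
        try:
            a = next(it)
            b = next(it)
        except StopIteration:
            return  # clean termination on any Python version
        yield a + b

print(list(pairwise_sums(iter([1, 2, 3, 4, 5]))))  # [3, 7]
```

The trailing odd element is silently dropped when the second
``next()`` fails, which is exactly the kind of expected exhaustion
the ``try/except`` is there to absorb.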
This effect of ``StopIteration`` has been used to cut a generator
expression short, creating a form of ``takewhile``::

    def stop():
        raise StopIteration
    print(list(x for x in range(10) if x < 5 or stop()))
    # prints [0, 1, 2, 3, 4]

Under the current proposal, this form of non-local flow control is
not supported, and would have to be rewritten in statement form::

    def gen():
        for x in range(10):
            if x >= 5: return
            yield x
    print(list(gen()))
    # prints [0, 1, 2, 3, 4]

While this is a small loss of functionality, it is functionality that
often comes at the cost of readability, and just as ``lambda`` has
restrictions compared to ``def``, so does a generator expression have
restrictions compared to a generator function.  In many cases, the
transformation to full generator function will be trivially easy, and
may improve structural clarity.


Explanation of generators, iterators, and StopIteration
=======================================================

Under this proposal, generators and iterators would be distinct, but
related, concepts.  Like the mixing of text and bytes in Python 2,
the mixing of generators and iterators has resulted in certain
perceived conveniences, but proper separation will make bugs more
visible.

An iterator is an object with a ``__next__`` method.  Like many other
special methods, it may either return a value, or raise a specific
exception -- in this case, ``StopIteration`` -- to signal that it has
no value to return.  In this, it is similar to ``__getattr__`` (can
raise ``AttributeError``), ``__getitem__`` (can raise ``KeyError``),
and so on.  A helper function for an iterator can be written to
follow the same protocol; for example::

    def helper(x, y):
        if x > y: return 1 / (x - y)
        raise StopIteration

    def __next__(self):
        if self.a: return helper(self.b, self.c)
        return helper(self.d, self.e)

Both forms of signalling are carried through: a returned value is
returned, an exception bubbles up.  The helper is written to match
the protocol of the calling function.
A generator function is one which contains a ``yield`` expression.
Each time it is (re)started, it may either yield a value, or return
(including "falling off the end").  A helper function for a generator
can also be written, but it must also follow generator protocol::

    def helper(x, y):
        if x > y: yield 1 / (x - y)

    def gen(self):
        if self.a: return (yield from helper(self.b, self.c))
        return (yield from helper(self.d, self.e))

In both cases, any unexpected exception will bubble up.  Due to the
nature of generators and iterators, an unexpected ``StopIteration``
inside a generator will be converted into ``RuntimeError``, but
beyond that, all exceptions will propagate normally.


Transition plan
===============

- Python 3.5: Enable new semantics under ``__future__`` import;
  silent deprecation warning if ``StopIteration`` bubbles out of a
  generator not under ``__future__`` import.

- Python 3.6: Non-silent deprecation warning.

- Python 3.7: Enable new semantics everywhere.


Alternate proposals
===================

Raising something other than RuntimeError
-----------------------------------------

Rather than the generic ``RuntimeError``, it might make sense to
raise a new exception type ``UnexpectedStopIteration``.  This has the
downside of implicitly encouraging that it be caught; the correct
action is to catch the original ``StopIteration``, not the chained
exception.

Supplying a specific exception to raise on return
-------------------------------------------------

Nick Coghlan suggested a means of providing a specific
``StopIteration`` instance to the generator; if any other instance of
``StopIteration`` is raised, it is an error, but if that particular
one is raised, the generator has properly completed.  This
subproposal has been withdrawn in favour of better options, but is
retained for reference.
Making return-triggered StopIterations obvious
----------------------------------------------

For certain situations, a simpler and fully backward-compatible
solution may be sufficient: when a generator returns, instead of
raising ``StopIteration``, it raises a specific subclass of
``StopIteration`` (``GeneratorReturn``) which can then be detected.
If it is not that subclass, it is an escaping exception rather than a
return statement.

The inspiration for this alternative proposal was Nick's observation
[8]_ that if an ``asyncio`` coroutine [9]_ accidentally raises
``StopIteration``, it currently terminates silently, which may
present a hard-to-debug mystery to the developer.  The main proposal
turns such accidents into clearly distinguishable ``RuntimeError``
exceptions, but if that is rejected, this alternate proposal would
enable ``asyncio`` to distinguish between a ``return`` statement and
an accidentally-raised ``StopIteration`` exception.

Of the three outcomes listed above, two change:

* If a yield point is reached, the value, obviously, would still be
  returned.

* If the frame is returned from, ``GeneratorReturn`` (rather than
  ``StopIteration``) is raised.

* If an instance of ``GeneratorReturn`` would be raised, instead an
  instance of ``StopIteration`` would be raised.  Any other exception
  bubbles up normally.

In the third case, the ``StopIteration`` would have the ``value`` of
the original ``GeneratorReturn``, and would reference the original
exception in its ``__cause__``.  If uncaught, this would clearly show
the chaining of exceptions.

This alternative does *not* affect the discrepancy between generator
expressions and list comprehensions, but allows generator-aware code
(such as the ``contextlib`` and ``asyncio`` modules) to reliably
differentiate between the second and third outcomes listed above.
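Driver code could consume the distinction along these lines.  This is
a hypothetical sketch: ``GeneratorReturn`` does not actually exist,
so a stand-in subclass is defined here, and a plain callable
simulates a generator's ``__next__``:

```python
class GeneratorReturn(StopIteration):
    """Stand-in for the proposed subclass; not a real builtin."""

def drive(step):
    # `step` plays the role of a generator's __next__.  Under the
    # alternate proposal a frame return raises GeneratorReturn, so
    # anything else arriving as StopIteration is an accident.
    try:
        while True:
            step()
    except GeneratorReturn as e:   # a genuine `return value`
        return ("returned", e.value)
    except StopIteration:          # accidentally raised or leaked
        return ("accident", None)

state = {"n": 2}
def fake_next():
    # Simulates a generator that runs twice, then returns "done".
    if state["n"] == 0:
        raise GeneratorReturn("done")
    state["n"] -= 1

print(drive(fake_next))  # ('returned', 'done')
```

Because ``GeneratorReturn`` subclasses ``StopIteration``, its
``except`` clause must come first; the ``value`` attribute is
inherited from ``StopIteration``.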
However, once code exists that depends on this distinction between
``GeneratorReturn`` and ``StopIteration``, a generator that invokes
another generator and relies on the latter's ``StopIteration`` to
bubble out would still be potentially wrong, depending on the use
made of the distinction between the two exception types.


Converting the exception inside next()
--------------------------------------

Mark Shannon suggested [12]_ that the problem could be solved in
``next()`` rather than at the boundary of generator functions.  By
having ``next()`` catch ``StopIteration`` and raise instead
``ValueError``, all unexpected ``StopIteration`` bubbling would be
prevented; however, the backward-incompatibility concerns are far
more serious than for the current proposal, as every ``next()`` call
now needs to be rewritten to guard against ``ValueError`` instead of
``StopIteration`` -- not to mention that there is no way to write one
block of code which reliably works on multiple versions of Python.
(Using a dedicated exception type, perhaps subclassing ``ValueError``,
would help this; however, all code would still need to be rewritten.)


Sub-proposal: decorator to explicitly request current behaviour
---------------------------------------------------------------

Nick Coghlan suggested [13]_ that the situations where the current
behaviour is desired could be supported by means of a decorator::

    from itertools import allow_implicit_stop

    @allow_implicit_stop
    def my_generator():
        ...
        yield next(it)
        ...

Which would be semantically equivalent to::

    def my_generator():
        try:
            ...
            yield next(it)
            ...
        except StopIteration:
            return

but be faster, as it could be implemented by simply permitting the
``StopIteration`` to bubble up directly.
Single-source Python 2/3 code would also benefit in a 3.7+ world,
since libraries like six and python-future could just define their
own version of "allow_implicit_stop" that referred to the new builtin
in 3.5+, and was implemented as an identity function in other
versions.

However, due to the implementation complexities required, the ongoing
compatibility issues created, the subtlety of the decorator's effect,
and the fact that it would encourage the "quick-fix" solution of just
slapping the decorator onto all generators instead of properly fixing
the code in question, this sub-proposal has been rejected. [14]_


Criticism
=========

Unofficial and apocryphal statistics suggest that this is seldom, if
ever, a problem. [5]_  Code does exist which relies on the current
behaviour (e.g. [3]_, [6]_, [7]_), and there is the concern that this
would be unnecessary code churn to achieve little or no gain.

Steven D'Aprano started an informal survey on comp.lang.python [10]_;
at the time of writing only two responses have been received: one was
in favor of changing list comprehensions to match generator
expressions (!), the other was in favor of this PEP's main proposal.

The existing model has been compared to the perfectly-acceptable
issues inherent to every other case where an exception has special
meaning.  For instance, an unexpected ``KeyError`` inside a
``__getitem__`` method will be interpreted as failure, rather than
permitted to bubble up.

However, there is a difference.  Special methods use ``return`` to
indicate normality, and ``raise`` to signal abnormality; generators
``yield`` to indicate data, and ``return`` to signal the abnormal
state.  This makes explicitly raising ``StopIteration`` entirely
redundant, and potentially surprising.  If other special methods had
dedicated keywords to distinguish between their return paths, they
too could turn unexpected exceptions into ``RuntimeError``; the fact
that they cannot should not preclude generators from doing so.
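The ``__getitem__`` analogy above can be made concrete with a small
invented example: a ``KeyError`` escaping a buggy ``__getitem__`` is
silently read by callers as "key absent", just as a stray
``StopIteration`` escaping a generator is read as normal exhaustion:

```python
data = {"NAME": "Ada"}  # keys are stored upper-case

class Row:
    def __getitem__(self, key):
        # Bug: this should be key.upper().  The KeyError raised by
        # the failed lookup escapes __getitem__, where the caller
        # cannot tell it apart from a genuinely missing key.
        return data[key.lower()]

try:
    Row()["name"]
    outcome = "found"
except KeyError:
    outcome = "missing"  # the real bug is misreported as absence
print(outcome)  # missing
```

The difference argued above is that ``__getitem__`` at least uses
``raise`` only for abnormality, whereas a generator's ``return`` is
itself spelled as a raised ``StopIteration`` today.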
References
==========

.. [1] PEP 380 - Syntax for Delegating to a Subgenerator
   (https://www.python.org/dev/peps/pep-0380)

.. [2] Initial mailing list comment
   (https://mail.python.org/pipermail/python-ideas/2014-November/029906.html)

.. [3] Pure Python implementation of groupby
   (https://docs.python.org/3/library/itertools.html#itertools.groupby)

.. [4] Proposal by GvR
   (https://mail.python.org/pipermail/python-ideas/2014-November/029953.html)

.. [5] Response by Steven D'Aprano
   (https://mail.python.org/pipermail/python-ideas/2014-November/029994.html)

.. [6] Split a sequence or generator using a predicate
   (http://code.activestate.com/recipes/578416-split-a-sequence-or-generator-using-a-predicate/)

.. [7] wrap unbounded generator to restrict its output
   (http://code.activestate.com/recipes/66427-wrap-unbounded-generator-to-restrict-its-output/)

.. [8] Post from Nick Coghlan mentioning asyncio
   (https://mail.python.org/pipermail/python-ideas/2014-November/029961.html)

.. [9] Coroutines in asyncio
   (https://docs.python.org/3/library/asyncio-task.html#coroutines)

.. [10] Thread on comp.lang.python started by Steven D'Aprano
   (https://mail.python.org/pipermail/python-list/2014-November/680757.html)

.. [11] Tracker issue with Proof-of-Concept patch
   (http://bugs.python.org/issue22906)

.. [12] Post from Mark Shannon with alternate proposal
   (https://mail.python.org/pipermail/python-dev/2014-November/137129.html)

.. [13] Idea from Nick Coghlan
   (https://mail.python.org/pipermail/python-dev/2014-November/137201.html)

.. [14] Rejection by GvR
   (https://mail.python.org/pipermail/python-dev/2014-November/137243.html)


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bcannon at gmail.com  Fri Dec  5 21:04:53 2014
From: bcannon at gmail.com (Brett Cannon)
Date: Fri, 05 Dec 2014 20:04:53 +0000
Subject: [Python-Dev] My thinking about the development process
Message-ID: 

This is a bit long, as I wrote it as if it were a blog post to try to
give background info on my thinking, etc.  The TL;DR folks should
start at the "Ideal Scenario" section and read to the end.

P.S.: This is in Markdown and I have put it up at
https://gist.github.com/brettcannon/a9c9a5989dc383ed73b4 if you want
a nicer formatted version for reading.

# History lesson

Since I signed up for the python-dev mailing list way back in June
2002, there seems to be a cycle where we as a group come to a
realization that our current software development process has not
kept up with modern practices and could stand an update.  For me this
was first shown when we moved from SourceForge to our own
infrastructure, then again when we moved from Subversion to Mercurial
(I led both of these initiatives, so it's somewhat a tradition/curse
that I find myself in this position yet again).  And so we again find
ourselves at the point of realizing that we are not keeping up with
current practices and thus need to evaluate how we can improve our
situation.

# Where we are now

It should be realized that we have two sets of users of our
development process: contributors and core developers (the latter of
whom can play both roles).  A rough outline of our current,
recommended process goes something like this:

1. Contributor clones a repository from hg.python.org
2. Contributor makes desired changes
3. Contributor generates a patch
4. Contributor creates an account on bugs.python.org and signs the
   [contributor agreement](https://www.python.org/psf/contrib/contrib-form/)
5. Contributor creates an issue on bugs.python.org (if one does not
   already exist) and uploads the patch
6. Core developer evaluates the patch, possibly leaving comments
   through our [custom version of Rietveld](http://bugs.python.org/review/)
7. Contributor revises the patch based on feedback and uploads a new
   patch
8. Core developer downloads the patch and applies it to a clean clone
9. Core developer runs the tests
10. Core developer does one last `hg pull -u` and then commits the
    changes to various branches

I think we can all agree it works to some extent, but isn't exactly
smooth.  There are multiple steps in there -- in full or in part --
that can be automated.  There is room to improve everyone's lives.

And we can't forget the people who help keep all of this running as
well.  There are those that manage the SSH keys, the issue tracker,
the review tool, hg.python.org, and the email system that lets us
know when stuff happens on any of these other systems.  The impact on
them needs to also be considered.

## Contributors

I see two scenarios for contributors to optimize for.  There's the
simple spelling mistake patches and then there's the code change
patches.  The former is the kind of thing that you can do in a
browser without much effort and should be a no-brainer commit/reject
decision for a core developer.  This is what the GitHub/Bitbucket
camps have been promoting their solution for solving while leaving
the cpython repo alone.  Unfortunately the bulk of our documentation
is in the Doc/ directory of cpython.  While it's nice to think about
moving the devguide, peps, and even breaking out the tutorial to
repos hosted on Bitbucket/GitHub, everything else is in Doc/
(language reference, howtos, stdlib, C API, etc.).  So unless we want
to completely break all of Doc/ out of the cpython repo and have core
developers willing to edit two separate repos when making changes
that impact code **and** docs, moving only a subset of docs feels
like a band-aid solution that ignores the big white elephant in the
room: the cpython repo, where the bulk of patches are targeted.
For the code change patches, contributors need an easy way to get a
hold of the code and get their changes to the core developers.  After
that it's things like letting contributors know that their patch
doesn't apply cleanly, doesn't pass tests, etc.  As of right now
getting the patch into the issue tracker is a bit manual but nothing
crazy.  The real issue in this scenario is core developer response
time.

## Core developers

There is a finite amount of time that core developers get to
contribute to Python and it fluctuates greatly.  This means that if a
process can be found which allows core developers to spend less time
doing mechanical work and more time doing things that can't be
automated -- namely code reviews -- then the throughput of patches
being accepted/rejected will increase.  This also impacts any
increased patch submission rate that comes from improving the
situation for contributors, because if the throughput doesn't change
then there will simply be more patches sitting in the issue tracker
and that doesn't benefit anyone.

# My ideal scenario

If I had an infinite amount of resources (money, volunteers, time,
etc.), this would be my ideal scenario:

1. Contributor gets code from wherever; easiest to just say "fork on
GitHub or Bitbucket" as they would be official mirrors of
hg.python.org and are updated after every commit, but could clone
hg.python.org/cpython if they wanted
2. Contributor makes edits; if they cloned on Bitbucket or GitHub
then they have browser edit access already
3. Contributor creates an account at bugs.python.org and signs the
CLA
3. The contributor creates an issue at bugs.python.org (probably the
one piece of infrastructure we all agree is better than the other
options, although its workflow could use an update)
4. If the contributor used Bitbucket or GitHub, they send a pull
request with the issue # in the PR message
5. bugs.python.org notices the PR, grabs a patch for it, and puts it
on bugs.python.org for code review
6. CI runs on the patch based on what Python versions are specified
in the issue tracker, letting everyone know if it applied cleanly,
passed tests on the OSs that would be affected, and also got a test
coverage report
7. Core developer does a code review
8. Contributor updates their code based on the code review and the
updated patch gets pulled by bugs.python.org automatically and CI
runs again
9. Once the patch is acceptable and assuming the patch applies
cleanly to all versions to commit to, the core developer clicks a
"Commit" button, fills in a commit message and NEWS entry, and
everything gets committed (if the patch can't apply cleanly then the
core developer does it the old-fashioned way, or maybe auto-generates
a new PR which can be manually touched up so it does apply cleanly?)

Basically the ideal scenario lets contributors use whatever tools and
platforms they want and provides as much automated support as
possible to make sure their code is tip-top before and during code
review, while core developers can review and commit patches so easily
that they can do their job from a beach with a tablet and some WiFi.

## Where the current proposed solutions seem to fall short

### GitHub/Bitbucket

Basically GitHub/Bitbucket is a win for contributors but doesn't buy
core developers that much.  GitHub/Bitbucket gives contributors the
easy cloning, drive-by patches, CI, and PRs.  Core developers get a
code review tool -- I'm counting Rietveld as deprecated after Guido's
comments about the code's maintenance issues -- and push-button
commits **only for single-branch changes**.  But for any patch that
crosses branches we don't really gain anything.  At best core
developers tell a contributor "please send your PR against 3.4",
push-button merge it, update a local clone, merge from 3.4 to
default, do the usual stuff, commit, and then push; that still keeps
me off the beach, though, so that doesn't get us the whole way.
You could force people to submit two PRs, but I don't see that
flying.  Maybe some tool could be written that automatically handles
the merge/commit across branches once the initial PR is in?  Or
automatically create a PR that core developers can touch up as
necessary and then accept as well?  Regardless, some solution is
necessary to handle branch-crossing PRs.

As for GitHub vs. Bitbucket, I personally don't care.  I like
GitHub's interface more, but that's personal taste.  I like hg more
than git, but that's also personal taste (and I consider a transition
from hg to git a hassle, but not a deal-breaker and also not a win).
It is unfortunate, though, that under this scenario we would have to
choose only one platform.

It's also unfortunate both are closed-source, but that's not a
deal-breaker, just a knock against them if the decision is close.

### Our own infrastructure

The shortcoming here is the need for developers, developers,
developers!  Everything outlined in the ideal scenario is totally
doable on our own infrastructure with enough code and time
(donated/paid-for infrastructure shouldn't be an issue).  But
historically that code and time have not materialized.  Our code
review tool is a fork that probably should be replaced, as only
Martin von Löwis can maintain it.  Basically Ezio Melotti maintains
the issue tracker's code.  We don't exactly have a ton of people
constantly going "I'm so bored because everything for Python's
development infrastructure gets sorted so quickly!"  A perfect
example is that R. David Murray came up with a nice update for our
workflow after PyCon but then ran out of time after mostly defining
it, and nothing ever became of it (maybe we can rectify that at
PyCon?).  Eric Snow has pointed out how he has written similar code
for pulling PRs from, I think, GitHub to another code review tool,
but that doesn't magically make it work in our infrastructure or get
someone to write it and help maintain it (no offense, Eric).
IOW our infrastructure can do anything, but it can't run on hopes and
dreams.  Commitments from many people to making this happen by a
certain deadline will be needed so as to not allow it to drag on
forever.  People would also have to commit to continued maintenance
to make this viable long-term.

# Next steps

I'm thinking first draft PEPs by February 1 to know who's all-in (8
weeks away), all details worked out in final PEPs and whatever is
required to prove to me it will work by the PyCon language summit (4
months away).  I make a decision by May 1, and then implementation
aims to be done by the time 3.5.0 is cut so we can switch over
shortly thereafter (9 months away).  Sound like a reasonable
timeline?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From donald at stufft.io  Fri Dec  5 21:24:34 2014
From: donald at stufft.io (Donald Stufft)
Date: Fri, 5 Dec 2014 15:24:34 -0500
Subject: [Python-Dev] My thinking about the development process
In-Reply-To: 
References: 
Message-ID: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io>

> On Dec 5, 2014, at 3:04 PM, Brett Cannon wrote:
>

This looks like a pretty good write-up; it seems to fairly evaluate
the various sides and the various concerns.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bcannon at gmail.com  Fri Dec  5 22:04:48 2014
From: bcannon at gmail.com (Brett Cannon)
Date: Fri, 05 Dec 2014 21:04:48 +0000
Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated
Message-ID: 

It now promotes using tooling as much as possible to automate the
process of making code Python 2/3 source-compatible:
https://docs.python.org/3.5/howto/pyporting.html

Blog post about it at
http://nothingbutsnark.svbtle.com/commentary-on-getting-your-code-to-run-on-python-23

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From benjamin at python.org  Fri Dec  5 22:07:44 2014
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 05 Dec 2014 16:07:44 -0500
Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated
In-Reply-To: 
References: 
Message-ID: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com>

On Fri, Dec 5, 2014, at 16:04, Brett Cannon wrote:
> It now promotes using tooling as much as possible to automate the process
> of making code by Python 2/3 source-compatible:
> https://docs.python.org/3.5/howto/pyporting.html

Are you going to update the 2.7 copy of the howto, too?

From ericsnowcurrently at gmail.com  Fri Dec  5 23:17:35 2014
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 5 Dec 2014 15:17:35 -0700
Subject: [Python-Dev] My thinking about the development process
In-Reply-To: 
References: 
Message-ID: 

Very nice, Brett.

On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote:
> And we can't forget the people who help keep all of this running as well.
> There are those that manage the SSH keys, the issue tracker, the review
> tool, hg.python.org, and the email system that let's use know when stuff
> happens on any of these other systems. The impact on them needs to also be
> considered.

It sounds like Guido would rather have as much of this as possible
done by a provider rather than relying on volunteers.  That makes
sense, though there are concerns about control of certain assets.
However, that applies only to some, like hg.python.org.

> ## Contributors
> I see two scenarios for contributors to optimize for. There's the simple
> spelling mistake patches and then there's the code change patches. The
> former is the kind of thing that you can do in a browser without much
> effort and should be a no-brainer commit/reject decision for a core
> developer. This is what the GitHub/Bitbucket camps have been promoting
> their solution for solving while leaving the cpython repo alone.
> Unfortunately the bulk of our
> documentation is in the Doc/ directory of cpython. While it's nice to
> think about moving the devguide, peps, and even breaking out the tutorial
> to repos hosting on Bitbucket/GitHub, everything else is in Doc/ (language
> reference, howtos, stdlib, C API, etc.). So unless we want to completely
> break all of Doc/ out of the cpython repo and have core developers willing
> to edit two separate repos when making changes that impact code **and**
> docs, moving only a subset of docs feels like a band-aid solution that
> ignores the big, white elephant in the room: the cpython repo, where a
> bulk of patches are targeting.

With your ideal scenario this would be a moot point, right?  There
would be no need to split out doc-related repos.

> For the code change patches, contributors need an easy way to get a hold
> of the code and get their changes to the core developers. After that it's
> things like letting contributors knowing that their patch doesn't apply
> cleanly, doesn't pass tests, etc.

This is probably more work than it seems at first.

> As of right now getting the patch into the
> issue tracker is a bit manual but nothing crazy. The real issue in this
> scenario is core developer response time.
>
> ## Core developers
> There is a finite amount of time that core developers get to contribute to
> Python and it fluctuates greatly. This means that if a process can be
> found which allows core developers to spend less time doing mechanical
> work and more time doing things that can't be automated -- namely code
> reviews -- then the throughput of patches being accepted/rejected will
> increase. This also impacts any increased patch submission rate that comes
> from improving the situation for contributors because if the throughput
> doesn't change then there will simply be more patches sitting in the issue
> tracker and that doesn't benefit anyone.

This is the key concern I have with only addressing the contributor
side of things.
I'm all for increasing contributions, but not if they are just going
to rot on the tracker and we end up with disillusioned contributors.

> # My ideal scenario
> If I had an infinite amount of resources (money, volunteers, time, etc.),
> this would be my ideal scenario:
>
> 1. Contributor gets code from wherever; easiest to just say "fork on
> GitHub or Bitbucket" as they would be official mirrors of hg.python.org
> and are updated after every commit, but could clone hg.python.org/cpython
> if they wanted
> 2. Contributor makes edits; if they cloned on Bitbucket or GitHub then
> they have browser edit access already
> 3. Contributor creates an account at bugs.python.org and signs the CLA

There's no real way around this, is there?  I suppose account
creation *could* be automated relative to a github or bitbucket user,
though it probably isn't worth the effort.  However, the CLA part is
pretty unavoidable.

> 3. The contributor creates an issue at bugs.python.org (probably the one
> piece of infrastructure we all agree is better than the other options,
> although its workflow could use an update)

I wonder if issue creation from a PR (where no issue # is in the
message) could be automated too without a lot of extra work.

> 4. If the contributor used Bitbucket or GitHub, they send a pull request
> with the issue # in the PR message
> 5. bugs.python.org notices the PR, grabs a patch for it, and puts it on
> bugs.python.org for code review
> 6. CI runs on the patch based on what Python versions are specified in the
> issue tracker, letting everyone know if it applied cleanly, passed tests
> on the OSs that would be affected, and also got a test coverage report
> 7. Core developer does a code review
> 8. Contributor updates their code based on the code review and the updated
> patch gets pulled by bugs.python.org automatically and CI runs again
> 9. Once the patch is acceptable and assuming the patch applies cleanly to
> all versions to commit to, the core developer clicks a "Commit" button,
> fills in a commit message and NEWS entry, and everything gets committed
> (if the patch can't apply cleanly then the core developer does it the
> old-fashion way, or maybe auto-generate a new PR which can be manually
> touched up so it does apply cleanly?)

6-9 sounds a lot like PEP 462. :)  This seems like the part that
would win us the most.

> Basically the ideal scenario lets contributors use whatever tools and
> platforms that they want and provides as much automated support as
> possible to make sure their code is tip-top before and during code review
> while core developers can review and commit patches so easily that they
> can do their job from a beach with a tablet and some WiFi.

Sign me up!

> ## Where the current proposed solutions seem to fall short
> ### GitHub/Bitbucket
> Basically GitHub/Bitbucket is a win for contributors but doesn't buy core
> developers that much. GitHub/Bitbucket gives contributors the easy
> cloning, drive-by patches, CI, and PRs. Core developers get a code review
> tool -- I'm counting Rietveld as deprecated after Guido's comments about
> the code's maintenance issues -- and push-button commits **only for single
> branch changes**. But for any patch that crosses branches we don't really
> gain anything. At best core developers tell a contributor "please send
> your PR against 3.4", push-button merge it, update a local clone, merge
> from 3.4 to default, do the usual stuff, commit, and then push; that still
> keeps me off the beach, though, so that doesn't get us the whole way.

This will probably be one of the trickiest parts.

> You could force
> people to submit two PRs, but I don't see that flying. Maybe some tool
> could be written that automatically handles the merge/commit across
> branches once the initial PR is in?
Or automatically create a PR that core developers can > touch up as necessary and then accept that as well? Regardless, some > solution is necessary to handle branch-crossing PRs. > > As for GitHub vs. Bitbucket, I personally don't care. I like GitHub's > interface more, but that's personal taste. I like hg more than git, but > that's also personal taste (and I consider a transition from hg to git a > hassle but not a deal-breaker but also not a win). It is unfortunate, > though, that under this scenario we would have to choose only one platform. > > It's also unfortunate both are closed-source, but that's not a deal-breaker, > just a knock against if the decision is close. > > ### Our own infrastructure > The shortcoming here is the need for developers, developers, developers! > Everything outlined in the ideal scenario is totally doable on our own > infrastructure with enough code and time (donated/paid-for infrastructure > shouldn't be an issue). But historically that code and time has not > materialized. Our code review tool is a fork that probably should be > replaced as only Martin von Löwis can maintain it. Basically Ezio Melotti > maintains the issue tracker's code. Doing something about those two tools is something to consider. Would it be out of scope for this discussion or any resulting PEPs? I have opinions here, but I'd rather not sidetrack the discussion. > We don't exactly have a ton of people > constantly going "I'm so bored because everything for Python's development > infrastructure gets sorted so quickly!" A perfect example is that R. David > Murray came up with a nice update for our workflow after PyCon but then ran > out of time after mostly defining it and nothing ever became of it (maybe we > can rectify that at PyCon?).
Eric Snow has pointed out how he has written > similar code for pulling PRs from I think GitHub to another code review > tool, but that doesn't magically make it work in our infrastructure or get > someone to write it and help maintain it (no offense, Eric). None taken. I was thinking the same thing when I wrote that. :) > > IOW our infrastructure can do anything, but it can't run on hopes and > dreams. Commitments from many people to making this happen by a certain > deadline will be needed so as to not allow it to drag on forever. People > would also have to commit to continued maintenance to make this viable > long-term. > > # Next steps > I'm thinking first draft PEPs by February 1 to know who's all-in (8 weeks > away), all details worked out in final PEPs and whatever is required to > prove to me it will work by the PyCon language summit (4 months away). I > make a decision by May 1, and > then implementation aims to be done by the time 3.5.0 is cut so we can > switch over shortly thereafter (9 months away). Sound like a reasonable > timeline? Sounds reasonable to me, but I don't have plans to champion a PEP. :) I could probably help with the tooling between GitHub/Bitbucket though. -eric From bcannon at gmail.com Sat Dec 6 00:10:17 2014 From: bcannon at gmail.com (Brett Cannon) Date: Fri, 05 Dec 2014 23:10:17 +0000 Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated References: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com> Message-ID: On Fri Dec 05 2014 at 4:07:46 PM Benjamin Peterson wrote: > > > On Fri, Dec 5, 2014, at 16:04, Brett Cannon wrote: > > It now promotes using tooling as much as possible to automate the process > > of making code Python 2/3 source-compatible: > > https://docs.python.org/3.5/howto/pyporting.html > > Are you going to update the 2.7 copy of the howto, too? > Have not decided yet. All the Google searches I have tried that bring up the HOWTO use the Python 3 version.
Plus I know people are going to find mistakes that require fixing so I would rather wait until it stabilizes before I bother backporting to 2.7. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Dec 6 00:16:35 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 5 Dec 2014 18:16:35 -0500 Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated In-Reply-To: References: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com> Message-ID: > > On Dec 5, 2014, at 6:10 PM, Brett Cannon wrote: > > > > On Fri Dec 05 2014 at 4:07:46 PM Benjamin Peterson > wrote: > > > On Fri, Dec 5, 2014, at 16:04, Brett Cannon wrote: > > It now promotes using tooling as much as possible to automate the process > > of making code by Python 2/3 source-compatible: > > https://docs.python.org/3.5/howto/pyporting.html > > Are you going to update the 2.7 copy of the howto, too? > > Have not decided yet. All the Google searches I have tried that bring up the HOWTO use the Python 3 version. Plus I know people are going to find mistakes that require fixing so I would rather wait until it stabilizes before I bother backporting to 2.7. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/donald%40stufft.io Do we need to update it? Can it just redirect to the 3 version? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From graffatcolmingov at gmail.com Sat Dec 6 00:32:13 2014 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 5 Dec 2014 17:32:13 -0600 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: Message-ID: On Dec 5, 2014 4:18 PM, "Eric Snow" wrote: > > Very nice, Brett. 
> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote: > > And we can't forget the people who help keep all of this running as well. > > There are those that manage the SSH keys, the issue tracker, the review > > tool, hg.python.org, and the email system that lets us know when stuff > > happens on any of these other systems. The impact on them needs to also be > > considered. > > It sounds like Guido would rather as much of this was done by a > provider rather than relying on volunteers. That makes sense though > there are concerns about control of certain assets. However, that > applies only to some, like hg.python.org. > > > > > ## Contributors > > I see two scenarios for contributors to optimize for. There's the simple > > spelling mistake patches and then there's the code change patches. The > > former is the kind of thing that you can do in a browser without much effort > > and should be a no-brainer commit/reject decision for a core developer. This > > is what the GitHub/Bitbucket camps have been promoting their solution for > > solving while leaving the cpython repo alone. Unfortunately the bulk of our > > documentation is in the Doc/ directory of cpython. While it's nice to think > > about moving the devguide, peps, and even breaking out the tutorial to repos > > hosting on Bitbucket/GitHub, everything else is in Doc/ (language reference, > > howtos, stdlib, C API, etc.). So unless we want to completely break all of > > Doc/ out of the cpython repo and have core developers willing to edit two > > separate repos when making changes that impact code **and** docs, moving > > only a subset of docs feels like a band-aid solution that ignores the big, > > white elephant in the room: the cpython repo, where a bulk of patches are > > targeting. > > With your ideal scenario this would be a moot point, right? There > would be no need to split out doc-related repos.
> > > > > For the code change patches, contributors need an easy way to get a hold of > > the code and get their changes to the core developers. After that it's > > things like letting contributors knowing that their patch doesn't apply > > cleanly, doesn't pass tests, etc. > > This is probably more work than it seems at first. > > > As of right now getting the patch into the > > issue tracker is a bit manual but nothing crazy. The real issue in this > > scenario is core developer response time. > > > > ## Core developers > > There is a finite amount of time that core developers get to contribute to > > Python and it fluctuates greatly. This means that if a process can be found > > which allows core developers to spend less time doing mechanical work and > > more time doing things that can't be automated -- namely code reviews -- > > then the throughput of patches being accepted/rejected will increase. This > > also impacts any increased patch submission rate that comes from improving > > the situation for contributors because if the throughput doesn't change then > > there will simply be more patches sitting in the issue tracker and that > > doesn't benefit anyone. > > This is the key concern I have with only addressing the contributor > side of things. I'm all for increasing contributions, but not if they > are just going to rot on the tracker and we end up with disillusioned > contributors. > > > > > # My ideal scenario > > If I had an infinite amount of resources (money, volunteers, time, etc.), > > this would be my ideal scenario: > > > > 1. Contributor gets code from wherever; easiest to just say "fork on GitHub > > or Bitbucket" as they would be official mirrors of hg.python.org and are > > updated after every commit, but could clone hg.python.org/cpython if they > > wanted > > 2. Contributor makes edits; if they cloned on Bitbucket or GitHub then they > > have browser edit access already > > 3. 
Contributor creates an account at bugs.python.org and signs the CLA > > There's no real way around this, is there? I suppose account creation > *could* be automated relative to a github or bitbucket user, though it > probably isn't worth the effort. However, the CLA part is pretty > unavoidable. > > > 3. The contributor creates an issue at bugs.python.org (probably the one > > piece of infrastructure we all agree is better than the other options, > > although its workflow could use an update) > > I wonder if issue creation from a PR (where no issue # is in the > message) could be automated too without a lot of extra work. > > > 4. If the contributor used Bitbucket or GitHub, they send a pull request > > with the issue # in the PR message > > 5. bugs.python.org notices the PR, grabs a patch for it, and puts it on > > bugs.python.org for code review > > 6. CI runs on the patch based on what Python versions are specified in the > > issue tracker, letting everyone know if it applied cleanly, passed tests on > > the OSs that would be affected, and also got a test coverage report > > 7. Core developer does a code review > > 8. Contributor updates their code based on the code review and the updated > > patch gets pulled by bugs.python.org automatically and CI runs again > > 9. Once the patch is acceptable and assuming the patch applies cleanly to > > all versions to commit to, the core developer clicks a "Commit" button, > > fills in a commit message and NEWS entry, and everything gets committed (if > > the patch can't apply cleanly then the core developer does it the > > old-fashion way, or maybe auto-generate a new PR which can be manually > > touched up so it does apply cleanly?) > > 6-9 sounds a lot like PEP 462. :) This seems like the part the would > win us the most. 
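For what it's worth, the glue for step 4 is small. Here is a rough sketch of pulling an issue # out of a PR message; the matched spellings and the function name are illustrative assumptions, not an existing tool or an agreed-upon convention:

```python
import re

# Hypothetical step-4 glue: find a bugs.python.org issue number in a pull
# request title or body.  The accepted spellings ("#22706", "issue 22706")
# are assumptions, not an agreed-upon format.
ISSUE_RE = re.compile(r"(?:\bissue\s*#?\s*|#)(\d{3,6})\b", re.IGNORECASE)

def issue_number(pr_message):
    """Return the first referenced issue number, or None if there is none."""
    match = ISSUE_RE.search(pr_message)
    return int(match.group(1)) if match else None
```

A hook like this is also where auto-creating an issue when no number is found could slot in.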
> > > > > Basically the ideal scenario lets contributors use whatever tools and > > platforms that they want and provides as much automated support as possible > > to make sure their code is tip-top before and during code review while core > > developers can review and commit patches so easily that they can do their > > job from a beach with a tablet and some WiFi. > > Sign me up! > > > > > ## Where the current proposed solutions seem to fall short > > ### GitHub/Bitbucket > > Basically GitHub/Bitbucket is a win for contributors but doesn't buy core > > developers that much. GitHub/Bitbucket gives contributors the easy cloning, > > drive-by patches, CI, and PRs. Core developers get a code review tool -- I'm > > counting Rietveld as deprecated after Guido's comments about the code's > > maintenance issues -- and push-button commits **only for single branch > > changes**. But for any patch that crosses branches we don't really gain > > anything. At best core developers tell a contributor "please send your PR > > against 3.4", push-button merge it, update a local clone, merge from 3.4 to > > default, do the usual stuff, commit, and then push; that still keeps me off > > the beach, though, so that doesn't get us the whole way. > > This will probably be one of the trickiest parts. > > > You could force > > people to submit two PRs, but I don't see that flying. Maybe some tool could > > be written that automatically handles the merge/commit across branches once > > the initial PR is in? Or automatically create a PR that core developers can > > touch up as necessary and then accept that as well? Regardless, some > > solution is necessary to handle branch-crossing PRs. > > > > As for GitHub vs. Bitbucket, I personally don't care. I like GitHub's > > interface more, but that's personal taste. I like hg more than git, but > > that's also personal taste (and I consider a transition from hg to git a > > hassle but not a deal-breaker but also not a win). 
It is unfortunate, > > though, that under this scenario we would have to choose only one platform. > > > > It's also unfortunate both are closed-source, but that's not a deal-breaker, > > just a knock against if the decision is close. > > > > ### Our own infrastructure > > The shortcoming here is the need for developers, developers, developers! > > Everything outlined in the ideal scenario is totally doable on our own > > infrastructure with enough code and time (donated/paid-for infrastructure > > shouldn't be an issue). But historically that code and time has not > > materialized. Our code review tool is a fork that probably should be > > replaced as only Martin von Löwis can maintain it. Basically Ezio Melotti > > maintains the issue tracker's code. > > Doing something about those two tools is something to consider. Would > it be out of scope for this discussion or any resulting PEPS? I have > opinions here, but I'd rather not sidetrack the discussion. > > > We don't exactly have a ton of people > > constantly going "I'm so bored because everything for Python's development > > infrastructure gets sorted so quickly!" A perfect example is that R. David > > Murray came up with a nice update for our workflow after PyCon but then ran > > out of time after mostly defining it and nothing ever became of it (maybe we > > can rectify that at PyCon?). Eric Snow has pointed out how he has written > > similar code for pulling PRs from I think GitHub to another code review > > tool, but that doesn't magically make it work in our infrastructure or get > > someone to write it and help maintain it (no offense, Eric). > > None taken. I was thinking the same thing when I wrote that. :) > > > > > IOW our infrastructure can do anything, but it can't run on hopes and > > dreams. Commitments from many people to making this happen by a certain > > deadline will be needed so as to not allow it to drag on forever.
People > > would also have to commit to continued maintenance to make this viable > > long-term. > > > > # Next steps > > I'm thinking first draft PEPs by February 1 to know who's all-in (8 weeks > > away), all details worked out in final PEPs and whatever is required to > > prove to me it will work by the PyCon language summit (4 months away). I > > make a decision by May 1, and > > then implementation aims to be done by the time 3.5.0 is cut so we can > > switch over shortly thereafter (9 months away). Sound like a reasonable > > timeline? > > Sounds reasonable to me, but I don't have plans to champion a PEP. :) > I could probably help with the tooling between GitHub/Bitbucket > though. > > -eric > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/graffatcolmingov%40gmail.com I have extensive experience with the GitHub API and some with BitBucket. I'm willing to help out with the tooling as well. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Dec 6 01:13:45 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Dec 2014 10:13:45 +1000 Subject: [Python-Dev] My thinking about the development process In-Reply-To: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io> References: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io> Message-ID: On 6 December 2014 at 06:24, Donald Stufft wrote: > > On Dec 5, 2014, at 3:04 PM, Brett Cannon wrote: > > > This looks like a pretty good write up, seems to pretty fairly evaluate the > various sides and the various concerns. Agreed - thanks for taking this on Brett! 
For my part, I realised that if I want my Kallithea based proposal to work out, I actually need to *be* an upstream Kallithea contributor, so I posted to the Kallithea list laying out the kinds of features I'd be pushing for and why: http://lists.sfconservancy.org/pipermail/kallithea-general/2014q4/000060.html I only posted that a few minutes ago, so we'll see what the existing Kallithea contributors think of the idea :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From benjamin at python.org Sat Dec 6 01:44:53 2014 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 05 Dec 2014 19:44:53 -0500 Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated In-Reply-To: References: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com> Message-ID: <1417826693.3685392.199461181.23A9D812@webmail.messagingengine.com> On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote: > > > > On Dec 5, 2014, at 6:10 PM, Brett Cannon wrote: > > > > > > > > On Fri Dec 05 2014 at 4:07:46 PM Benjamin Peterson > wrote: > > > > > > On Fri, Dec 5, 2014, at 16:04, Brett Cannon wrote: > > > It now promotes using tooling as much as possible to automate the process > > > of making code by Python 2/3 source-compatible: > > > https://docs.python.org/3.5/howto/pyporting.html > > > > Are you going to update the 2.7 copy of the howto, too? > > > > Have not decided yet. All the Google searches I have tried that bring up the HOWTO use the Python 3 version. Plus I know people are going to find mistakes that require fixing so I would rather wait until it stabilizes before I bother backporting to 2.7. > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: https://mail.python.org/mailman/options/python-dev/donald%40stufft.io > > > Do we need to update it? Can it just redirect to the 3 version? Technically, yes, of course. 
However, that would unexpectedly take you out of the Python 2 docs "context". Also, that doesn't solve the problem for the downloadable versions of the docs. From rajshorya at gmail.com Sat Dec 6 00:15:09 2014 From: rajshorya at gmail.com (Shorya Raj) Date: Sat, 6 Dec 2014 12:15:09 +1300 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: Message-ID: Hi All I just want to put my two cents into this. This would definitely be a great step to take. I have been discussing PEP 462 with Nick, and the automation was definitely something that would be great to have - I mean, I was submitting a simple documentation patch for building CPython on Windows, and it took several weeks for the patch to be accepted, then a couple of months for the patch to actually be merged in. As mentioned, automated testing to ensure that tests pass, along with easier committing of documentation patches, would obviously be a great way to start to decrease this turnaround. Has there been any thought on what sort of infrastructure we could use for this? Obviously github / bitbucket could be used as mentioned by others for repo management, but a lot of thought would have to go into the decisions regarding CI tools. I think it would also be a good time to address the issues with the current bug tracker - although it works, it is hardly as usable as some of the other ones. As for the argument that we should use open source tools to ensure that the owners of these tools aren't able to cause us problems in the future - both Hadoop and Cassandra, along with a lot of other Apache projects seem to be using JIRA just fine. Thanks Shorya Raj On Sat, Dec 6, 2014 at 11:17 AM, Eric Snow wrote: > Very nice, Brett. > > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote: > > And we can't forget the people who help keep all of this running as well.
> > There are those that manage the SSH keys, the issue tracker, the review > > tool, hg.python.org, and the email system that let's use know when stuff > > happens on any of these other systems. The impact on them needs to also > be > > considered. > > It sounds like Guido would rather as much of this was done by a > provider rather than relying on volunteers. That makes sense though > there are concerns about control of certain assents. However, that > applies only to some, like hg.python.org. > > > > > ## Contributors > > I see two scenarios for contributors to optimize for. There's the simple > > spelling mistake patches and then there's the code change patches. The > > former is the kind of thing that you can do in a browser without much > effort > > and should be a no-brainer commit/reject decision for a core developer. > This > > is what the GitHub/Bitbucket camps have been promoting their solution for > > solving while leaving the cpython repo alone. Unfortunately the bulk of > our > > documentation is in the Doc/ directory of cpython. While it's nice to > think > > about moving the devguide, peps, and even breaking out the tutorial to > repos > > hosting on Bitbucket/GitHub, everything else is in Doc/ (language > reference, > > howtos, stdlib, C API, etc.). So unless we want to completely break all > of > > Doc/ out of the cpython repo and have core developers willing to edit two > > separate repos when making changes that impact code **and** docs, moving > > only a subset of docs feels like a band-aid solution that ignores the > big, > > white elephant in the room: the cpython repo, where a bulk of patches are > > targeting. > > With your ideal scenario this would be a moot point, right? There > would be no need to split out doc-related repos. > > > > > For the code change patches, contributors need an easy way to get a hold > of > > the code and get their changes to the core developers. 
After that it's > > things like letting contributors knowing that their patch doesn't apply > > cleanly, doesn't pass tests, etc. > > This is probably more work than it seems at first. > > > As of right now getting the patch into the > > issue tracker is a bit manual but nothing crazy. The real issue in this > > scenario is core developer response time. > > > > ## Core developers > > There is a finite amount of time that core developers get to contribute > to > > Python and it fluctuates greatly. This means that if a process can be > found > > which allows core developers to spend less time doing mechanical work and > > more time doing things that can't be automated -- namely code reviews -- > > then the throughput of patches being accepted/rejected will increase. > This > > also impacts any increased patch submission rate that comes from > improving > > the situation for contributors because if the throughput doesn't change > then > > there will simply be more patches sitting in the issue tracker and that > > doesn't benefit anyone. > > This is the key concern I have with only addressing the contributor > side of things. I'm all for increasing contributions, but not if they > are just going to rot on the tracker and we end up with disillusioned > contributors. > > > > > # My ideal scenario > > If I had an infinite amount of resources (money, volunteers, time, etc.), > > this would be my ideal scenario: > > > > 1. Contributor gets code from wherever; easiest to just say "fork on > GitHub > > or Bitbucket" as they would be official mirrors of hg.python.org and are > > updated after every commit, but could clone hg.python.org/cpython if > they > > wanted > > 2. Contributor makes edits; if they cloned on Bitbucket or GitHub then > they > > have browser edit access already > > 3. Contributor creates an account at bugs.python.org and signs the CLA > > There's no real way around this, is there? 
I suppose account creation > *could* be automated relative to a github or bitbucket user, though it > probably isn't worth the effort. However, the CLA part is pretty > unavoidable. > > > 3. The contributor creates an issue at bugs.python.org (probably the one > > piece of infrastructure we all agree is better than the other options, > > although its workflow could use an update) > > I wonder if issue creation from a PR (where no issue # is in the > message) could be automated too without a lot of extra work. > > > 4. If the contributor used Bitbucket or GitHub, they send a pull request > > with the issue # in the PR message > > 5. bugs.python.org notices the PR, grabs a patch for it, and puts it on > > bugs.python.org for code review > > 6. CI runs on the patch based on what Python versions are specified in > the > > issue tracker, letting everyone know if it applied cleanly, passed tests > on > > the OSs that would be affected, and also got a test coverage report > > 7. Core developer does a code review > > 8. Contributor updates their code based on the code review and the > updated > > patch gets pulled by bugs.python.org automatically and CI runs again > > 9. Once the patch is acceptable and assuming the patch applies cleanly to > > all versions to commit to, the core developer clicks a "Commit" button, > > fills in a commit message and NEWS entry, and everything gets committed > (if > > the patch can't apply cleanly then the core developer does it the > > old-fashion way, or maybe auto-generate a new PR which can be manually > > touched up so it does apply cleanly?) > > 6-9 sounds a lot like PEP 462. :) This seems like the part the would > win us the most. 
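And on the step-5 side ("bugs.python.org notices the PR, grabs a patch for it"), GitHub's v3 API can already hand back a pull request as a unified diff through its documented media type. A minimal sketch of just the request construction, assuming nothing about the tracker side (actually sending the request, error handling, and the Bitbucket equivalent are left out):

```python
import urllib.request

GITHUB_API = "https://api.github.com"

def pr_diff_request(owner, repo, number):
    # Build (but do not send) a request whose response body would be the
    # pull request's unified diff; application/vnd.github.v3.diff is the
    # documented media type for requesting a PR as a diff.
    url = "%s/repos/%s/%s/pulls/%d" % (GITHUB_API, owner, repo, number)
    return urllib.request.Request(
        url, headers={"Accept": "application/vnd.github.v3.diff"})
```

Feeding the resulting diff into the tracker's review tool would still need the kind of tooling work discussed above.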
> > > > > Basically the ideal scenario lets contributors use whatever tools and > > platforms that they want and provides as much automated support as > possible > > to make sure their code is tip-top before and during code review while > core > > developers can review and commit patches so easily that they can do their > > job from a beach with a tablet and some WiFi. > > Sign me up! > > > > > ## Where the current proposed solutions seem to fall short > > ### GitHub/Bitbucket > > Basically GitHub/Bitbucket is a win for contributors but doesn't buy core > > developers that much. GitHub/Bitbucket gives contributors the easy > cloning, > > drive-by patches, CI, and PRs. Core developers get a code review tool -- > I'm > > counting Rietveld as deprecated after Guido's comments about the code's > > maintenance issues -- and push-button commits **only for single branch > > changes**. But for any patch that crosses branches we don't really gain > > anything. At best core developers tell a contributor "please send your PR > > against 3.4", push-button merge it, update a local clone, merge from 3.4 > to > > default, do the usual stuff, commit, and then push; that still keeps me > off > > the beach, though, so that doesn't get us the whole way. > > This will probably be one of the trickiest parts. > > > You could force > > people to submit two PRs, but I don't see that flying. Maybe some tool > could > > be written that automatically handles the merge/commit across branches > once > > the initial PR is in? Or automatically create a PR that core developers > can > > touch up as necessary and then accept that as well? Regardless, some > > solution is necessary to handle branch-crossing PRs. > > > > As for GitHub vs. Bitbucket, I personally don't care. I like GitHub's > > interface more, but that's personal taste. I like hg more than git, but > > that's also personal taste (and I consider a transition from hg to git a > > hassle but not a deal-breaker but also not a win). 
It is unfortunate, > > though, that under this scenario we would have to choose only one > platform. > > > > It's also unfortunate both are closed-source, but that's not a > deal-breaker, > > just a knock against if the decision is close. > > > > ### Our own infrastructure > > The shortcoming here is the need for developers, developers, developers! > > Everything outlined in the ideal scenario is totally doable on our own > > infrastructure with enough code and time (donated/paid-for infrastructure > > shouldn't be an issue). But historically that code and time has not > > materialized. Our code review tool is a fork that probably should be > > replaced as only Martin von Löwis can maintain it. Basically Ezio Melotti > > maintains the issue tracker's code. > > Doing something about those two tools is something to consider. Would > it be out of scope for this discussion or any resulting PEPS? I have > opinions here, but I'd rather not sidetrack the discussion. > > > We don't exactly have a ton of people > > constantly going "I'm so bored because everything for Python's > development > > infrastructure gets sorted so quickly!" A perfect example is that R. > David > > Murray came up with a nice update for our workflow after PyCon but then > ran > > out of time after mostly defining it and nothing ever became of it > (maybe we > > can rectify that at PyCon?). Eric Snow has pointed out how he has written > > similar code for pulling PRs from I think GitHub to another code review > > tool, but that doesn't magically make it work in our infrastructure or > get > > someone to write it and help maintain it (no offense, Eric). > > None taken. I was thinking the same thing when I wrote that. :) > > > > > IOW our infrastructure can do anything, but it can't run on hopes and > > dreams. Commitments from many people to making this happen by a certain > > deadline will be needed so as to not allow it to drag on forever.
People > > would also have to commit to continued maintenance to make this viable > > long-term. > > > > # Next steps > > I'm thinking first draft PEPs by February 1 to know who's all-in (8 weeks > > away), all details worked out in final PEPs and whatever is required to > > prove to me it will work by the PyCon language summit (4 months away). I > > make a decision by May 1, and > > then implementation aims to be done by the time 3.5.0 is cut so we can > > switch over shortly thereafter (9 months away). Sound like a reasonable > > timeline? > > Sounds reasonable to me, but I don't have plans to champion a PEP. :) > I could probably help with the tooling between GitHub/Bitbucket > though. > > -eric > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/rajshorya%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Sat Dec 6 02:26:08 2014 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 05 Dec 2014 20:26:08 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: Message-ID: <20141206012608.B55DBB1408D@webabinitio.net> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow wrote: > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote: > > We don't exactly have a ton of people > > constantly going "I'm so bored because everything for Python's development > > infrastructure gets sorted so quickly!" A perfect example is that R. David > > Murray came up with a nice update for our workflow after PyCon but then ran > > out of time after mostly defining it and nothing ever became of it (maybe we > > can rectify that at PyCon?). 
Eric Snow has pointed out how he has written > > similar code for pulling PRs from I think GitHub to another code review > > tool, but that doesn't magically make it work in our infrastructure or get > > someone to write it and help maintain it (no offense, Eric). > > None taken. I was thinking the same thing when I wrote that. :) > > > > > IOW our infrastructure can do anything, but it can't run on hopes and > > dreams. Commitments from many people to making this happen by a certain > > deadline will be needed so as to not allow it to drag on forever. People > > would also have to commit to continued maintenance to make this viable > > long-term. The biggest blocker to my actually working the proposal I made was that people wanted to see it in action first, which means I needed to spin up a test instance of the tracker and do the work there. That barrier to getting started was enough to keep me from getting started...even though the barrier isn't *that* high (I've done it before, and it is easier now than it was when I first did it), it is still a *lot* higher than checking out CPython and working on a patch. That's probably the biggest issue with *anyone* contributing to tracker maintenance, and if we could solve that, I think we could get more people interested in helping maintain it. We need the equivalent of dev-in-a-box for setting up for testing proposed changes to bugs.python.org, but including some standard way to get it deployed so others can look at a live system running the change in order to review the patch. Maybe our infrastructure folks will have a thought or two about this? I'm willing to put some work into this if we can figure out what direction to head in. It could well be tied in to moving bugs.python.org in with the rest of our infrastructure, something I know Donald has been noodling with off and on; and I'm willing to help with that as well. 
It sounds like being able to propose and test changes to our Roundup instance (and test other services talking to Roundup, before deploying them for real) is going to be critical to improving our workflow no matter what other decisions are made, so we need to make it easier to do. In other words, it seems like the key to improving the productivity of our CPython patch workflow is to improve the productivity of the patch workflow for our key workflow resource, bugs.python.org. --David From donald at stufft.io Sat Dec 6 02:39:10 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 5 Dec 2014 20:39:10 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: <20141206012608.B55DBB1408D@webabinitio.net> References: <20141206012608.B55DBB1408D@webabinitio.net> Message-ID: > On Dec 5, 2014, at 8:26 PM, R. David Murray wrote: > > On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow wrote: >> On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote: >>> We don't exactly have a ton of people >>> constantly going "I'm so bored because everything for Python's development >>> infrastructure gets sorted so quickly!" A perfect example is that R. David >>> Murray came up with a nice update for our workflow after PyCon but then ran >>> out of time after mostly defining it and nothing ever became of it (maybe we >>> can rectify that at PyCon?). Eric Snow has pointed out how he has written >>> similar code for pulling PRs from I think GitHub to another code review >>> tool, but that doesn't magically make it work in our infrastructure or get >>> someone to write it and help maintain it (no offense, Eric). >> >> None taken. I was thinking the same thing when I wrote that. :) >> >>> >>> IOW our infrastructure can do anything, but it can't run on hopes and >>> dreams. Commitments from many people to making this happen by a certain >>> deadline will be needed so as to not allow it to drag on forever. 
People >>> would also have to commit to continued maintenance to make this viable >>> long-term. > > The biggest blocker to my actually working the proposal I made was that > people wanted to see it in action first, which means I needed to spin up > a test instance of the tracker and do the work there. That barrier to > getting started was enough to keep me from getting started...even though > the barrier isn't *that* high (I've done it before, and it is easier now > than it was when I first did it), it is still a *lot* higher than > checking out CPython and working on a patch. > > That's probably the biggest issue with *anyone* contributing to tracker > maintenance, and if we could solve that, I think we could get more > people interested in helping maintain it. We need the equivalent of > dev-in-a-box for setting up for testing proposed changes to > bugs.python.org, but including some standard way to get it deployed so > others can look at a live system running the change in order to review > the patch. > > Maybe our infrastructure folks will have a thought or two about this? > I'm willing to put some work into this if we can figure out what > direction to head in. It could well be tied in to moving > bugs.python.org in with the rest of our infrastructure, something I know > Donald has been noodling with off and on; and I'm willing to help with > that as well. Theoretically you could create a dev environment with the psf-salt stuff once it's actually done. It won't be the most efficient use of your computer resources because it'd expect to run several vagrant VMs locally but it would also match "production" (in a salt-ified world) better. It wouldn't be as good as a dedicated dev setup for it, but it would probably be better than a sort of "yea here's a bunch of steps that sort of get you close YOLO".
> > It sounds like being able to propose and test changes to our Roundup > instance (and test other services talking to Roundup, before deploying > them for real) is going to be critical to improving our workflow no > matter what other decisions are made, so we need to make it easier to > do. > > In other words, it seems like the key to improving the productivity of > our CPython patch workflow is to improve the productivity of the patch > workflow for our key workflow resource, bugs.python.org. > > --David > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/donald%40stufft.io --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Sat Dec 6 05:08:52 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Dec 2014 14:08:52 +1000 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <20141206012608.B55DBB1408D@webabinitio.net> Message-ID: On 6 December 2014 at 11:39, Donald Stufft wrote: >> Maybe our infrastructure folks will have a thought or two about this? >> I'm willing to put some work into this if we can figure out what >> direction to head in. It could well be tied in to moving >> bugs.python.org in with the rest of our infrastructure, something I know >> Donald has been noodling with off and on; and I'm willing to help with >> that as well. > > Theoretically you could create a dev environment with the psf-salt stuff > once it's actually done. It won't be the most efficient use of your computer > resources because it'd expect to run several vagrant VMs locally but it would > also match "production" (in a salt-ified world) better. It wouldn't be as > good as a dedicated dev setup for it, but it would probably be better than > a sort of "yea here's a bunch of steps that sort of get you close YOLO".
For demonstrating UI changes, either a single VM Vagrant setup specifically for testing, or else something that works in the free tier of a public PaaS may be a better option. The advantage of those two approaches is that they'd be potentially acceptable as contributions to the upstream Roundup project, rather than needing to be CPython specific. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Dec 6 05:40:44 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Dec 2014 14:40:44 +1000 Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated In-Reply-To: <1417826693.3685392.199461181.23A9D812@webmail.messagingengine.com> References: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com> <1417826693.3685392.199461181.23A9D812@webmail.messagingengine.com> Message-ID: On 6 December 2014 at 10:44, Benjamin Peterson wrote: > On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote: >> Do we need to update it? Can it just redirect to the 3 version? > > Technically, yes, of course. However, that would unexpected take you out > of the Python 2 docs "context". Also, that doesn't solve the problem for > the downloadable versions of the docs. As Benjamin says, we'll likely want to update the Python 2 version eventually for the benefit of the downloadable version of the docs, but Brett's also right it makes sense to wait for feedback on the Python 3 version and then backport the most up to date text wholesale. In terms of the text itself, this is a great update Brett - thanks! A couple of specific notes: * http://python-future.org/compatible_idioms.html is my favourite short list of "What are the specific Python 2 only habits that I need to unlearn in order to start writing 2/3 compatible code?". It could be worth mentioning in addition to the What's New documents and the full Python 3 Porting book. 
* it's potentially worth explicitly noting the "bytes(index_value)" and "str(bytes_value)" traps when discussing the bytes/text changes. Those do rather different things in Python 2 & 3, but won't emit an error or warning in either version. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Dec 6 05:55:09 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Dec 2014 14:55:09 +1000 Subject: [Python-Dev] PEP 479 (Change StopIteration handling inside generators) -- hopefully final text In-Reply-To: References: Message-ID: On 6 December 2014 at 04:42, Guido van Rossum wrote: > For those who haven't followed along, here's the final text of PEP 479, with > a brief Acceptance section added. The basic plan hasn't changed, but there's > a lot more clarifying text and discussion of a few counter-proposals. Please > send suggestions for editorial improvements to peps at python.org. The official > reference version of the PEP is at > https://www.python.org/dev/peps/pep-0479/; the repo is > https://hg.python.org/peps/ (please check out the repo and send diffs > relative to the repo if you have edits). Thanks Guido, that explanation of the change looks great to me. And thanks also to Chris and everyone else that helped with the rather involved discussions! Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Dec 6 06:41:54 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Dec 2014 15:41:54 +1000 Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated In-Reply-To: References: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com> <1417826693.3685392.199461181.23A9D812@webmail.messagingengine.com> Message-ID: On 6 December 2014 at 14:40, Nick Coghlan wrote: > On 6 December 2014 at 10:44, Benjamin Peterson wrote: >> On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote: >>> Do we need to update it? Can it just redirect to the 3 version? 
>> >> Technically, yes, of course. However, that would unexpected take you out >> of the Python 2 docs "context". Also, that doesn't solve the problem for >> the downloadable versions of the docs. > > As Benjamin says, we'll likely want to update the Python 2 version > eventually for the benefit of the downloadable version of the docs, > but Brett's also right it makes sense to wait for feedback on the > Python 3 version and then backport the most up to date text wholesale. > > In terms of the text itself, this is a great update Brett - thanks! > > A couple of specific notes: > > * http://python-future.org/compatible_idioms.html is my favourite > short list of "What are the specific Python 2 only habits that I need > to unlearn in order to start writing 2/3 compatible code?". It could > be worth mentioning in addition to the What's New documents and the > full Python 3 Porting book. > > * it's potentially worth explicitly noting the "bytes(index_value)" > and "str(bytes_value)" traps when discussing the bytes/text changes. > Those do rather different things in Python 2 & 3, but won't emit an > error or warning in either version. Given that 3.4 and 2.7.9 will be the first exposure some users will have had to pip, would it perhaps be worth explicitly mentioning the "pip install " commands for the various tools? At least pylint's PyPI page only gives the manual download instructions, including which dependencies you will need to install. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tjreedy at udel.edu Sat Dec 6 08:53:17 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 06 Dec 2014 02:53:17 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: Message-ID: On 12/5/2014 3:04 PM, Brett Cannon wrote: > 1. Contributor clones a repository from hg.python.org > 2. Contributor makes desired changes > 3. Contributor generates a patch > 4. 
Contributor creates account on bugs.python.org > and signs the > [contributor > agreement](https://www.python.org/psf/contrib/contrib-form/) I would like to have the process of requesting and enforcing the signing of CAs automated. > 4. Contributor creates an issue on bugs.python.org > (if one does not already exist) and uploads a patch I would like to have patches rejected, or at least held up, until a CA is registered. For this to work, a signed CA should be immediately registered on the tracker, at least as 'pending'. It now can take a week or more to go through human processing. > 5. Core developer evaluates patch, possibly leaving comments through our > [custom version of Rietveld](http://bugs.python.org/review/) > 6. Contributor revises patch based on feedback and uploads new patch > 7. Core developer downloads patch and applies it to a clean clone > 8. Core developer runs the tests > 9. Core developer does one last `hg pull -u` and then commits the > changes to various branches -- Terry Jan Reedy From bcannon at gmail.com Sat Dec 6 14:40:23 2014 From: bcannon at gmail.com (Brett Cannon) Date: Sat, 06 Dec 2014 13:40:23 +0000 Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated References: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com> <1417826693.3685392.199461181.23A9D812@webmail.messagingengine.com> Message-ID: Thanks for the feedback. I'll update the doc probably on Friday. On Sat Dec 06 2014 at 12:41:54 AM Nick Coghlan wrote: > On 6 December 2014 at 14:40, Nick Coghlan wrote: > > On 6 December 2014 at 10:44, Benjamin Peterson > wrote: > >> On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote: > >>> Do we need to update it? Can it just redirect to the 3 version? > >> > >> Technically, yes, of course. However, that would unexpected take you out > >> of the Python 2 docs "context". Also, that doesn't solve the problem for > >> the downloadable versions of the docs. 
> > > > As Benjamin says, we'll likely want to update the Python 2 version > > eventually for the benefit of the downloadable version of the docs, > > but Brett's also right it makes sense to wait for feedback on the > > Python 3 version and then backport the most up to date text wholesale. > > > > In terms of the text itself, this is a great update Brett - thanks! > > > > A couple of specific notes: > > > > * http://python-future.org/compatible_idioms.html is my favourite > > short list of "What are the specific Python 2 only habits that I need > > to unlearn in order to start writing 2/3 compatible code?". It could > > be worth mentioning in addition to the What's New documents and the > > full Python 3 Porting book. > > > > * it's potentially worth explicitly noting the "bytes(index_value)" > > and "str(bytes_value)" traps when discussing the bytes/text changes. > > Those do rather different things in Python 2 & 3, but won't emit an > > error or warning in either version. > > Given that 3.4 and 2.7.9 will be the first exposure some users will > have had to pip, would it perhaps be worth explicitly mentioning the > "pip install " commands for the various tools? At least pylint's > PyPI page only gives the manual download instructions, including which > dependencies you will need to install. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Sat Dec 6 14:45:16 2014 From: bcannon at gmail.com (Brett Cannon) Date: Sat, 06 Dec 2014 13:45:16 +0000 Subject: [Python-Dev] My thinking about the development process References: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io> Message-ID: On Fri Dec 05 2014 at 3:24:38 PM Donald Stufft wrote: > > On Dec 5, 2014, at 3:04 PM, Brett Cannon wrote: > > > > This looks like a pretty good write up, seems to pretty fairly evaluate > the various sides and the various concerns. > Thanks! 
It seems like I have gotten the point across that I don't care what the solution is as long as it's a good one and that we have to look at the whole process and not just a corner of it if we want big gains. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Dec 6 15:01:57 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 6 Dec 2014 09:01:57 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io> Message-ID: > On Dec 6, 2014, at 8:45 AM, Brett Cannon wrote: > > > > On Fri Dec 05 2014 at 3:24:38 PM Donald Stufft > wrote: > >> On Dec 5, 2014, at 3:04 PM, Brett Cannon > wrote: >> > > This looks like a pretty good write up, seems to pretty fairly evaluate the various sides and the various concerns. > > Thanks! It seems like I have gotten the point across that I don't care what the solution is as long as it's a good one and that we have to look at the whole process and not just a corner of it if we want big gains. One potential solution is Phabricator (http://phabricator.org ) which is a gerrit-like tool except it also works with Mercurial. It is a fully open source platform though it works on a "patch" basis rather than a pull request basis. They are also coming out with hosting for it (http://phacility.com/ ) but that is "coming soon" and I'm not sure what the cost will be and if they'd be willing to donate to an OSS project. It makes it easier to upload a patch using a command-line tool (like gerrit does) called arc. Phabricator itself is OSS and the coming soon page for phacility says that it's easy to migrate from a hosted to a self-hosted solution. Phabricator supports hosting the repository itself but as I understand it, it also supports hosting the repository elsewhere.
So it could mean that we host the repository on a platform that supports Pull Requests (as you might expect, I'm a fan of Github here) and also deploy Phabricator on top of it. I haven't actually tried that so I'd want to play around with it to make sure this works how I believe it does, but it may be a good way to enable both pull requests (and the web editors that tend to come with those workflows) for easier changes and a different tool for more invasive changes. Terry spoke about CLAs, which is an interesting thing too, because phabricator itself has some workflow around this I believe, at least one of the examples in their tour is setting up some sort of notification about requiring a CLA. It even has a built in thing for signing legal documents (although I'm not sure if that's acceptable to the PSF, we'd need to ask VanL I suspect). Another neat feature, although I'm not sure we're actually set up to take advantage of it, is that if you run test coverage numbers you can report that directly inline with the review / diff to see what lines of the patch are being exercised by a test or not. I'm not sure if it's actually workable for us but it probably should be explored a little bit to see if it is and if it might be a good solution. They also have a copy of it running which they develop phabricator itself on (https://secure.phabricator.com/ ) though they also accept pull requests on github. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Dec 6 15:11:44 2014 From: brett at python.org (Brett Cannon) Date: Sat, 06 Dec 2014 14:11:44 +0000 Subject: [Python-Dev] My thinking about the development process References: <20141206012608.B55DBB1408D@webabinitio.net> Message-ID: On Fri Dec 05 2014 at 8:31:27 PM R.
David Murray wrote: > On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow > wrote: > > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote: > > > We don't exactly have a ton of people > > > constantly going "I'm so bored because everything for Python's > development > > > infrastructure gets sorted so quickly!" A perfect example is that R. > David > > > Murray came up with a nice update for our workflow after PyCon but > then ran > > > out of time after mostly defining it and nothing ever became of it > (maybe we > > > can rectify that at PyCon?). Eric Snow has pointed out how he has > written > > > similar code for pulling PRs from I think GitHub to another code review > > > tool, but that doesn't magically make it work in our infrastructure or > get > > > someone to write it and help maintain it (no offense, Eric). > > > > None taken. I was thinking the same thing when I wrote that. :) > > > > > > > > IOW our infrastructure can do anything, but it can't run on hopes and > > > dreams. Commitments from many people to making this happen by a certain > > > deadline will be needed so as to not allow it to drag on forever. > People > > > would also have to commit to continued maintenance to make this viable > > > long-term. > > The biggest blocker to my actually working the proposal I made was that > people wanted to see it in action first, which means I needed to spin up > a test instance of the tracker and do the work there. That barrier to > getting started was enough to keep me from getting started...even though > the barrier isn't *that* high (I've done it before, and it is easier now > than it was when I first did it), it is still a *lot* higher than > checking out CPython and working on a patch. > > That's probably the biggest issue with *anyone* contributing to tracker > maintenance, and if we could solve that, I think we could get more > people interested in helping maintain it. 
We need the equivalent of > dev-in-a-box for setting up for testing proposed changes to > bugs.python.org, but including some standard way to get it deployed so > others can look at a live system running the change in order to review > the patch. > Maybe it's just me and all the Docker/Rocket hoopla that's occurred over the past week, but this just screams "container" to me which would make getting a test instance set up dead simple. > > Maybe our infrastructure folks will have a thought or two about this? > I'm willing to put some work into this if we can figure out what > direction to head in. It could well be tied in to moving > bugs.python.org in with the rest of our infrastructure, something I know > Donald has been noodling with off and on; and I'm willing to help with > that as well. > > It sounds like being able to propose and test changes to our Roundup > instance (and test other services talking to Roundup, before deploying > them for real) is going to be critical to improving our workflow no > matter what other decisions are made, so we need to make it easier to > do. > > In other words, it seems like the key to improving the productivity of > our CPython patch workflow is to improve the productivity of the patch > workflow for our key workflow resource, bugs.python.org. > Quite possible and since no one is suggesting we drop bugs.python.org it's a worthy goal to have regardless of what PEP gets accepted. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Dec 6 15:13:47 2014 From: brett at python.org (Brett Cannon) Date: Sat, 06 Dec 2014 14:13:47 +0000 Subject: [Python-Dev] My thinking about the development process References: Message-ID: On Sat Dec 06 2014 at 2:53:43 AM Terry Reedy wrote: > On 12/5/2014 3:04 PM, Brett Cannon wrote: > > > 1. Contributor clones a repository from hg.python.org < > http://hg.python.org> > > 2. Contributor makes desired changes > > 3. 
Contributor generates a patch > > 4. Contributor creates account on bugs.python.org > > and signs the > > [contributor > > agreement](https://www.python.org/psf/contrib/contrib-form/) > > I would like to have the process of requesting and enforcing the signing > of CAs automated. > So would I. > > > 4. Contributor creates an issue on bugs.python.org > > (if one does not already exist) and uploads a > patch > > I would like to have patches rejected, or at least held up, until a CA > is registered. For this to work, a signed CA should be immediately > registered on the tracker, at least as 'pending'. It now can take a > week or more to go through human processing. > This is one of the reasons I didn't want to create an issue magically from PRs initially. I think it's totally doable with some coding. -Brett > > > > 5. Core developer evaluates patch, possibly leaving comments through our > > [custom version of Rietveld](http://bugs.python.org/review/) > > 6. Contributor revises patch based on feedback and uploads new patch > > 7. Core developer downloads patch and applies it to a clean clone > > 8. Core developer runs the tests > > 9. Core developer does one last `hg pull -u` and then commits the > > changes to various branches > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Sat Dec 6 15:08:54 2014 From: bcannon at gmail.com (Brett Cannon) Date: Sat, 06 Dec 2014 14:08:54 +0000 Subject: [Python-Dev] My thinking about the development process References: Message-ID: On Fri Dec 05 2014 at 5:17:35 PM Eric Snow wrote: > Very nice, Brett. > Thanks! 
> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote: > > And we can't forget the people who help keep all of this running as well. > > There are those that manage the SSH keys, the issue tracker, the review > > tool, hg.python.org, and the email system that let's use know when stuff > > happens on any of these other systems. The impact on them needs to also > be > > considered. > > It sounds like Guido would rather as much of this was done by a > provider rather than relying on volunteers. That makes sense though > there are concerns about control of certain assents. However, that > applies only to some, like hg.python.org. > Sure, but that's also the reason Guido stuck me with the job of being the Great Decider on this. =) I have a gut feeling of how much support would need to be committed in order to consider things covered well enough (I can't give a number because it will vary depending on who steps forward; someone who I know and trust to stick around is worth more than someone who kindly steps forward and has never volunteered, but that's just because I don't know the stranger and not because I don't want people who are unknown on python-dev to step forward innately). > > > > > ## Contributors > > I see two scenarios for contributors to optimize for. There's the simple > > spelling mistake patches and then there's the code change patches. The > > former is the kind of thing that you can do in a browser without much > effort > > and should be a no-brainer commit/reject decision for a core developer. > This > > is what the GitHub/Bitbucket camps have been promoting their solution for > > solving while leaving the cpython repo alone. Unfortunately the bulk of > our > > documentation is in the Doc/ directory of cpython. While it's nice to > think > > about moving the devguide, peps, and even breaking out the tutorial to > repos > > hosting on Bitbucket/GitHub, everything else is in Doc/ (language > reference, > > howtos, stdlib, C API, etc.). 
So unless we want to completely break all > of > > Doc/ out of the cpython repo and have core developers willing to edit two > > separate repos when making changes that impact code **and** docs, moving > > only a subset of docs feels like a band-aid solution that ignores the > big, > > white elephant in the room: the cpython repo, where a bulk of patches are > > targeting. > > With your ideal scenario this would be a moot point, right? There > would be no need to split out doc-related repos. > Exactly, which is why I stressed we can't simply ignore the cpython repo. If someone is bored they could run an analysis on the various repos, calculate the number of contributions for outsiders -- maybe check the logs for the use of the word "Thank" since we typically say "Thanks to ..." -- and see how many external contributions we got in all the repos and also a detailed breakdown for Doc/. > > > > > > For the code change patches, contributors need an easy way to get a hold > of > > the code and get their changes to the core developers. After that it's > > things like letting contributors knowing that their patch doesn't apply > > cleanly, doesn't pass tests, etc. > > This is probably more work than it seems at first. > Maybe, maybe not. Depends on what external services someone wants to rely on. E.g., could a webhook with some CI company be used so that it's more "grab the patch from here and run the tests" vs. us having to manage the whole CI infrastructure? Just because the home-grown solution requires developers and maintenance doesn't mean that the maintenance is more maintaining the code to interface with an external service provider instead of providing the service ourselves from scratch. And don't forget companies will quite possibly donate services if you ask or the PSF could pay for some things. > > > As of right now getting the patch into the > > issue tracker is a bit manual but nothing crazy. The real issue in this > > scenario is core developer response time.
> > > > ## Core developers > > There is a finite amount of time that core developers get to contribute > to > > Python and it fluctuates greatly. This means that if a process can be > found > > which allows core developers to spend less time doing mechanical work and > > more time doing things that can't be automated -- namely code reviews -- > > then the throughput of patches being accepted/rejected will increase. > This > > also impacts any increased patch submission rate that comes from > improving > > the situation for contributors because if the throughput doesn't change > then > > there will simply be more patches sitting in the issue tracker and that > > doesn't benefit anyone. > > This is the key concern I have with only addressing the contributor > side of things. I'm all for increasing contributions, but not if they > are just going to rot on the tracker and we end up with disillusioned > contributors. > Yep, which is why I'm saying we need a complete solution to our entire development process. > > > > > # My ideal scenario > > If I had an infinite amount of resources (money, volunteers, time, etc.), > > this would be my ideal scenario: > > > > 1. Contributor gets code from wherever; easiest to just say "fork on > GitHub > > or Bitbucket" as they would be official mirrors of hg.python.org and are > > updated after every commit, but could clone hg.python.org/cpython if > they > > wanted > > 2. Contributor makes edits; if they cloned on Bitbucket or GitHub then > they > > have browser edit access already > > 3. Contributor creates an account at bugs.python.org and signs the CLA > > There's no real way around this, is there? I suppose account creation > *could* be automated relative to a github or bitbucket user, though it > probably isn't worth the effort. However, the CLA part is pretty > unavoidable. > Account creation is not that heavy. We could make it so that if you create an account from e.g. 
a GitHub account we extract some of the details using OAuth from GitHub automatically. Once again, it's just a matter of effort. > > > 3. The contributor creates an issue at bugs.python.org (probably the one > > piece of infrastructure we all agree is better than the other options, > > although its workflow could use an update) > > I wonder if issue creation from a PR (where no issue # is in the > message) could be automated too without a lot of extra work. > I'm sure it's possible. You can tell me in a PEP. =) > > > 4. If the contributor used Bitbucket or GitHub, they send a pull request > > with the issue # in the PR message > > 5. bugs.python.org notices the PR, grabs a patch for it, and puts it on > > bugs.python.org for code review > > 6. CI runs on the patch based on what Python versions are specified in > the > > issue tracker, letting everyone know if it applied cleanly, passed tests > on > > the OSs that would be affected, and also got a test coverage report > > 7. Core developer does a code review > > 8. Contributor updates their code based on the code review and the > updated > > patch gets pulled by bugs.python.org automatically and CI runs again > > 9. Once the patch is acceptable and assuming the patch applies cleanly to > > all versions to commit to, the core developer clicks a "Commit" button, > > fills in a commit message and NEWS entry, and everything gets committed > (if > > the patch can't apply cleanly then the core developer does it the > > old-fashioned way, or maybe auto-generate a new PR which can be manually > > touched up so it does apply cleanly?) > > 6-9 sounds a lot like PEP 462. :) This seems like the part that would > win us the most. > I have stated publicly multiple times that I really wanted Nick's workflow to happen, but since it is dependent on volunteers it didn't materialize.
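Steps 4–5 above — noticing the issue number in the PR message — are entirely mechanical. A sketch; the accepted message formats here are an assumption on my part, not an agreed convention:

```python
import re

# Accept "issue 22980", "issue #22980", or a bare "#22980" anywhere
# in the pull-request title or description.
_ISSUE_RE = re.compile(r"(?:issue\s*#?|#)(\d+)", re.IGNORECASE)

def issue_number(pr_message):
    """Return the bugs.python.org issue number referenced by a PR
    message, or None if the message names no issue."""
    m = _ISSUE_RE.search(pr_message)
    return int(m.group(1)) if m else None
```

If no number is found, the tracker could fall back to opening a fresh issue from the PR metadata instead of rejecting the submission.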
I mean this is also a lot like the GitHub+Travis/Bitbucket+drone.io || Codeship.io workflow most other projects use -- my personal ones included -- and it's great. We just like to complicate things with 18 month release cycles and bugfix releases. =) > > > > > Basically the ideal scenario lets contributors use whatever tools and > > platforms that they want and provides as much automated support as > possible > > to make sure their code is tip-top before and during code review while > core > > developers can review and commit patches so easily that they can do their > > job from a beach with a tablet and some WiFi. > > Sign me up! > Do the PEP and the work and I will! =) > > > > > ## Where the current proposed solutions seem to fall short > > ### GitHub/Bitbucket > > Basically GitHub/Bitbucket is a win for contributors but doesn't buy core > > developers that much. GitHub/Bitbucket gives contributors the easy > cloning, > > drive-by patches, CI, and PRs. Core developers get a code review tool -- > I'm > > counting Rietveld as deprecated after Guido's comments about the code's > > maintenance issues -- and push-button commits **only for single branch > > changes**. But for any patch that crosses branches we don't really gain > > anything. At best core developers tell a contributor "please send your PR > > against 3.4", push-button merge it, update a local clone, merge from 3.4 > to > > default, do the usual stuff, commit, and then push; that still keeps me > off > > the beach, though, so that doesn't get us the whole way. > > This will probably be one of the trickiest parts. > Yes, but I know for me personally and I would wager for most other core developers it's the branch merging work that is the biggest blocker from wanting to put the time in to accept a patch.
And then on top of that it's simply having access to a checkout (if I could accept simple patches through a browser I could do it on my lunch break at work 5 days a week; heck I would probably make it a personal goal to try and accept a patch a day if it was simply a button press). > > > You could force > > people to submit two PRs, but I don't see that flying. Maybe some tool > could > > be written that automatically handles the merge/commit across branches > once > > the initial PR is in? Or automatically create a PR that core developers > can > > touch up as necessary and then accept that as well? Regardless, some > > solution is necessary to handle branch-crossing PRs. > > > > As for GitHub vs. Bitbucket, I personally don't care. I like GitHub's > > interface more, but that's personal taste. I like hg more than git, but > > that's also personal taste (and I consider a transition from hg to git a > > hassle but not a deal-breaker but also not a win). It is unfortunate, > > though, that under this scenario we would have to choose only one > platform. > > > > It's also unfortunate both are closed-source, but that's not a > deal-breaker, > > just a knock against if the decision is close. > > > > ### Our own infrastructure > > The shortcoming here is the need for developers, developers, developers! > > Everything outlined in the ideal scenario is totally doable on our own > > infrastructure with enough code and time (donated/paid-for infrastructure > > shouldn't be an issue). But historically that code and time has not > > materialized. Our code review tool is a fork that probably should be > > replaced as only Martin von Löwis can maintain it. Basically Ezio Melotti > > maintains the issue tracker's code. > > Doing something about those two tools is something to consider. Would > it be out of scope for this discussion or any resulting PEPs? I have > opinions here, but I'd rather not sidetrack the discussion.
I would be very happy if someone wrote up a PEP saying "we don't need to do a complete overhaul and toss everything out, we just need to tweak this stuff" or a "here is a fallback PEP to update some things if none of the proposals can solve the cpython problem" so that we basically have a PEP for considering risk mitigation. So think of this PEP as saying "we can switch to X for a review tool, we can add a GitHub/Bitbucket button for pulling from a fork by doing Y, we can use service Z as a CI service without issue through webhooks" but not necessarily worrying about issues created from PRs, etc. that might be a bit tricky; IOW the least drastic PEP that still nabs us some wins. > > > We don't exactly have a ton of people > > constantly going "I'm so bored because everything for Python's > development > > infrastructure gets sorted so quickly!" A perfect example is that R. > David > > Murray came up with a nice update for our workflow after PyCon but then > ran > > out of time after mostly defining it and nothing ever became of it > (maybe we > > can rectify that at PyCon?). Eric Snow has pointed out how he has written > > similar code for pulling PRs from I think GitHub to another code review > > tool, but that doesn't magically make it work in our infrastructure or > get > > someone to write it and help maintain it (no offense, Eric). > > None taken. I was thinking the same thing when I wrote that. :) > > > > > IOW our infrastructure can do anything, but it can't run on hopes and > > dreams. Commitments from many people to making this happen by a certain > > deadline will be needed so as to not allow it to drag on forever. People > > would also have to commit to continued maintenance to make this viable > > long-term.
> > > > # Next steps > > I'm thinking first draft PEPs by February 1 to know who's all-in (8 weeks > > away), all details worked out in final PEPs and whatever is required to > > prove to me it will work by the PyCon language summit (4 months away). I > > make a decision by May 1, and > > then implementation aims to be done by the time 3.5.0 is cut so we can > > switch over shortly thereafter (9 months away). Sound like a reasonable > > timeline? > > Sounds reasonable to me, but I don't have plans to champion a PEP. :) > I could probably help with the tooling between GitHub/Bitbucket > though. > And Ian Cordasco also said he could help, but I still need a PEP to work from. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Dec 6 16:07:47 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 6 Dec 2014 10:07:47 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <20141206012608.B55DBB1408D@webabinitio.net> Message-ID: <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> > On Dec 6, 2014, at 9:11 AM, Brett Cannon wrote: > > > > On Fri Dec 05 2014 at 8:31:27 PM R. David Murray > wrote: > On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow > wrote: > > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon > wrote: > > > We don't exactly have a ton of people > > > constantly going "I'm so bored because everything for Python's development > > > infrastructure gets sorted so quickly!" A perfect example is that R. David > > > Murray came up with a nice update for our workflow after PyCon but then ran > > > out of time after mostly defining it and nothing ever became of it (maybe we > > > can rectify that at PyCon?). Eric Snow has pointed out how he has written > > > similar code for pulling PRs from I think GitHub to another code review > > > tool, but that doesn't magically make it work in our infrastructure or get > > > someone to write it and help maintain it (no offense, Eric). 
> > > > None taken. I was thinking the same thing when I wrote that. :) > > > > > > > > IOW our infrastructure can do anything, but it can't run on hopes and > > > dreams. Commitments from many people to making this happen by a certain > > > deadline will be needed so as to not allow it to drag on forever. People > > > would also have to commit to continued maintenance to make this viable > > > long-term. > > The biggest blocker to my actually working the proposal I made was that > people wanted to see it in action first, which means I needed to spin up > a test instance of the tracker and do the work there. That barrier to > getting started was enough to keep me from getting started...even though > the barrier isn't *that* high (I've done it before, and it is easier now > than it was when I first did it), it is still a *lot* higher than > checking out CPython and working on a patch. > > That's probably the biggest issue with *anyone* contributing to tracker > maintenance, and if we could solve that, I think we could get more > people interested in helping maintain it. We need the equivalent of > dev-in-a-box for setting up for testing proposed changes to > bugs.python.org , but including some standard way to get it deployed so > others can look at a live system running the change in order to review > the patch. > > Maybe it's just me and all the Docker/Rocket hoopla that's occurred over the past week, but this just screams "container" to me which would make getting a test instance set up dead simple. Heh, one of my thoughts on deploying the bug tracker into production was via a container, especially since we have multiple instances of it. I got side tracked on getting the rest of the infrastructure readier for a web application and some improvements there as well as getting a big postgresql database cluster set up (2x 15GB RAM servers running in Primary/Replica mode). 
The downside of course to this is that afaik Docker is a lot harder to use on Windows and to some degree OS X than linux. However if the tracker could be deployed as a docker image that would make the infrastructure side a ton easier. I also have control over the python/ organization on Docker Hub too for whatever uses we have for it. Unrelated to the tracker: Something that any PEP should consider is security, particularly that of running the tests. Currently we have a buildbot fleet that checks out the code and executes the test suite (aka code). A problem that any pre-merge test runner needs to solve is that unlike a post-merge runner, which will only run code that has been committed by a committer, a pre-merge runner will run code that _anybody_ has submitted. This means that it's not merely enough to simply trigger a build in our buildbot fleet prior to the merge happening as that would allow anyone to execute arbitrary code there. As far as I'm aware there are two solutions to this problem in common use, either use throw away environments/machines/containers that isolate the running code and then get destroyed after each test run, or don't run the pre-merge tests immediately unless it's from a "trusted" person and for "untrusted" or "unknown" people require a "trusted" person to give the OK for each test run. The throw away machine solution is obviously much nicer experience for the "untrusted" or "unknown" users since they don't require any intervention to get their tests run which means that they can see if their tests pass, fix things, and then see if that fixes it much quicker. The obvious downside here is that it's more effort to do that and the availability of throw away environments for all the systems we support. Linux, most (all?) of the BSDs, and Windows are pretty easy here since there are cloud offerings for them that can be used to spin up a temporary environment, run tests, and then delete it.
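The throw-away environment approach can be as simple as one `docker run --rm` per submission: the container, and the untrusted code inside it, disappears when the tests finish, and resource limits bound what a hostile patch can consume. A sketch only — the image name, build commands, and limits below are placeholders, not any actual piece of python.org infrastructure:

```python
def throwaway_test_command(image, repo_url, ref):
    """Build a `docker run` invocation that clones an untrusted branch,
    builds it, and runs the test suite in a disposable container.
    `--rm` destroys the container afterwards; the memory and pid
    limits cap what the submitted code can consume."""
    build_and_test = (
        "git clone --depth 1 --branch %s %s /src && "
        "cd /src && ./configure -q && make -s -j2 && ./python -m test"
    ) % (ref, repo_url)
    return [
        "docker", "run", "--rm",
        "--memory=2g", "--pids-limit=512",
        image, "sh", "-c", build_and_test,
    ]
```

A runner would pass the result to `subprocess.run()` and report the exit status back to the tracker; nothing persists between runs.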
OS X is a problem because afaik you can only virtualize OS X on Apple hardware and I'm not aware of any cloud provider that offers metered access to OS X hosts. The more esoteric systems like AIX and what not are likely an even bigger problem in this regard since I'm unsure of the ability to get virtualized instances of these at all. It may be possible to build our own images of these on a cloud provider assuming that their licenses allow that. The other solution would work easier with our current buildbot fleet since you'd just tell it to run some tests but you'd wait until a "trusted" person gave the OK before you did that. A likely solution is to use a pre-merge test runner for the systems that we can isolate which will give a decent indication if the tests are going to pass across the entire supported matrix or not and then continue to use the current post-merge test runner to handle testing the esoteric systems that we can't work into the pre-merge testing. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Dec 6 16:26:27 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 7 Dec 2014 01:26:27 +1000 Subject: [Python-Dev] My thinking about the development process In-Reply-To: <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> Message-ID: On 7 December 2014 at 01:07, Donald Stufft wrote: > A likely solution is to use a pre-merge test runner for the systems that we > can isolate which will give a decent indication if the tests are going to > pass across the entire supported matrix or not and then continue to use the > current post-merge test runner to handle testing the esoteric systems that > we can't work into the pre-merge testing. Yep, that's exactly the approach I had in mind for this problem. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Dec 6 16:30:52 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 7 Dec 2014 01:30:52 +1000 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <20141206012608.B55DBB1408D@webabinitio.net> Message-ID: On 7 December 2014 at 00:11, Brett Cannon wrote: > On Fri Dec 05 2014 at 8:31:27 PM R. David Murray > wrote: >> >> That's probably the biggest issue with *anyone* contributing to tracker >> maintenance, and if we could solve that, I think we could get more >> people interested in helping maintain it. We need the equivalent of >> dev-in-a-box for setting up for testing proposed changes to >> bugs.python.org, but including some standard way to get it deployed so >> others can look at a live system running the change in order to review >> the patch. > > > Maybe it's just me and all the Docker/Rocket hoopla that's occurred over the > past week, but this just screams "container" to me which would make getting > a test instance set up dead simple. It's not just you (and Graham Dumpleton has even been working on reference images for Apache/mod_wsgi hosting of Python web services: http://blog.dscpl.com.au/2014/12/hosting-python-wsgi-applications-using.html) You still end up with Vagrant as a required element for Windows and Mac OS X, but that's pretty much a given for a lot of web service development these days. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Sat Dec 6 16:21:46 2014 From: brett at python.org (Brett Cannon) Date: Sat, 06 Dec 2014 15:21:46 +0000 Subject: [Python-Dev] My thinking about the development process References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> Message-ID: On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft wrote: > > On Dec 6, 2014, at 9:11 AM, Brett Cannon wrote: > > > > On Fri Dec 05 2014 at 8:31:27 PM R. 
David Murray > wrote: > >> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow < >> ericsnowcurrently at gmail.com> wrote: >> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote: >> > > We don't exactly have a ton of people >> > > constantly going "I'm so bored because everything for Python's >> development >> > > infrastructure gets sorted so quickly!" A perfect example is that R. >> David >> > > Murray came up with a nice update for our workflow after PyCon but >> then ran >> > > out of time after mostly defining it and nothing ever became of it >> (maybe we >> > > can rectify that at PyCon?). Eric Snow has pointed out how he has >> written >> > > similar code for pulling PRs from I think GitHub to another code >> review >> > > tool, but that doesn't magically make it work in our infrastructure >> or get >> > > someone to write it and help maintain it (no offense, Eric). >> > >> > None taken. I was thinking the same thing when I wrote that. :) >> > >> > > >> > > IOW our infrastructure can do anything, but it can't run on hopes and >> > > dreams. Commitments from many people to making this happen by a >> certain >> > > deadline will be needed so as to not allow it to drag on forever. >> People >> > > would also have to commit to continued maintenance to make this viable >> > > long-term. >> >> The biggest blocker to my actually working the proposal I made was that >> people wanted to see it in action first, which means I needed to spin up >> a test instance of the tracker and do the work there. That barrier to >> getting started was enough to keep me from getting started...even though >> the barrier isn't *that* high (I've done it before, and it is easier now >> than it was when I first did it), it is still a *lot* higher than >> checking out CPython and working on a patch. >> >> That's probably the biggest issue with *anyone* contributing to tracker >> maintenance, and if we could solve that, I think we could get more >> people interested in helping maintain it. 
We need the equivalent of >> dev-in-a-box for setting up for testing proposed changes to >> bugs.python.org, but including some standard way to get it deployed so >> others can look at a live system running the change in order to review >> the patch. >> > > Maybe it's just me and all the Docker/Rocket hoopla that's occurred over > the past week, but this just screams "container" to me which would make > getting a test instance set up dead simple. > > > Heh, one of my thoughts on deploying the bug tracker into production was > via a container, especially since we have multiple instances of it. I got > side tracked on getting the rest of the infrastructure readier for a web > application and some improvements there as well as getting a big postgresql > database cluster set up (2x 15GB RAM servers running in Primary/Replica > mode). The downside of course to this is that afaik Docker is a lot harder > to use on Windows and to some degree OS X than linux. However if the > tracker could be deployed as a docker image that would make the > infrastructure side a ton easier. I also have control over the python/ > organization on Docker Hub too for whatever uses we have for it. > I think it's something worth thinking about, but like you I don't know if the containers work on OS X or Windows (I don't work with containers personally). > > Unrelated to the tracker: > > Something that any PEP should consider is security, particularly that of > running the tests. Currently we have a buildbot fleet that checks out the > code and executes the test suite (aka code). A problem that any pre-merge > test runner needs to solve is that unlike a post-merge runner, which will > only run code that has been committed by a committer, a pre-merge runner > will run code that _anybody_ has submitted. This means that it's not merely > enough to simply trigger a build in our buildbot fleet prior to the merge > happening as that would allow anyone to execute arbitrary code there.
As > far as I'm aware there are two solutions to this problem in common use, > either use throw away environments/machines/containers that isolate the > running code and then get destroyed after each test run, or don't run the > pre-merge tests immediately unless it's from a "trusted" person and for > "untrusted" or "unknown" people require a "trusted" person to give the OK > for each test run. > > The throw away machine solution is obviously much nicer experience for the > "untrusted" or "unknown" users since they don't require any intervention to > get their tests run which means that they can see if their tests pass, fix > things, and then see if that fixes it much quicker. The obvious downside > here is that it's more effort to do that and the availability of throw away > environments for all the systems we support. Linux, most (all?) of the > BSDs, and Windows are pretty easy here since there are cloud offerings for > them that can be used to spin up a temporary environment, run tests, and > then delete it. OS X is a problem because afaik you can only virtualize OS > X on Apple hardware and I'm not aware of any cloud provider that offers > metered access to OS X hosts. The more esoteric systems like AIX and what > not are likely an even bigger problem in this regard since I'm unsure of > the ability to get virtualized instances of these at all. It may be > possible to build our own images of these on a cloud provider assuming that > their licenses allow that. > > The other solution would work easier with our current buildbot fleet since > you'd just tell it to run some tests but you'd wait until a "trusted" > person gave the OK before you did that.
> > A likely solution is to use a pre-merge test runner for the systems that > we can isolate which will give a decent indication if the tests are going > to pass across the entire supported matrix or not and then continue to use > the current post-merge test runner to handle testing the esoteric systems > that we can't work into the pre-merge testing. > Security is definitely something to consider and what you mentioned above is all reasonable for CI of submitted patches. This is all also a reason to consider CI services like Travis, Codeship, Drone, etc. as they are already set up for this kind of thing and simply using them for the pre-commit checks and then relying on the buildbots for post-commit verification we didn't break on some specific platform. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Dec 6 16:46:09 2014 From: brett at python.org (Brett Cannon) Date: Sat, 06 Dec 2014 15:46:09 +0000 Subject: [Python-Dev] My thinking about the development process References: <20141206012608.B55DBB1408D@webabinitio.net> Message-ID: On Sat Dec 06 2014 at 10:30:54 AM Nick Coghlan wrote: > On 7 December 2014 at 00:11, Brett Cannon wrote: > > On Fri Dec 05 2014 at 8:31:27 PM R. David Murray > > wrote: > >> > >> That's probably the biggest issue with *anyone* contributing to tracker > >> maintenance, and if we could solve that, I think we could get more > >> people interested in helping maintain it. We need the equivalent of > >> dev-in-a-box for setting up for testing proposed changes to > >> bugs.python.org, but including some standard way to get it deployed so > >> others can look at a live system running the change in order to review > >> the patch. > > > > > > Maybe it's just me and all the Docker/Rocket hoopla that's occurred over > the > > past week, but this just screams "container" to me which would make > getting > > a test instance set up dead simple.
> > It's not just you (and Graham Dumpleton has even been working on > reference images for Apache/mod_wsgi hosting of Python web services: > http://blog.dscpl.com.au/2014/12/hosting-python-wsgi- > applications-using.html) > > You still end up with Vagrant as a required element for Windows and > Mac OS X, but that's pretty much a given for a lot of web service > development these days. > If we need a testbed then we could try it out with a devinabox and see how it works with new contributors at PyCon. Would be nice to just have Clang, all the extras for the stdlib, etc. already pulled together for people to work from. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Dec 6 16:51:38 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 6 Dec 2014 10:51:38 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> Message-ID: <546D0075-E169-4366-9106-D8D0B4D94631@stufft.io> > On Dec 6, 2014, at 10:26 AM, Nick Coghlan wrote: > > On 7 December 2014 at 01:07, Donald Stufft wrote: >> A likely solution is to use a pre-merge test runner for the systems that we >> can isolate which will give a decent indication if the tests are going to >> pass across the entire supported matrix or not and then continue to use the >> current post-merge test runner to handle testing the esoteric systems that >> we can't work into the pre-merge testing. > > Yep, that's exactly the approach I had in mind for this problem. > I'm coming around to the idea for pip too, though I've been trying to figure out a way to do pre-merge testing using isolated environments for even the esoteric platforms. One thing that I'd personally greatly appreciate is if this whole process made it possible for selected external projects to re-use the infrastructure for the harder to get platforms.
Pip and setuptools in particular would make good candidates for this I think. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From rdmurray at bitdance.com Sat Dec 6 17:11:32 2014 From: rdmurray at bitdance.com (R. David Murray) Date: Sat, 06 Dec 2014 11:11:32 -0500 Subject: [Python-Dev] Tracker test instances (was: My thinking about the development process) In-Reply-To: References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> Message-ID: <20141206161132.C7EAD250F16@webabinitio.net> On Sat, 06 Dec 2014 15:21:46 +0000, Brett Cannon wrote: > On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft wrote: > > On Dec 6, 2014, at 9:11 AM, Brett Cannon wrote: > > > >> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray > >> wrote: > >>> That's probably the biggest issue with *anyone* contributing to tracker > >>> maintenance, and if we could solve that, I think we could get more > >>> people interested in helping maintain it. We need the equivalent of > >>> dev-in-a-box for setting up for testing proposed changes to > >>> bugs.python.org, but including some standard way to get it deployed so > >>> others can look at a live system running the change in order to review > >>> the patch. > >> > >> Maybe it's just me and all the Docker/Rocket hoopla that's occurred over > >> the past week, but this just screams "container" to me which would make > >> getting a test instance set up dead simple. > > > > Heh, one of my thoughts on deploying the bug tracker into production was > > via a container, especially since we have multiple instances of it. I got > > side tracked on getting the rest of the infrastructure readier for a web > > application and some improvements there as well as getting a big postgresql > > database cluster set up (2x 15GB RAM servers running in Primary/Replica > > mode). The downside of course to this is that afaik Docker is a lot harder > > to use on Windows and to some degree OS X than linux. 
However if the > > tracker could be deployed as a docker image that would make the > > infrastructure side a ton easier. I also have control over the python/ > > organization on Docker Hub too for whatever uses we have for it. > > > > I think it's something worth thinking about, but like you I don't know if > the containers work on OS X or Windows (I don't work with containers > personally). (Had to fix the quoting there, somebody's email program got it wrong.) For the tracker, being unable to run a test instance on Windows would likely not be a severe limitation. Given how few Windows people we get making contributions to CPython, I'd really rather encourage them to work there, rather than on the tracker. OS/X is a bit more problematic, but it sounds like it is also a bit more doable. On the other hand, what's the overhead on setting up to use Docker? If that task is non-trivial, we're back to having a higher barrier to entry than running a dev-in-a-box script... Note also in thinking about setting up a test tracker instance we have an additional concern: it requires postgres, and needs either a copy of the full data set (which includes account data/passwords which would need to be creatively sanitized) or a fairly large test data set. I'd prefer a sanitized copy of the real data. 
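Scrubbing the sensitive columns is the easy part and could be scripted once, then rerun whenever a fresh dump is taken. A sketch of the per-row sanitization — the column names here are illustrative, not Roundup's actual schema:

```python
import hashlib

def sanitize_user(row):
    """Return a copy of a tracker user row that is safe to publish in
    a test data set: passwords are destroyed outright, and email
    addresses are replaced by stable fakes (derived from a hash) so
    rows still join consistently across tables."""
    clean = dict(row)
    clean["password"] = "*scrubbed*"
    digest = hashlib.sha1(row["email"].encode("utf-8")).hexdigest()[:12]
    clean["email"] = "user-%s@example.invalid" % digest
    return clean
```

The same idea applies at the SQL level (one UPDATE per sensitive column) if rewriting the postgres dump directly is preferred.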
--David From tjreedy at udel.edu Sun Dec 7 00:18:47 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 06 Dec 2014 18:18:47 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> Message-ID: On 12/6/2014 10:26 AM, Nick Coghlan wrote: > On 7 December 2014 at 01:07, Donald Stufft wrote: >> A likely solution is to use a pre-merge test runner for the systems that we >> can isolate which will give a decent indication if the tests are going to >> pass across the entire supported matrix or not and then continue to use the >> current post-merge test runner to handle testing the esoteric systems that >> we can't work into the pre-merge testing. > > Yep, that's exactly the approach I had in mind for this problem. Most patches are tested on just one (major) system before being committed. The buildbots confirm that there is no oddball failure elsewhere, and there usually is not. Testing user submissions on one system should usually be enough. Committers should generally have an idea when wider testing is needed, and indeed it would be nice to be able to get wider testing on occasion *before* making a commit, without begging on the tracker. What would be *REALLY* helpful for Idle development (and tkinter, turtle, and turtle demo testing) would be if there were a test.support.screenshot function that would take a screenshot and email it to the tracker or developer. There would also need to be at least one (stable) *nix test machine that actually runs tkinter code, and the ability to test on OSX with its different graphics options. Properly testing Idle tkinter code that affects what users see is a real bottleneck.
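Even before the capture half is solved (taking the screenshot is platform-specific), the "email it to the tracker or developer" half is straightforward with the stdlib. A sketch — the intake address and subject convention are invented for illustration, and actually sending via smtplib is left out:

```python
from email.message import EmailMessage

def screenshot_report(issue, png_bytes):
    """Package a captured screenshot as a mail message addressed to
    the tracker, tagged with the issue number, so a test run on a
    remote *nix machine can report what the GUI actually looked like."""
    msg = EmailMessage()
    msg["To"] = "report@bugs.python.org"  # hypothetical intake address
    msg["Subject"] = "[issue%d] GUI screenshot" % issue
    msg.set_content("Automated screenshot from the test run attached.")
    msg.add_attachment(png_bytes, maintype="image", subtype="png",
                       filename="issue%d.png" % issue)
    return msg
```

A hypothetical test.support.screenshot() would produce the `png_bytes` (e.g. via a Canvas postscript dump or a platform screen grab) and hand them to a helper like this.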
-- Terry Jan Reedy From ncoghlan at gmail.com Sun Dec 7 01:56:44 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 7 Dec 2014 10:56:44 +1000 Subject: [Python-Dev] Tracker test instances (was: My thinking about the development process) In-Reply-To: <20141206161132.C7EAD250F16@webabinitio.net> References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> <20141206161132.C7EAD250F16@webabinitio.net> Message-ID: On 7 December 2014 at 02:11, R. David Murray wrote: > For the tracker, being unable to run a test instance on Windows would > likely not be a severe limitation. Given how few Windows people we get > making contributions to CPython, I'd really rather encourage them to > work there, rather than on the tracker. OS/X is a bit more problematic, > but it sounds like it is also a bit more doable. > > On the other hand, what's the overhead on setting up to use Docker? If > that task is non-trivial, we're back to having a higher barrier to > entry than running a dev-in-a-box script... > > Note also in thinking about setting up a test tracker instance we have > an additional concern: it requires postgres, and needs either a copy of > the full data set (which includes account data/passwords which would > need to be creatively sanitized) or a fairly large test data set. I'd > prefer a sanitized copy of the real data. If you're OK with git as an entry requirement, then something like the OpenShift free tier may be a better place for test instances, rather than local hosting - with an appropriate quickstart, creating your own tracker instance can be a single click operation on a normal hyperlink. That also has the advantage of making it easy to share changes to demonstrate UI updates. (OpenShift doesn't support running containers directly yet, but that capability is being worked on in the upstream OpenShift Origin open source project) Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From wes.turner at gmail.com Sun Dec 7 02:23:29 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 6 Dec 2014 19:23:29 -0600 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io> Message-ID: On Sat, Dec 6, 2014 at 8:01 AM, Donald Stufft wrote: > > > One potential solution is Phabricator (http://phabricator.org) which is a > gerrit-like tool except it also works with Mercurial. It is a fully open > source platform though it works on a 'patch' basis rather than a pull > request basis. > I've been pleasantly unsurprised with the ReviewBoard CLI tools (RBtools): * https://www.reviewboard.org/docs/rbtools/dev/ * https://www.reviewboard.org/docs/codebase/dev/contributing-patches/ * https://www.reviewboard.org/docs/manual/2.0/users/ ReviewBoard supports Markdown, {Git, Mercurial, Subversion, ... }, full-text search * https://wiki.jenkins-ci.org/display/JENKINS/Reviewboard+Plugin * [ https://wiki.jenkins-ci.org/display/JENKINS/Selenium+Plugin ] * https://github.com/saltstack/salt-testing/blob/develop/salttesting/jenkins.py * GetPullRequestAction * https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin (spin up an instance) * https://github.com/saltstack-formulas/jenkins-formula * https://github.com/saltstack/salt-jenkins > Terry spoke about CLAs, which is an interesting thing too, because > phabricator itself has some workflow around this I believe; at least one of > the examples in their tour is setting up some sort of notification about > requiring a CLA. It even has a built-in thing for signing legal documents > (although I'm not sure if that's acceptable to the PSF, we'd need to ask > VanL I suspect).
Another neat feature, although I'm not sure we're actually > set up to take advantage of it, is that if you run test coverage numbers you > can report that directly inline with the review / diff to see what lines of > the patch are being exercised by a test or not. > AFAIU, these are not (yet) features of ReviewBoard (which is written in Python). > > I'm not sure if it's actually workable for us but it probably should be > explored a little bit to see if it is and if it might be a good solution. > They also have a copy of it running which they develop phabricator itself > on (https://secure.phabricator.com/) though they also accept pull > requests on github. > What a good-looking service. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun Dec 7 02:27:12 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 6 Dec 2014 19:27:12 -0600 Subject: [Python-Dev] My thinking about the development process In-Reply-To: <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> Message-ID: On Sat, Dec 6, 2014 at 9:07 AM, Donald Stufft wrote: > > Heh, one of my thoughts on deploying the bug tracker into production was > via a container, especially since we have multiple instances of it. I got > side tracked on getting the rest of the infrastructure readier for a web > application and some improvements there as well as getting a big postgresql > database cluster set up (2x 15GB RAM servers running in Primary/Replica > mode). The downside of course to this is that afaik Docker is a lot harder > to use on Windows and to some degree OS X than linux. However if the > tracker could be deployed as a docker image that would make the > infrastructure side a ton easier. I also have control over the python/ > organization on Docker Hub too for whatever uses we have for it.
> Are you referring to https://registry.hub.docker.com/repos/python/ ? IPython / Jupyter have some useful Docker images: * https://registry.hub.docker.com/repos/ipython/ * https://registry.hub.docker.com/repos/jupyter/ CI integration with roundup seems to be the major gap here: * https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin * https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin * https://github.com/saltstack-formulas/docker-formula > > Unrelated to the tracker: > > Something that any PEP should consider is security, particularly that of > running the tests. Currently we have a buildbot fleet that checks out the > code and executes the test suite (aka code). A problem that any pre-merge > test runner needs to solve is that unlike a post-merge runner, which will > only run code that has been committed by a committer, a pre-merge runner > will run code that _anybody_ has submitted. This means that it's not merely > enough to simply trigger a build in our buildbot fleet prior to the merge > happening as that would allow anyone to execute arbitrary code there. As > far as I'm aware there are two solutions to this problem in common use, > either use throw away environments/machines/containers that isolate the > running code and then get destroyed after each test run, or don't run the > pre-merge tests immediately unless it's from a "trusted" person and for > "untrusted" or "unknown" people require a "trusted" person to give the OK > for each test run. > > The throw away machine solution is obviously a much nicer experience for the > "untrusted" or "unknown" users since they don't require any intervention to > get their tests run which means that they can see if their tests pass, fix > things, and then see if that fixes it much quicker. The obvious downside > here is that it's more effort to do that and the availability of throw away > environments for all the systems we support. Linux, most (all?)
of the > BSDs, and Windows are pretty easy here since there are cloud offerings for > them that can be used to spin up a temporary environment, run tests, and > then delete it. OS X is a problem because afaik you can only virtualize OS > X on Apple hardware and I'm not aware of any cloud provider that offers > metered access to OS X hosts. The more esoteric systems like AIX and what > not are likely an even bigger problem in this regard since I'm unsure of > the ability to get virtualized instances of these at all. It may be > possible to build our own images of these on a cloud provider assuming that > their licenses allow that. > > The other solution would work more easily with our current buildbot fleet since > you'd just tell it to run some tests but you'd wait until a "trusted" > person gave the OK before you did that. > > A likely solution is to use a pre-merge test runner for the systems that > we can isolate which will give a decent indication if the tests are going > to pass across the entire supported matrix or not and then continue to use > the current post-merge test runner to handle testing the esoteric systems > that we can't work into the pre-merge testing. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wes.turner at gmail.com Sun Dec 7 02:32:45 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 6 Dec 2014 19:32:45 -0600 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> Message-ID: On Sat, Dec 6, 2014 at 7:27 PM, Wes Turner wrote: > > > On Sat, Dec 6, 2014 at 9:07 AM, Donald Stufft wrote: > >> >> Heh, one of my thoughts on deploying the bug tracker into production was >> via a container, especially since we have multiple instances of it. I got >> side tracked on getting the rest of the infrastructure readier for a web >> application and some improvements there as well as getting a big postgresql >> database cluster set up (2x 15GB RAM servers running in Primary/Replica >> mode). The downside of course to this is that afaik Docker is a lot harder >> to use on Windows and to some degree OS X than linux. However if the >> tracker could be deployed as a docker image that would make the >> infrastructure side a ton easier. I also have control over the python/ >> organization on Docker Hub too for whatever uses we have for it. >> > > Are you referring to https://registry.hub.docker.com/repos/python/ ? 
> > IPython / Jupyter have some useful Docker images: > > * https://registry.hub.docker.com/repos/ipython/ > * https://registry.hub.docker.com/repos/jupyter/ > > CI integration with roundup seems to be the major gap here: > > * https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin > * https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin > * https://github.com/saltstack-formulas/docker-formula > ShiningPanda supports virtualenv and tox, but I don't know how well suited it would be for fail-fast CPython testing across a grid/graph: * https://wiki.jenkins-ci.org/display/JENKINS/ShiningPanda+Plugin * https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin The branch merging workflows of https://datasift.github.io/gitflow/IntroducingGitFlow.html (hotfix/name, feature/name, release/name) are surely portable across VCS systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun Dec 7 02:49:16 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 6 Dec 2014 19:49:16 -0600 Subject: [Python-Dev] Tracker test instances (was: My thinking about the development process) In-Reply-To: <20141206161132.C7EAD250F16@webabinitio.net> References: <20141206012608.B55DBB1408D@webabinitio.net> <4E6DEB03-B444-4DF1-9CDF-C80E81F3F237@stufft.io> <20141206161132.C7EAD250F16@webabinitio.net> Message-ID: On Sat, Dec 6, 2014 at 10:11 AM, R. David Murray wrote: > On Sat, 06 Dec 2014 15:21:46 +0000, Brett Cannon wrote: > > On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft > wrote: > > > On Dec 6, 2014, at 9:11 AM, Brett Cannon wrote: > > > > > >> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray < rdmurray at bitdance.com> > > >> wrote: > > >>> That's probably the biggest issue with *anyone* contributing to tracker > > >>> maintenance, and if we could solve that, I think we could get more > > >>> people interested in helping maintain it.
We need the equivalent of > > >>> dev-in-a-box for setting up for testing proposed changes to > > >>> bugs.python.org, but including some standard way to get it deployed > so > > >>> others can look at a live system running the change in order to > review > > >>> the patch. > > >> > > >> Maybe it's just me and all the Docker/Rocket hoopla that's occurred > over > > >> the past week, but this just screams "container" to me which would > make > > >> getting a test instance set up dead simple. > > > > > > Heh, one of my thoughts on deploying the bug tracker into production > was > > > via a container, especially since we have multiple instances of it. I > got > > > side tracked on getting the rest of the infrastructure readier for a > web > > > application and some improvements there as well as getting a big > postgresql > > > database cluster set up (2x 15GB RAM servers running in Primary/Replica > > > mode). The downside of course to this is that afaik Docker is a lot > harder > > > to use on Windows and to some degree OS X than linux. However if the > > > tracker could be deployed as a docker image that would make the > > > infrastructure side a ton easier. I also have control over the python/ > > > organization on Docker Hub too for whatever uses we have for it. > > > > > > > I think it's something worth thinking about, but like you I don't know if > > the containers work on OS X or Windows (I don't work with containers > > personally). > > (Had to fix the quoting there, somebody's email program got it wrong.) > > For the tracker, being unable to run a test instance on Windows would > likely not be a severe limitation. Given how few Windows people we get > making contributions to CPython, I'd really rather encourage them to > work there, rather than on the tracker. OS/X is a bit more problematic, > but it sounds like it is also a bit more doable. > > On the other hand, what's the overhead on setting up to use Docker? 
If > that task is non-trivial, we're back to having a higher barrier to > entry than running a dev-in-a-box script... > > Note also in thinking about setting up a test tracker instance we have > an additional concern: it requires postgres, and needs either a copy of > the full data set (which includes account data/passwords which would > need to be creatively sanitized) or a fairly large test data set. I'd > prefer a sanitized copy of the real data. > FactoryBoy would make generating issue tracker test fixtures fairly simple: http://factoryboy.readthedocs.org/en/latest/introduction.html#lazyattribute There are probably lots of instances of free-form usernames in issue tickets, which some people may or may not be comfortable with, considering that the data is and has always been public. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun Dec 7 02:55:02 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 6 Dec 2014 19:55:02 -0600 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io> Message-ID: On Sat, Dec 6, 2014 at 7:23 PM, Wes Turner wrote: > > > On Sat, Dec 6, 2014 at 8:01 AM, Donald Stufft wrote: > >> >> >> One potential solution is Phabricator (http://phabricator.org) which is >> a gerrit-like tool except it also works with Mercurial. It is a fully open >> source platform though it works on a 'patch' basis rather than a pull >> request basis. >> > > I've been pleasantly unsurprised with the ReviewBoard CLI tools (RBtools): > > * https://www.reviewboard.org/docs/rbtools/dev/ > * https://www.reviewboard.org/docs/codebase/dev/contributing-patches/ > * https://www.reviewboard.org/docs/manual/2.0/users/ > > ReviewBoard supports Markdown, {Git, Mercurial, Subversion, ...
}, > full-text search > > https://www.reviewboard.org/docs/manual/dev/extending/ * "Writing Review Board Extensions" * "Writing Authentication Backends" > > > >> Terry spoke about CLAs, which is an interesting thing too, because >> phabricator itself has some workflow around this I believe; at least one of >> the examples in their tour is setting up some sort of notification about >> requiring a CLA. It even has a built-in thing for signing legal documents >> (although I'm not sure if that's acceptable to the PSF, we'd need to ask >> VanL I suspect). Another neat feature, although I'm not sure we're actually >> set up to take advantage of it, is that if you run test coverage numbers you >> can report that directly inline with the review / diff to see what lines of >> the patch are being exercised by a test or not. >> > > AFAIU, these are not (yet) features of ReviewBoard (which is written in > Python). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun Dec 7 02:58:40 2014 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 6 Dec 2014 19:58:40 -0600 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: <55D026B6-484D-4AEA-90C3-F41B2EA79142@stufft.io> Message-ID: This lists the ReviewBoard workflow steps for a pre-commit workflow: https://www.reviewboard.org/docs/manual/dev/users/getting-started/workflow/ On Sat, Dec 6, 2014 at 7:55 PM, Wes Turner wrote: > > > On Sat, Dec 6, 2014 at 7:23 PM, Wes Turner wrote: > >> >> >> On Sat, Dec 6, 2014 at 8:01 AM, Donald Stufft wrote: >> >>> >>> >>> One potential solution is Phabricator (http://phabricator.org) which is >>> a gerrit-like tool except it also works with Mercurial. It is a fully open >>> source platform though it works on a 'patch' basis rather than a pull >>> request basis.
>>> >> I've been pleasantly unsurprised with the ReviewBoard CLI tools (RBtools): >> >> * https://www.reviewboard.org/docs/rbtools/dev/ >> * https://www.reviewboard.org/docs/codebase/dev/contributing-patches/ >> * https://www.reviewboard.org/docs/manual/2.0/users/ >> >> ReviewBoard supports Markdown, {Git, Mercurial, Subversion, ... }, >> full-text search >> >> > https://www.reviewboard.org/docs/manual/dev/extending/ > > * "Writing Review Board Extensions" > * "Writing Authentication Backends" > > >> >> >> >>> Terry spoke about CLAs, which is an interesting thing too, because >>> phabricator itself has some workflow around this I believe; at least one of >>> the examples in their tour is setting up some sort of notification about >>> requiring a CLA. It even has a built-in thing for signing legal documents >>> (although I'm not sure if that's acceptable to the PSF, we'd need to ask >>> VanL I suspect). Another neat feature, although I'm not sure we're actually >>> set up to take advantage of it, is that if you run test coverage numbers you >>> can report that directly inline with the review / diff to see what lines of >>> the patch are being exercised by a test or not. >>> >> >> AFAIU, these are not (yet) features of ReviewBoard (which is written in >> Python). >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmiscml at gmail.com Mon Dec 8 13:26:49 2014 From: pmiscml at gmail.com (Paul Sokolovsky) Date: Mon, 8 Dec 2014 14:26:49 +0200 Subject: [Python-Dev] MicroPython 1.3.7 released Message-ID: <20141208142649.3bad9d2f@x230> Hello, MicroPython is a Python3 language implementation which scales down to run on microcontrollers with tens of Ks of RAM and a few hundred Ks of code size. Besides microcontrollers, it's also useful for small embedded Linux systems, where storage space is limited, for embedding as a scripting engine into standalone applications, where quick startup time is needed, etc.
http://micropython.org/ https://github.com/micropython/micropython It has been several months since the original announcement of MicroPython 1.0 (https://mail.python.org/pipermail/python-list/2014-June/672994.html), and there have been a number of releases in the meantime, but we were too busy implementing new features, so this announcement provides just a high-level overview of changes: * Basic Unicode support added (thanks to Chris Angelico for driving the effort) * More functionality of standard types and functions is implemented (for example, MicroPython can run a subset of http.client module functionality from the CPython3 stdlib). * Implementations of important Python modules, highly optimized for code size, have been added. These offer a subset of functionality and are prefixed with "u". For example, ure, uheapq, uzlib, uhashlib, and ubinascii are provided. * Lots of microcontroller hardware bindings have been added and generalized. Besides the core interpreter, there's also good progress on modules and applications: * The MicroPython standard library project, https://github.com/micropython/micropython-lib , an effort to port/develop as many Python stdlib modules as possible to MicroPython, has made good progress, with a few dozen modules available on PyPI already (a pip-micropython wrapper is provided to install them). * An asyncio subset implementation, dubbed "uasyncio", is available and should be stable enough. * A proof-of-concept web microframework, "picoweb", based on uasyncio, is being developed: https://github.com/pfalcon/picoweb * Lots of other projects are available on github. The reference implementation of MicroPython runs on a microcontroller board with 1Mb Flash and 128Kb RAM, which should offer a good platform for people interested in microcontroller usage (more info: http://micropython.org/).
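[Editor's note: the "u"-prefixed modules mentioned above are typically consumed through a fallback import, so the same source runs on both MicroPython and CPython. A small sketch of the idiom, runnable on CPython where the u-modules are absent:]

```python
# Fallback-import idiom for the size-optimized "u" modules listed above:
# prefer MicroPython's subset module, fall back to CPython's full stdlib.
try:
    import ubinascii as binascii  # MicroPython
except ImportError:
    import binascii               # CPython

try:
    import uhashlib as hashlib
except ImportError:
    import hashlib

digest = hashlib.sha256(b"micropython").digest()
print(binascii.hexlify(digest).decode())
```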
MicroPython can also be easily built and supported on Linux, MacOSX, and Windows systems (more info: https://github.com/micropython/micropython) -- Best regards, Paul mailto:pmiscml at gmail.com From jimjjewett at gmail.com Mon Dec 8 21:27:23 2014 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Mon, 08 Dec 2014 12:27:23 -0800 (PST) Subject: [Python-Dev] My thinking about the development process In-Reply-To: Message-ID: <548609ab.89688c0a.1045.ffffcc43@mx.google.com> Brett Cannon wrote: > 4. Contributor creates account on bugs.python.org and signs the > [contributor agreement](https://www.python.org/psf/contrib/contrib-form/) Is there an expiration on such forms? If there doesn't need to be (and one form is good for multiple tickets), is there an objection (besides "not done yet") to making "signed the form" part of the bug reporter account, and required to submit to the CI process? (An "I can't sign yet, bug me later" option would allow the current workflow without the "this isn't technically a patch" workaround for "small enough" patches from those with slow-moving employers.) > There's the simple spelling mistake patches and then there's the > code change patches. There are a fair number of one-liner code patches; ideally, they could also be handled quickly. > For the code change patches, contributors need an easy way to get a hold of > the code and get their changes to the core developers. For a fair number of patches, the same workflow as spelling errors is appropriate, except that it would be useful to have an automated state saying "yes, this currently merges fine", so that committers can focus only on patches that are (still) at least that ready. > At best core developers tell a contributor "please send your PR > against 3.4", push-button merge it, update a local clone, merge from > 3.4 to default, do the usual stuff, commit, and then push; Is it common for a patch that should apply to multiple branches to fail on some but not all of them? 
In other words, is there any reason beyond "not done yet" that submitting a patch (or pull request) shouldn't automatically create a patch per branch, with pushbuttons to test/reject/commit? > Our code review tool is a fork that probably should be > replaced as only Martin von Loewis can maintain it. Only he knows the innards, or only he is authorized, or only he knows where the code currently is/how to deploy an update? I know that there were times in the (not-so-recent) past when I had time and willingness to help with some part of the infrastructure, but didn't know where the code was, and didn't feel right making a blind offer. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From brett at python.org Mon Dec 8 21:38:18 2014 From: brett at python.org (Brett Cannon) Date: Mon, 08 Dec 2014 20:38:18 +0000 Subject: [Python-Dev] My thinking about the development process References: <548609ab.89688c0a.1045.ffffcc43@mx.google.com> Message-ID: On Mon Dec 08 2014 at 3:27:43 PM Jim J. Jewett wrote: > > > Brett Cannon wrote: > > 4. Contributor creates account on bugs.python.org and signs the > > [contributor agreement](https://www.python. > org/psf/contrib/contrib-form/) > > Is there an expiration on such forms? If there doesn't need to be > (and one form is good for multiple tickets), is there an objection > (besides "not done yet") to making "signed the form" part of the bug > reporter account, and required to submit to the CI process? (An "I > can't sign yet, bug me later" option would allow the current workflow > without the "this isn't technically a patch" workaround for "small enough" > patches from those with slow-moving employers.) > IANAL but I believe that as long as you didn't sign on behalf of work for your employer it's good for life. > > > > There's the simple spelling mistake patches and then there's the > > code change patches. 
> > There are a fair number of one-liner code patches; ideally, they > could also be handled quickly. > Depends on the change. Syntactic typos could still get through. But yes, they are also a possibility for a quick submission. > > > For the code change patches, contributors need an easy way to get a hold > of > > the code and get their changes to the core developers. > > For a fair number of patches, the same workflow as spelling errors is > appropriate, except that it would be useful to have an automated state > saying "yes, this currently merges fine", so that committers can focus > only on patches that are (still) at least that ready. > > > At best core developers tell a contributor "please send your PR > > against 3.4", push-button merge it, update a local clone, merge from > > 3.4 to default, do the usual stuff, commit, and then push; > > Is it common for a patch that should apply to multiple branches to fail > on some but not all of them? > Going from 3.4 -> 3.5 is almost always clean sans NEWS, but from 2.7 it is nowhere near as guaranteed. > > In other words, is there any reason beyond "not done yet" that submitting > a patch (or pull request) shouldn't automatically create a patch per > branch, with pushbuttons to test/reject/commit? > Assuming that you specify which branches, then not really. But if it is done blindly, then yes, as that's unnecessary noise and could lead to arguments over whether something should (not) be applied to some specific version. > > > Our code review tool is a fork that probably should be > > replaced as only Martin von Loewis can maintain it. > > Only he knows the innards, or only he is authorized, or only he knows > where the code currently is/how to deploy an update? > Innards. -Brett > > I know that there were times in the (not-so-recent) past when I had > time and willingness to help with some part of the infrastructure, but > didn't know where the code was, and didn't feel right making a blind > offer.
> > > -jJ > > -- > > If there are still threading problems with my replies, please > email me with details, so that I can try to resolve them. -jJ > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Mon Dec 8 21:42:14 2014 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 08 Dec 2014 15:42:14 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: <548609ab.89688c0a.1045.ffffcc43@mx.google.com> References: <548609ab.89688c0a.1045.ffffcc43@mx.google.com> Message-ID: <20141208204215.A1356250EEE@webabinitio.net> On Mon, 08 Dec 2014 12:27:23 -0800, "Jim J. Jewett" wrote: > Brett Cannon wrote: > > 4. Contributor creates account on bugs.python.org and signs the > > [contributor agreement](https://www.python.org/psf/contrib/contrib-form/) > > Is there an expiration on such forms? If there doesn't need to be > (and one form is good for multiple tickets), is there an objection > (besides "not done yet") to making "signed the form" part of the bug > reporter account, and required to submit to the CI process? (An "I > can't sign yet, bug me later" option would allow the current workflow > without the "this isn't technically a patch" workaround for "small enough" > patches from those with slow-moving employers.) No expiration. Whether or not we have a CLA from a given tracker id is recorded in the tracker. People also get reminded to submit a CLA if they haven't yet but have submitted a patch. 
> > At best core developers tell a contributor "please send your PR > > against 3.4", push-button merge it, update a local clone, merge from > > 3.4 to default, do the usual stuff, commit, and then push; > > Is it common for a patch that should apply to multiple branches to fail > on some but not all of them? Currently? Yes when 2.7 is involved. If we fix NEWS, then it won't be *common* for maint->default, but it will happen. > In other words, is there any reason beyond "not done yet" that submitting > a patch (or pull request) shouldn't automatically create a patch per > branch, with pushbuttons to test/reject/commit? Not Done Yet (by any of the tools we know about) is the only reason I'm aware of. > > Our code review tool is a fork that probably should be > > replaced as only Martin von Loewis can maintain it. > > Only he knows the innards, or only he is authorized, or only he knows > where the code currently is/how to deploy an update? Only he knows the innards. (Although Ezio has made at least one patch to it.) I think Guido's point was that we (the community) shouldn't be maintaining this private fork of a project that has moved on well beyond us; instead we should be using an active project and leveraging its community with our own contributions (like we do with Roundup). > I know that there were times in the (not-so-recent) past when I had > time and willingness to help with some part of the infrastructure, but > didn't know where the code was, and didn't feel right making a blind > offer. Yeah, that's something that's been getting better lately (thanks, infrastructure team), but where to get the info is still not as clear as would be optimal. 
--David From ben+python at benfinney.id.au Mon Dec 8 23:31:04 2014 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 09 Dec 2014 09:31:04 +1100 Subject: [Python-Dev] Making it possible to accept contributions without CLA (was: My thinking about the development process) References: Message-ID: <85sigprcyf.fsf_-_@benfinney.id.au> Eric Snow writes: > There's no real way around this, is there? [...] the CLA part is pretty > unavoidable. The PSF presently mandates that any contributor to Python sign the "Contributor Agreement". This is a unilateral grant from the contributor to the PSF, and is unequal because the PSF does not grant these same powers to the recipients of Python. I raise this, not to start another disagreement about whether this is desirable; I understand that many within the PSF regard it as an unfortunate barrier to entry, even if it is necessary. Rather, I'm asking what, specifically, necessitates this situation. What would need to change, for the PSF to accept contributions to the Python copyrighted works, without requiring the contributor to do anything but license the work under Apache 2.0 license? Is it specific code within the Python code base which somehow creates this need? How much, and how would the PSF view work to re-implement that code for contribution under Apache 2.0 license? Is it some other dependency? What, specifically; and what can be done to remove that dependency? My goal is to see the PSF reach a state where the licensing situation is an equal-footing "inbound = outbound" like most free software projects; where the PSF can happily receive from a contributor only the exact same license the PSF grants to any recipient of Python. For that to happen, we need to know the specific barriers to such a goal. What are they? -- \ "A computer once beat me at chess, but it was no match for me | `\ at kick boxing."
--Emo Philips | _o__) | Ben Finney From ethan at stoneleaf.us Mon Dec 8 23:40:56 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 08 Dec 2014 14:40:56 -0800 Subject: [Python-Dev] Making it possible to accept contributions without CLA In-Reply-To: <85sigprcyf.fsf_-_@benfinney.id.au> References: <85sigprcyf.fsf_-_@benfinney.id.au> Message-ID: <548628F8.8030100@stoneleaf.us> On 12/08/2014 02:31 PM, Ben Finney wrote: > Eric Snow writes: > >> There's no real way around this, is there? [...] the CLA part is pretty >> unavoidable. > > The PSF presently mandates that any contributor to Python sign > > the "Contributor Agreement". This is a unilateral grant from the > contributor to the PSF, and is unequal because the PSF does not grant > these same powers to the recipients of Python. > > I raise this, not to start another disagreement about whether this is > desirable; I understand that many within the PSF regard it as > an unfortunate barrier to entry, even if it is necessary. > > Rather, I'm asking what, specifically, necessitates this situation. > > What would need to change, for the PSF to accept contributions to the > Python copyrighted works, without requiring the contributor to do > anything but license the work under Apache 2.0 license? > > Is it specific code within the Python code base which somehow creates > this need? How much, and how would the PSF view work to re-implement > that code for contribution under Apache 2.0 license? > > Is it some other dependency? What, specifically; and what can be done to > remove that dependency? > > My goal is to see the PSF reach a state where the licensing situation is > an equal-footing "inbound = outbound" like most free software projects; > where the PSF can happily receive from a contributor only the exact same > license the PSF grants to any recipient of Python. > > For that to happen, we need to know the specific barriers to such a > goal. What are they? Well, this is the wrong mailing list for those questions.
Maybe one of these would work instead? About Python-legal-sig (https://mail.python.org/mailman/listinfo/python-legal-sig) English (USA) This list is for the discussion of Python Legal/Compliance issues. Its focus should be on questions regarding compliance, copyrights on core python, etc. Actual Legal decisions, or legal counsel questions, alterations to the Contributor License Agreement for Python the language should be sent to psf at python.org Python/PSF trademark questions should be sent to psf-trademarks at python.org. Please Note: Legal decisions affecting the IP, Python license stack, etc *must* be approved by Python Software Foundation legal counsel and the board of directors: psf at python.org To see the collection of prior postings to the list, visit the Python-legal-sig Archives. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From barry at python.org Mon Dec 8 23:44:35 2014 From: barry at python.org (Barry Warsaw) Date: Mon, 8 Dec 2014 17:44:35 -0500 Subject: [Python-Dev] Making it possible to accept contributions without CLA (was: My thinking about the development process) In-Reply-To: <85sigprcyf.fsf_-_@benfinney.id.au> References: <85sigprcyf.fsf_-_@benfinney.id.au> Message-ID: <20141208174435.46369f93@anarchist.wooz.org> On Dec 09, 2014, at 09:31 AM, Ben Finney wrote: >Rather, I'm asking what, specifically, necessitates this situation. > >What would need to change, for the PSF to accept contributions to the >Python copyrighted works, without requiring the contributor to do >anything but license the work under Apache 2.0 license? My understanding is that the PSF needs the ability to relicense the contribution under the standard PSF license, and it is the contributor agreement that gives the PSF the legal right to do this. 
Many organizations, both for- and non-profit have this legal requirement, and there are many avenues for satisfying these needs, mostly based on different legal and business interpretations. In the scheme of such things, and IMHO, the PSF CLA is quite reasonable and lightweight, both in what it requires a contributor to provide, and in the value, rights, and guarantees it extends to the contributor. Cheers, -Barry From ben+python at benfinney.id.au Tue Dec 9 00:26:58 2014 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 09 Dec 2014 10:26:58 +1100 Subject: [Python-Dev] Making it possible to accept contributions without CLA References: <85sigprcyf.fsf_-_@benfinney.id.au> <548628F8.8030100@stoneleaf.us> Message-ID: <85lhmhrad9.fsf@benfinney.id.au> Ethan Furman writes: > Well, this is the wrong mailing list for those questions. Thanks. I addressed the claim here where it was made; but you're right that a different forum is better for an ongoing discussion about this topic. Barry Warsaw writes: > My understanding is that the PSF needs the ability to relicense the > contribution under the standard PSF license, and it is the contributor > agreement that gives the PSF the legal right to do this. Okay, that's been raised before. If anyone can cite other specific dependencies that would necessitate a CLA for Python, please contact me off-list, and/or in the Python legal-sig . > Many organizations, both for- and non-profit have this legal > requirement, and there are many avenues for satisfying these needs, > mostly based on different legal and business interpretations. And many do not. It would be good to shift the PSF into the larger set of organisations that do not require a CLA for accepting contributions. Thanks, all. Sorry to bring the topic up again here. -- \ ?When I was born I was so surprised I couldn't talk for a year | `\ and a half.? 
—Gracie Allen | _o__) | Ben Finney From jdhardy at gmail.com Tue Dec 9 10:33:25 2014 From: jdhardy at gmail.com (Jeff Hardy) Date: Tue, 9 Dec 2014 09:33:25 +0000 Subject: [Python-Dev] IronPython 2.7.5 Released Message-ID: On behalf of the IronPython team, I'm very happy to announce the release of IronPython 2.7.5[1]. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. Assemblies for embedding are provided for .NET 3.5, .NET 4, .NET 4.5, and Silverlight 5. IronPython 2.7.5 is primarily a collection of bug fixes[2] which smooths off many of the remaining rough edges. The complete list of changes[3] is also available. A major new feature is the inclusion of `ensurepip`, which will install the `pip` package manager:

```
; -X:Frames is required when using pip
ipy.exe -X:Frames -m ensurepip
; Run from an Administrator console if using IronPython installer
ipy.exe -X:Frames -m pip install html5lib
```

**Note:** The assembly version of IronPython has changed to 2.7.5.0. All previous 2.7 versions had the same version (2.7.0.40) which caused issues when different versions were installed. Publisher policy files are used so that applications don't have to be recompiled, but recompiling is strongly recommended. A huge thanks goes out to Pawel Jasinski, who contributed most of the changes in this release. Thanks are also due to Simon Opelt, Alex Earl, Jeffrey Bester, yngipy hernan, Alexander Köplinger, Vincent Ducros, and fdanny. For Visual Studio integration, check out Python Tools for Visual Studio[4] which has support for IronPython as well as CPython, and many other fantastic features. IronPython 2.7.5 is also available for embedding via NuGet. The main package is IronPython, and the standard library is in IronPython.StdLib.
- Jeff [1] http://ironpython.codeplex.com/releases/view/169382 [2] http://bit.ly/ipy275fixed [3] https://github.com/IronLanguages/main/compare/ipy-2.7.4...ipy-2.7.5 [4] http://pytools.codeplex.com/ From ncoghlan at gmail.com Tue Dec 9 10:42:39 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 9 Dec 2014 19:42:39 +1000 Subject: [Python-Dev] Making it possible to accept contributions without CLA (was: My thinking about the development process) In-Reply-To: <20141208174435.46369f93@anarchist.wooz.org> References: <85sigprcyf.fsf_-_@benfinney.id.au> <20141208174435.46369f93@anarchist.wooz.org> Message-ID: On 9 Dec 2014 08:47, "Barry Warsaw" wrote: > > On Dec 09, 2014, at 09:31 AM, Ben Finney wrote: > > >Rather, I'm asking what, specifically, necessitates this situation. > > > >What would need to change, for the PSF to accept contributions to the > >Python copyrighted works, without requiring the contributor to do > >anything but license the work under Apache 2.0 license? > > My understanding is that the PSF needs the ability to relicense the > contribution under the standard PSF license, and it is the contributor > agreement that gives the PSF the legal right to do this. This matches my understanding as well. The problem is that the PSF licence itself isn't suitable as "licence in", and changing the "licence out" could have a broad ripple effect on downstream consumers (especially since the early history means "just change the outgoing license to the Apache License" isn't an available option, at least as far as I am aware). A more restricted CLA that limited the PSF's outgoing licence choices to OSI approved open source licenses might address some of the concerns without causing problems elsewhere, but the combination of being both interested in core development and having a philosophical or personal objection to signing the CLA seems to be genuinely rare. Cheers, Nick. 
> > Many organizations, both for- and non-profit have this legal requirement, and > there are many avenues for satisfying these needs, mostly based on different > legal and business interpretations. In the scheme of such things, and IMHO, > the PSF CLA is quite reasonable and lightweight, both in what it requires a > contributor to provide, and in the value, rights, and guarantees it extends to > the contributor. > > Cheers, > -Barry > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue Dec 9 15:24:41 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 9 Dec 2014 09:24:41 -0500 Subject: [Python-Dev] Making it possible to accept contributions without CLA (was: My thinking about the development process) In-Reply-To: References: <85sigprcyf.fsf_-_@benfinney.id.au> <20141208174435.46369f93@anarchist.wooz.org> Message-ID: <20141209092441.3059c5c9@limelight.wooz.org> On Dec 09, 2014, at 07:42 PM, Nick Coghlan wrote: >A more restricted CLA that limited the PSF's outgoing licence choices to >OSI approved open source licenses might address some of the concerns >without causing problems elsewhere, but the combination of being both >interested in core development and having a philosophical or personal >objection to signing the CLA seems to be genuinely rare. The CLA does explicitly say "Contributor understands and agrees that PSF shall have the irrevocable and perpetual right to make and distribute copies of any Contribution, as well as to create and distribute collective works and derivative works of any Contribution, under the Initial License or under any other open source license approved by a unanimous vote of the PSF board." 
So while not explicitly limited to an OSI approved license, it must still be "open source", at least in the view of the entire (unanimous) PSF board. "OSI approved" would probably be the least controversial definition of "open source" that the PSF could adopt. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From mdcb808 at gmail.com Wed Dec 10 07:31:57 2014 From: mdcb808 at gmail.com (Matthieu Bec) Date: Tue, 09 Dec 2014 22:31:57 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) Message-ID: <5487E8DD.5010806@gmail.com> Newbie first post on this list; apologies if what follows is out of context ... Hi all, I'm struggling with the issue per the subject; I've read different threads and issue http://bugs.python.org/issue15443, which started in 2012 and is still open as of today. Isn't there a legitimate case for nanosecond support? It's all over the place in 'struct timespec', and (maybe wrongly) I always found Python and C were best neighbors. That's for the notional aspect. More practically, aren't we close enough with current hardware, PTP and the like, that this deserves more consideration? Maybe this has been mentioned before, but the limiting factor isn't just getting nanoseconds: anything sub-microsecond won't work with the current format. OPC UA, which I was looking at just now, has tenth-of-a-microsecond resolution, so it really cares about 100 ns, but datetime's 1 us simply won't cut it. Regards, Matthieu From jacob at luxion.com Wed Dec 10 12:56:30 2014 From: jacob at luxion.com (jacob toft pedersen) Date: Wed, 10 Dec 2014 12:56:30 +0100 Subject: [Python-Dev] Access control for buildbot Message-ID: Hi there, I was visiting your buildbot page for inspiration and found that I apparently have the option to force stop/start all your builds without any access control. You may want to put something in place to enforce access control?
/pedersen From trent at trent.me Wed Dec 10 15:08:56 2014 From: trent at trent.me (Trent Nelson) Date: Wed, 10 Dec 2014 14:08:56 +0000 Subject: [Python-Dev] Access control for buildbot In-Reply-To: References: Message-ID: On Dec 10, 2014, at 6:56 AM, jacob toft pedersen wrote: > Hi there > > I was visiting your buildbot page for inspiration and found that I apparently have the option to force stop/start all your builds without any access control. > > You may want to put something in place to enforce access control? > Nah, as far as I know, no-one has abused it, and it's definitely useful when you need to legitimately use it. Trent. From ncoghlan at gmail.com Wed Dec 10 16:33:20 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Dec 2014 01:33:20 +1000 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <5487E8DD.5010806@gmail.com> References: <5487E8DD.5010806@gmail.com> Message-ID: On 10 December 2014 at 16:31, Matthieu Bec wrote: > > Newbie first post on this list; apologies if what follows is out of context ... > > Hi all, > > I'm struggling with the issue per the subject; I've read different threads and issue > http://bugs.python.org/issue15443, which started in 2012 and is still open as of > today. > > Isn't there a legitimate case for nanosecond support? It's all over the > place in 'struct timespec', and (maybe wrongly) I always found Python and C > were best neighbors. That's for the notional aspect. If you skip down to the more recent 2014 part of the discussion, the use case has been accepted as valid, but the idea still needs a concrete change proposal that addresses the various API design and backwards compatibility issues that arise. Specifically, questions like: * preserving compatibility with passing in microsecond values * how to accept nanosecond values * how to correctly unpickle old datetime pickle values * how to update strptime() and strftime() Cheers, Nick.
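To make the first two bullets concrete, here is a minimal sketch of one possible shape for this (the helper name and the return convention are hypothetical illustrations, not anything proposed on the tracker):

```python
from datetime import datetime, timedelta, timezone

def datetime_from_timespec(sec, nsec):
    # Hypothetical helper, not an actual datetime API: build an aware
    # datetime from a C-style struct timespec (seconds, nanoseconds).
    # datetime only resolves microseconds today, so the sub-microsecond
    # remainder is handed back to the caller instead of being dropped
    # silently.
    micro, nano_remainder = divmod(nsec, 1000)
    dt = datetime.fromtimestamp(sec, tz=timezone.utc) + timedelta(microseconds=micro)
    return dt, nano_remainder

dt, rem = datetime_from_timespec(10, 123456789)
# dt keeps the 123456 whole microseconds; rem holds the 789 ns that
# datetime cannot currently represent.
```

Any real proposal would still have to decide whether such a remainder is stored, rounded, or rejected; the sketch only shows where the precision is lost today.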
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Dec 10 16:49:37 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Dec 2014 01:49:37 +1000 Subject: [Python-Dev] Access control for buildbot In-Reply-To: References: Message-ID: On 11 December 2014 at 00:08, Trent Nelson wrote: > > On Dec 10, 2014, at 6:56 AM, jacob toft pedersen wrote: > >> Hi there >> >> I was visiting your buildbot page for inspiration and found that I apparently have the option to force stop/start all your builds without any access control. >> >> You may want to put something in place to enforce access control? >> > > Nah, as far as I know, no-one has abused it, and it's definitely useful when you need to legitimately use it. There are controls on the permitted input for forced builds, and if anyone starts being annoying with it, we have the option of just disabling it entirely until we set up authentication for it. Requiring authentication for the BuildBot triggers is likely an improvement we should consider in the current infrastructure review regardless, though. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From zhuoyikang at gmail.com Wed Dec 10 17:27:52 2014 From: zhuoyikang at gmail.com (=?UTF-8?B?5Y2T5LiA5oqX?=) Date: Thu, 11 Dec 2014 00:27:52 +0800 Subject: [Python-Dev] python compile error on mac os x Message-ID: hello, everybody ,i occur an ld error in my mac os x python 3.4.2 gcc 4.8.2 /Applications/Xcode.app/Contents/Developer/usr/bin/make Parser/pgen gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/lib -export-dynamic Parser/acceler.o Parser/grammar1.o Parser/listnode.o Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o Parser/firstsets.o Parser/grammar.o Parser/pgen.o Objects/obmalloc.o Python/dynamic_annotations.o Python/mysnprintf.o Python/pyctype.o Parser/tokenizer_pgen.o Parser/printgrammar.o Parser/parsetok_pgen.o Parser/pgenmain.o -ldl -framework CoreFoundation -o Parser/pgen ld: unknown option: -export-dynamic collect2: error: ld returned 1 exit status make[1]: *** [Parser/pgen] Error 1 make: *** [Include/graminit.h] Error 2 how to solve this ? anybody help me ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Dec 10 17:40:21 2014 From: brett at python.org (Brett Cannon) Date: Wed, 10 Dec 2014 16:40:21 +0000 Subject: [Python-Dev] python compile error on mac os x References: Message-ID: It would be better to file a bug at bugs.python.org so it's easier to track the problem. On Wed Dec 10 2014 at 11:37:30 AM 卓一抗
wrote: > hello, everybody ,i occur an ld error in my mac os x > > python 3.4.2 gcc 4.8.2 > > /Applications/Xcode.app/Contents/Developer/usr/bin/make Parser/pgen > gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/lib > -export-dynamic Parser/acceler.o Parser/grammar1.o Parser/listnode.o > Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o > Parser/firstsets.o Parser/grammar.o Parser/pgen.o Objects/obmalloc.o > Python/dynamic_annotations.o Python/mysnprintf.o Python/pyctype.o > Parser/tokenizer_pgen.o Parser/printgrammar.o Parser/parsetok_pgen.o > Parser/pgenmain.o -ldl -framework CoreFoundation -o Parser/pgen > ld: unknown option: -export-dynamic > collect2: error: ld returned 1 exit status > make[1]: *** [Parser/pgen] Error 1 > make: *** [Include/graminit.h] Error 2 > > > how to solve this ? anybody help me ? > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhuoyikang at gmail.com Wed Dec 10 17:45:39 2014 From: zhuoyikang at gmail.com (=?UTF-8?B?5Y2T5LiA5oqX?=) Date: Thu, 11 Dec 2014 00:45:39 +0800 Subject: [Python-Dev] python compile error on mac os x In-Reply-To: References: Message-ID: thank u very much. 2014-12-11 0:40 GMT+08:00 Brett Cannon : > It would be better to file a bug at bugs.python.org so it's easier to > track the problem. > > On Wed Dec 10 2014 at 11:37:30 AM ??? 
wrote: > >> hello, everybody ,i occur an ld error in my mac os x >> >> python 3.4.2 gcc 4.8.2 >> >> /Applications/Xcode.app/Contents/Developer/usr/bin/make Parser/pgen >> gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/lib >> -export-dynamic Parser/acceler.o Parser/grammar1.o Parser/listnode.o >> Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o >> Parser/firstsets.o Parser/grammar.o Parser/pgen.o Objects/obmalloc.o >> Python/dynamic_annotations.o Python/mysnprintf.o Python/pyctype.o >> Parser/tokenizer_pgen.o Parser/printgrammar.o Parser/parsetok_pgen.o >> Parser/pgenmain.o -ldl -framework CoreFoundation -o Parser/pgen >> ld: unknown option: -export-dynamic >> collect2: error: ld returned 1 exit status >> make[1]: *** [Parser/pgen] Error 1 >> make: *** [Include/graminit.h] Error 2 >> >> >> how to solve this ? anybody help me ? >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> brett%40python.org >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brunocauet at gmail.com Wed Dec 10 17:59:55 2014 From: brunocauet at gmail.com (Bruno Cauet) Date: Wed, 10 Dec 2014 17:59:55 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition Message-ID: Hi all, Last year a survey was conducted on python 2 and 3 usage. Here is the 2014 edition, slightly updated (from 9 to 11 questions). It should not take you more than 1 minute to fill. I would be pleased if you took that time. Here's the url: http://goo.gl/forms/tDTcm8UzB3 I'll publish the results around the end of the year. Last year results: https://wiki.python.org/moin/2.x-vs-3.x-survey Thank you Bruno -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Wed Dec 10 18:10:24 2014 From: donald at stufft.io (Donald Stufft) Date: Wed, 10 Dec 2014 12:10:24 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: <341920B4-12B5-499D-95FC-5572338AE602@stufft.io> > On Dec 10, 2014, at 11:59 AM, Bruno Cauet wrote: > > Hi all, > Last year a survey was conducted on python 2 and 3 usage. > Here is the 2014 edition, slightly updated (from 9 to 11 questions). > It should not take you more than 1 minute to fill. I would be pleased if you took that time. > > Here's the url: http://goo.gl/forms/tDTcm8UzB3 > I'll publish the results around the end of the year. > > Last year results: https://wiki.python.org/moin/2.x-vs-3.x-survey Just going to say http://d.stufft.io/image/0z1841112o0C is a hard question to answer, since most code I write is both. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From graffatcolmingov at gmail.com Wed Dec 10 18:15:17 2014 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Wed, 10 Dec 2014 11:15:17 -0600 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <341920B4-12B5-499D-95FC-5572338AE602@stufft.io> References: <341920B4-12B5-499D-95FC-5572338AE602@stufft.io> Message-ID: On Wed, Dec 10, 2014 at 11:10 AM, Donald Stufft wrote: > > On Dec 10, 2014, at 11:59 AM, Bruno Cauet wrote: > > Hi all, > Last year a survey was conducted on python 2 and 3 usage. > Here is the 2014 edition, slightly updated (from 9 to 11 questions). > It should not take you more than 1 minute to fill. I would be pleased if you > took that time. > > Here's the url: http://goo.gl/forms/tDTcm8UzB3 > I'll publish the results around the end of the year. 
> > Last year results: https://wiki.python.org/moin/2.x-vs-3.x-survey > > > Just going to say http://d.stufft.io/image/0z1841112o0C is a hard question > to answer, since most code I write is both. > The same holds for me. From njs at pobox.com Wed Dec 10 18:24:10 2014 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 10 Dec 2014 17:24:10 +0000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <341920B4-12B5-499D-95FC-5572338AE602@stufft.io> Message-ID: On 10 Dec 2014 17:16, "Ian Cordasco" wrote: > > On Wed, Dec 10, 2014 at 11:10 AM, Donald Stufft wrote: > > > > On Dec 10, 2014, at 11:59 AM, Bruno Cauet wrote: > > > > Hi all, > > Last year a survey was conducted on python 2 and 3 usage. > > Here is the 2014 edition, slightly updated (from 9 to 11 questions). > > It should not take you more than 1 minute to fill. I would be pleased if you > > took that time. > > > > Here's the url: http://goo.gl/forms/tDTcm8UzB3 > > I'll publish the results around the end of the year. > > > > Last year results: https://wiki.python.org/moin/2.x-vs-3.x-survey > > > > > > Just going to say http://d.stufft.io/image/0z1841112o0C is a hard question > > to answer, since most code I write is both. > > > > The same holds for me. That question appears to have just grown a "compatible with both" option. It might make sense to add a similar option to the following question about what you use for personal projects. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From brunocauet at gmail.com Wed Dec 10 18:32:08 2014 From: brunocauet at gmail.com (Bruno Cauet) Date: Wed, 10 Dec 2014 18:32:08 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <341920B4-12B5-499D-95FC-5572338AE602@stufft.io> Message-ID: Remarks heard & form updated. Nathaniel, I'm not sure about that: even if the code is 2- and 3-compatible you'll pick one runtime. 
2 others questions now mention writing polyglot code. By the way I published the survey on HN, /r/programming & /r/python: https://news.ycombinator.com/item?id=8730156 http://redd.it/2ovlwm http://redd.it/2ovls4 Feel free to publish it anywhere else, to get as many answers as possible. Bruno 2014-12-10 18:24 GMT+01:00 Nathaniel Smith : > On 10 Dec 2014 17:16, "Ian Cordasco" wrote: > > > > On Wed, Dec 10, 2014 at 11:10 AM, Donald Stufft > wrote: > > > > > > On Dec 10, 2014, at 11:59 AM, Bruno Cauet > wrote: > > > > > > Hi all, > > > Last year a survey was conducted on python 2 and 3 usage. > > > Here is the 2014 edition, slightly updated (from 9 to 11 questions). > > > It should not take you more than 1 minute to fill. I would be pleased > if you > > > took that time. > > > > > > Here's the url: http://goo.gl/forms/tDTcm8UzB3 > > > I'll publish the results around the end of the year. > > > > > > Last year results: https://wiki.python.org/moin/2.x-vs-3.x-survey > > > > > > > > > Just going to say http://d.stufft.io/image/0z1841112o0C is a hard > question > > > to answer, since most code I write is both. > > > > > > > The same holds for me. > > That question appears to have just grown a "compatible with both" option. > > It might make sense to add a similar option to the following question > about what you use for personal projects. > > -n > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brunocauet%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdcb808 at gmail.com Wed Dec 10 19:28:59 2014 From: mdcb808 at gmail.com (mdcb808) Date: Wed, 10 Dec 2014 10:28:59 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) 
In-Reply-To: References: <5487E8DD.5010806@gmail.com> Message-ID: <548890EB.4070002@gmail.com> On 12/10/14 7:33 AM, Nick Coghlan wrote: > On 10 December 2014 at 16:31, Matthieu Bec wrote: >> Newbie first post on this list; apologies if what follows is out of context ... >> >> Hi all, >> >> I'm struggling with the issue per the subject; I've read different threads and issue >> http://bugs.python.org/issue15443, which started in 2012 and is still open as of >> today. >> >> Isn't there a legitimate case for nanosecond support? It's all over the >> place in 'struct timespec', and (maybe wrongly) I always found Python and C >> were best neighbors. That's for the notional aspect. > If you skip down to the more recent 2014 part of the discussion, the > use case has been accepted as valid, but the idea still needs a > concrete change proposal that addresses the various API design and > backwards compatibility issues that arise. Specifically, questions > like: Thanks Nick. These are typically discussed on this list or using the bug tracker? maybe YNGTNI applied, not clear why it's not there after 2 years. I'm no expert but one could imagine something reasonably simple: - a new type datetime.struct_timespec (a la time.struct_tm) - a new constructor datetime.time(struct_timespec), so what already exists stays untouched - pickle versioning using free bits, a new format that favors clarity over saving bytes (as described in 15443) - not sure what's at stake with the strp/ftime() but can't imagine it's a biggie Regards, Matthieu > * preserving compatibility with passing in microsecond values > * how to accept nanosecond values > * how to correctly unpickle old datetime pickle values > * how to update strptime() and strftime() > > Cheers, > Nick.
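The first bullet could be as small as a named tuple; here is a sketch (the type name and fields mirror C's struct timespec as suggested above, while the normalization helper is purely illustrative, not a proposed API):

```python
from collections import namedtuple

# Sketch of the proposed type, patterned on time.struct_tm and C's
# struct timespec.  Not an actual datetime API.
struct_timespec = namedtuple('struct_timespec', 'tv_sec tv_nsec')

def normalize(ts):
    # Carry overflowing nanoseconds into whole seconds, the way C code
    # normalizes a timespec, so tv_nsec always lands in [0, 10**9).
    carry, nsec = divmod(ts.tv_nsec, 10 ** 9)
    return struct_timespec(ts.tv_sec + carry, nsec)

ts = normalize(struct_timespec(5, 1500000000))
```

A type like this only addresses the input side; the harder questions in the quoted list (old pickles, strptime()/strftime()) are untouched by it.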
> From benjamin at python.org Wed Dec 10 23:59:00 2014 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 10 Dec 2014 17:59:00 -0500 Subject: [Python-Dev] [RELEASE] Python 2.7.9 Message-ID: <1418252340.1904966.201424689.3F29EF28@webmail.messagingengine.com> It is my pleasure to announce the release of Python 2.7.9, a new bugfix release in the Python 2.7 series. Despite technically being a maintenance release, Python 2.7.9 includes several majors changes from 2.7.8: - The "ensurepip" module has been backported to Python 2.7 - Python 3's ssl module has been backported to Python 2.7. - HTTPS certificates are now verified by default using the system's certificate store. - SSLv3 has been disabled by default due to the POODLE attack. Downloads are at https://www.python.org/downloads/release/python-279/ Please report bugs to https://bugs.python.org/ I would like to thank the people who made the above security and usability improvements listed above possible. Among others, Alex Gaynor, David Reid, Nick Coghlan, and Donald Stufft wrote many PEPs and a lot of code to bring those features to 2.7.9. Thank you. Enjoy, Benjamin 2.7 release manager on behalf on python-dev and all of Python's contributors From stephen at xemacs.org Thu Dec 11 06:10:05 2014 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 11 Dec 2014 14:10:05 +0900 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <548890EB.4070002@gmail.com> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> Message-ID: <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> mdcb808 writes: > These are typically discussed on this list or using the bug > tracker? I think this discussion belongs on python-dev because the requirement is clear, but a full specification involves backward compatibility with older interfaces, and clearly different people place different values on the various aspects of the problem. 
It makes sense to go straight to the tracker when the design is done or obvious, or backward compatibility is clearly not involved. The tracker is also the place to record objective progress (patches, tests, bug reports). Python-Dev is where minds meet. What Nick is saying is that more design needs to be done to resolve differences of opinion on the best way to move forward. > maybe YNGTNI applied, Evidently not. If a senior developer really thought it's a YAGNI, the issue would have been closed WONTFIX. It seems the need is believable. > not clear why it's not there after 2 years. There's only one reason you need to worry about: nobody wrote a patch that meets the concerns of the senior developers (one of which is that concerns raised by anybody remain unresolved; they don't always have strong opinions themselves).[1] > - not sure what's at stake with the strp/ftime() but can't imagine > it's a biggie If you want something done, you don't necessarily need to supply a patch. But you have to do more to move things forward than just say "I can't imagine why anybody worries about that." You have to find out what their worries are, and explain that their worries won't be realized in the case of the obvious design (eg, the one you presented), or provide a design that avoids realizing those worries. Or you can get the senior developers to overrule the worriers, but you need a relatively important use case to make that fly. Or you can get somebody else to do some of the above, but that also requires presenting an important use case (to that somebody). Footnotes: [1] That's not 100% accurate: there is a shortage of senior developer time for reviewing patches. If it's simply that nobody has looked at the issue, simply bringing it up may be sufficient to get attention and then action. But Nick's response makes it clear that doesn't apply to this issue; people have looked at the issue and have unresolved concerns.
From g.rodola at gmail.com Thu Dec 11 15:47:34 2014 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Thu, 11 Dec 2014 15:47:34 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: On Wed, Dec 10, 2014 at 5:59 PM, Bruno Cauet wrote: > Hi all, > Last year a survey was conducted on python 2 and 3 usage. > Here is the 2014 edition, slightly updated (from 9 to 11 questions). > It should not take you more than 1 minute to fill. I would be pleased if > you took that time. > > Here's the url: http://goo.gl/forms/tDTcm8UzB3 > I'll publish the results around the end of the year. > > Last year results: https://wiki.python.org/moin/2.x-vs-3.x-survey > > Thank you > Bruno > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com > I still think the only *real* obstacle remains the lack of important packages such as twisted, gevent and pika which haven't been ported yet. With those ones ported switching to Python 3 *right now* is not only possible and relatively easy, but also convenient. -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Thu Dec 11 15:59:28 2014 From: bcannon at gmail.com (Brett Cannon) Date: Thu, 11 Dec 2014 14:59:28 +0000 Subject: [Python-Dev] My thinking about the development process References: Message-ID: As I didn't hear any objections, I'm officially stating that I expect initial draft PEPs to be in by February 1 to know who is in the running to focus discussion. I then expect complete PEPs by April 1 so I can read them before PyCon and have informed discussions while I'm there. 
I will then plan to make a final decision by May 1 so that we can try to have the changes ready for Python 3.6 development (currently scheduled for Sep 2015). On Fri Dec 05 2014 at 3:04:48 PM Brett Cannon wrote: > This is a bit long as I espoused as if this was a blog post to try and > give background info on my thinking, etc. The TL;DR folks should start at > the "Ideal Scenario" section and read to the end. > > P.S.: This is in Markdown and I have put it up at > https://gist.github.com/brettcannon/a9c9a5989dc383ed73b4 if you want a > nicer formatted version for reading. > > # History lesson > Since I signed up for the python-dev mailing list way back in June 2002, > there seems to be a cycle where we as a group come to a realization that > our current software development process has not kept up with modern > practices and could stand for an update. For me this was first shown when > we moved from SourceForge to our own infrastructure, then again when we > moved from Subversion to Mercurial (I led both of these initiatives, so > it's somewhat a tradition/curse I find myself in this position yet again). > And so we again find ourselves at the point of realizing that we are not > keeping up with current practices and thus need to evaluate how we can > improve our situation. > > # Where we are now > Now it should be realized that we have two sets of users of our development > process: contributors and core developers (the latter of whom can play both > roles). If you take a rough outline of our current, recommended process it > goes something like this: > > 1. Contributor clones a repository from hg.python.org > 2. Contributor makes desired changes > 3. Contributor generates a patch > 4. Contributor creates account on bugs.python.org and signs the > [contributor agreement]( > https://www.python.org/psf/contrib/contrib-form/) > 4. Contributor creates an issue on bugs.python.org (if one does not > already exist) and uploads a patch > 5. 
Core developer evaluates patch, possibly leaving comments through our > [custom version of Rietveld](http://bugs.python.org/review/) > 6. Contributor revises patch based on feedback and uploads new patch > 7. Core developer downloads patch and applies it to a clean clone > 8. Core developer runs the tests > 9. Core developer does one last `hg pull -u` and then commits the changes > to various branches > > I think we can all agree it works to some extent, but isn't exactly > smooth. There are multiple steps in there -- in full or partially -- that > can be automated. There is room to improve everyone's lives. > > And we can't forget the people who help keep all of this running as well. > There are those that manage the SSH keys, the issue tracker, the review > tool, hg.python.org, and the email system that lets us know when stuff > happens on any of these other systems. The impact on them needs to also be > considered. > > ## Contributors > I see two scenarios for contributors to optimize for. There's the simple > spelling mistake patches and then there's the code change patches. The > former is the kind of thing that you can do in a browser without much > effort and should be a no-brainer commit/reject decision for a core > developer. This is what the GitHub/Bitbucket camps have been promoting > their solution for solving while leaving the cpython repo alone. > Unfortunately the bulk of our documentation is in the Doc/ directory of > cpython. While it's nice to think about moving the devguide, peps, and even > breaking out the tutorial to repos hosted on Bitbucket/GitHub, everything > else is in Doc/ (language reference, howtos, stdlib, C API, etc.). 
So > unless we want to completely break all of Doc/ out of the cpython repo and > have core developers willing to edit two separate repos when making changes > that impact code **and** docs, moving only a subset of docs feels like a > band-aid solution that ignores the big, white elephant in the room: the > cpython repo, where a bulk of patches are targeting. > > For the code change patches, contributors need an easy way to get a hold > of the code and get their changes to the core developers. After that it's > things like letting contributors know that their patch doesn't apply > cleanly, doesn't pass tests, etc. As of right now getting the patch into > the issue tracker is a bit manual but nothing crazy. The real issue in this > scenario is core developer response time. > > ## Core developers > There is a finite amount of time that core developers get to contribute to > Python and it fluctuates greatly. This means that if a process can be found > which allows core developers to spend less time doing mechanical work and > more time doing things that can't be automated -- namely code reviews -- > then the throughput of patches being accepted/rejected will increase. This > also impacts any increased patch submission rate that comes from improving > the situation for contributors because if the throughput doesn't change > then there will simply be more patches sitting in the issue tracker and > that doesn't benefit anyone. > > # My ideal scenario > If I had an infinite amount of resources (money, volunteers, time, etc.), > this would be my ideal scenario: > > 1. Contributor gets code from wherever; easiest to just say "fork on > GitHub or Bitbucket" as they would be official mirrors of hg.python.org > and are updated after every commit, but could clone hg.python.org/cpython > if they wanted > 2. Contributor makes edits; if they cloned on Bitbucket or GitHub then > they have browser edit access already > 3. 
Contributor creates an account at bugs.python.org and signs the CLA > 3. The contributor creates an issue at bugs.python.org (probably the one > piece of infrastructure we all agree is better than the other options, > although its workflow could use an update) > 4. If the contributor used Bitbucket or GitHub, they send a pull request > with the issue # in the PR message > 5. bugs.python.org notices the PR, grabs a patch for it, and puts it on > bugs.python.org for code review > 6. CI runs on the patch based on what Python versions are specified in the > issue tracker, letting everyone know if it applied cleanly, passed tests on > the OSs that would be affected, and also got a test coverage report > 7. Core developer does a code review > 8. Contributor updates their code based on the code review and the updated > patch gets pulled by bugs.python.org automatically and CI runs again > 9. Once the patch is acceptable and assuming the patch applies cleanly to > all versions to commit to, the core developer clicks a "Commit" button, > fills in a commit message and NEWS entry, and everything gets committed (if > the patch can't apply cleanly then the core developer does it the > old-fashioned way, or maybe auto-generate a new PR which can be manually > touched up so it does apply cleanly?) > > Basically the ideal scenario lets contributors use whatever tools and > platforms that they want and provides as much automated support as possible > to make sure their code is tip-top before and during code review while core > developers can review and commit patches so easily that they can do their > job from a beach with a tablet and some WiFi. > > ## Where the current proposed solutions seem to fall short > ### GitHub/Bitbucket > Basically GitHub/Bitbucket is a win for contributors but doesn't buy core > developers that much. GitHub/Bitbucket gives contributors the easy cloning, > drive-by patches, CI, and PRs. 
Core developers get a code review tool -- > I'm counting Rietveld as deprecated after Guido's comments about the code's > maintenance issues -- and push-button commits **only for single branch > changes**. But for any patch that crosses branches we don't really gain > anything. At best core developers tell a contributor "please send your PR > against 3.4", push-button merge it, update a local clone, merge from 3.4 to > default, do the usual stuff, commit, and then push; that still keeps me off > the beach, though, so that doesn't get us the whole way. You could force > people to submit two PRs, but I don't see that flying. Maybe some tool > could be written that automatically handles the merge/commit across > branches once the initial PR is in? Or automatically create a PR that core > developers can touch up as necessary and then accept that as well? > Regardless, some solution is necessary to handle branch-crossing PRs. > > As for GitHub vs. Bitbucket, I personally don't care. I like GitHub's > interface more, but that's personal taste. I like hg more than git, but > that's also personal taste (and I consider a transition from hg to git a > hassle but not a deal-breaker but also not a win). It is unfortunate, > though, that under this scenario we would have to choose only one platform. > > It's also unfortunate both are closed-source, but that's not a > deal-breaker, just a knock against if the decision is close. > > ### Our own infrastructure > The shortcoming here is the need for developers, developers, developers! > Everything outlined in the ideal scenario is totally doable on our own > infrastructure with enough code and time (donated/paid-for infrastructure > shouldn't be an issue). But historically that code and time has not > materialized. Our code review tool is a fork that probably should be > replaced as only Martin von Löwis can maintain it. Basically Ezio Melotti > maintains the issue tracker's code. 
We don't exactly have a ton of people > constantly going "I'm so bored because everything for Python's development > infrastructure gets sorted so quickly!" A perfect example is that R. David > Murray came up with a nice update for our workflow after PyCon but then ran > out of time after mostly defining it and nothing ever became of it (maybe > we can rectify that at PyCon?). Eric Snow has pointed out how he has > written similar code for pulling PRs from I think GitHub to another code > review tool, but that doesn't magically make it work in our infrastructure > or get someone to write it and help maintain it (no offense, Eric). > > IOW our infrastructure can do anything, but it can't run on hopes and > dreams. Commitments from many people to making this happen by a certain > deadline will be needed so as to not allow it to drag on forever. People > would also have to commit to continued maintenance to make this viable > long-term. > > # Next steps > I'm thinking first draft PEPs by February 1 to know who's all-in (8 weeks > away), all details worked out in final PEPs and whatever is required to > prove to me it will work by the PyCon language summit (4 months away). I > make a decision by May 1, and > then implementation aims to be done by the time 3.5.0 is cut so we can > switch over shortly thereafter (9 months away). Sound like a reasonable > timeline? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Dec 11 16:02:19 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 11 Dec 2014 10:02:19 -0500 Subject: [Python-Dev] My thinking about the development process In-Reply-To: References: Message-ID: <79337E4D-7C72-46EA-B2D5-6835DB1F3DF7@stufft.io> > On Dec 11, 2014, at 9:59 AM, Brett Cannon wrote: > > As I didn't hear any objections, I'm officially stating that I expect initial draft PEPs to be in by February 1 to know who is in the running to focus discussion. 
I then expect complete PEPs by April 1 so I can read them before PyCon and have informed discussions while I'm there. I will then plan to make a final decision by May 1 so that we can try to have the changes ready for Python 3.6 development (currently scheduled for Sep 2015). Is it OK to adapt my current PEP or should I create a whole new one? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Thu Dec 11 16:08:26 2014 From: bcannon at gmail.com (Brett Cannon) Date: Thu, 11 Dec 2014 15:08:26 +0000 Subject: [Python-Dev] My thinking about the development process References: <79337E4D-7C72-46EA-B2D5-6835DB1F3DF7@stufft.io> Message-ID: Just adapt your current PEP. On Thu Dec 11 2014 at 10:02:23 AM Donald Stufft wrote: > > On Dec 11, 2014, at 9:59 AM, Brett Cannon wrote: > > As I didn't hear any objections, I'm officially stating that I expect > initial draft PEPs to be in by February 1 to know who is in the running to > focus discussion. I then expect complete PEPs by April 1 so I can read them > before PyCon and have informed discussions while I'm there. I will then > plan to make a final decision by May 1 so that we can try to have the > changes ready for Python 3.6 development (currently scheduled for Sep 2015). > > > Is it OK to adapt my current PEP or should I create a whole new one? > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdcb808 at gmail.com Thu Dec 11 18:58:22 2014 From: mdcb808 at gmail.com (Matthieu Bec) Date: Thu, 11 Dec 2014 09:58:22 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) 
In-Reply-To: <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <5489DB3E.7020005@gmail.com> Thanks Stephen elaborating on the process. and apologies, I was dismissing the last point only half jokingly. I read the comment for strftime / strptime in the report as meant to remember to implement it. It seems picking a new format letter (or keep using "%f" if acceptable) that would accept/produce up to 9 characters instead of 6 for nanoseconds would do most of the trick. Maybe there's no issue or I don't understand it. That completes my chant to awaken the Elderers! Regards, Matthieu On 12/10/14 9:10 PM, Stephen J. Turnbull wrote: > mdcb808 writes: > > > These are typically discussed on this list or using the bug > > tracker? > > I think this discussion belongs on python-dev because the requirement > is clear, but a full specification involves backward compatibility > with older interfaces, and clearly different people place different > values on the various aspects of the problem. It makes sense to go > straight to tracker when the design is done or obvious, or backward > compatibility is clearly not involved. The tracker is also the place > to record objective progress (patches, tests, bug reports). > Python-Dev is where minds meet. > > What Nick is saying is that more design needs to be done to resolve > differences of opinion on the best way to move forward. > > > maybe YNGTNI applied, > > Evidently not. If a senior developer really thought it's a YAGNI, the > issue would have been closed WONTFIX. It seems the need is believable. > > > not clear why it's not there after 2 eyars. 
> > There's only one reason you need to worry about: nobody wrote a patch > that meets the concerns of the senior developers (one of which is that > concerns raised by anybody remain unresolved; they don't always have > strong opinions themselves).[1] > > > - not sure what's at stake with the strp/ftime() but cant imagine > > it's a biggie > > If you want something done, you don't necessarily need to supply a > patch. But you have to do more to move things forward that just say > "I can't imagine why anybody worries about that." You have to find > out what their worries are, and explain that their worries won't be > realized in the case of the obvious design (eg, the one you > presented), or provide a design that avoids realizing those worries. > Or you can get the senior developers to overrule the worriers, but you > need a relatively important use case to make that fly. > > Or you can get somebody else to do some of the above, but that also > requires presenting an important use case (to that somebody). > > Footnotes: > [1] That's not 100% accurate: there is a shortage of senior developer > time for reviewing patches. If it's simply that nobody has looked at > the issue, simply bringing it up may be sufficient to get attention > and then action. But Nick's response makes it clear that doesn't > apply to this issue; people have looked at the issue and have > unresolved concerns. > From skip.montanaro at gmail.com Thu Dec 11 19:33:17 2014 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Thu, 11 Dec 2014 12:33:17 -0600 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <5489DB3E.7020005@gmail.com> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> Message-ID: On Thu, Dec 11, 2014 at 11:58 AM, Matthieu Bec wrote: > ...or keep using "%f" if acceptable... That might be a problem. 
While it will probably work most of the time, there are likely to be situations where the caller assumes it generates a six-digit string. I did a little poking around. It seems like "%N" isn't used. Skip From guido at python.org Thu Dec 11 20:14:27 2014 From: guido at python.org (Guido van Rossum) Date: Thu, 11 Dec 2014 11:14:27 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> Message-ID: Another issue to consider here is that parsing and printing should be symmetrical. The %f format gobbles up exactly 6 digits. Finally, strptime and strftime are not invented by Python, the same functions with (mostly) the same format characters are defined by other languages. Is there not a single other language that has added support for nanoseconds to its strftime/strptime? (I wouldn't be surprised if there wasn't -- while computer clocks have a precision in nanoseconds, that doesn't mean they are that *accurate* at all (even with ntpd running). On Thu, Dec 11, 2014 at 10:33 AM, Skip Montanaro wrote: > On Thu, Dec 11, 2014 at 11:58 AM, Matthieu Bec wrote: > > ...or keep using "%f" if acceptable... > > That might be a problem. While it will probably work most of the time, > there are likely to be situations where the caller assumes it > generates a six-digit string. I did a little poking around. It seems > like "%N" isn't used. > > Skip > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
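[Editor's note: a short demonstration of the behavior Skip and Guido describe, runnable with stock CPython; no hypothetical APIs are involved. strftime's %f always emits six digits, and strptime's %f matches at most six, so a nine-digit nanosecond fraction cannot round-trip:]

```python
from datetime import datetime

# strftime's %f always renders the microsecond field as exactly six
# zero-padded digits, regardless of the value's magnitude.
dt = datetime(2014, 12, 11, 11, 14, 27, 123456)
print(dt.strftime("%H:%M:%S.%f"))                # 11:14:27.123456
print(dt.replace(microsecond=5).strftime("%f"))  # 000005

# strptime's %f matches one to six digits, so a nine-digit
# (nanosecond) fraction leaves trailing data unconverted.
try:
    datetime.strptime("11:14:27.123456789", "%H:%M:%S.%f")
except ValueError as exc:
    print(exc)  # unconverted data remains: 789
```

This is why simply reusing %f for nanoseconds would either change the output width (breaking callers that assume six digits) or reject nine-digit input.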
URL: From solipsis at pitrou.net Thu Dec 11 20:23:56 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 11 Dec 2014 20:23:56 +0100 Subject: [Python-Dev] datetime nanosecond support (ctd?) References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> Message-ID: <20141211202356.07462c01@fsol> I think strftime / strptime support is a low-priority concern on this topic, and can probably be discussed independently of the core nanosecond support. Regards Antoine. On Thu, 11 Dec 2014 11:14:27 -0800 Guido van Rossum wrote: > Another issue to consider here is that parsing and printing should be > symmetrical. The %f format gobbles up exactly 6 digits. > > Finally, strptime and strftime are not invented by Python, the same > functions with (mostly) the same format characters are defined by other > languages. Is there not a single other language that has added support for > nanoseconds to its strftime/strptime? (I wouldn't be surprised if there > wasn't -- while computer clocks have a precision in nanoseconds, that > doesn't mean they are that *accurate* at all (even with ntpd running). > > On Thu, Dec 11, 2014 at 10:33 AM, Skip Montanaro > wrote: > > > On Thu, Dec 11, 2014 at 11:58 AM, Matthieu Bec wrote: > > > ...or keep using "%f" if acceptable... > > > > That might be a problem. While it will probably work most of the time, > > there are likely to be situations where the caller assumes it > > generates a six-digit string. I did a little poking around. It seems > > like "%N" isn't used. 
> > > > Skip > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > From wizzat at gmail.com Thu Dec 11 20:35:07 2014 From: wizzat at gmail.com (Mark Roberts) Date: Thu, 11 Dec 2014 11:35:07 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: I disagree. I know there's a huge focus on The Big Libraries (and wholesale migration is all but impossible without them), but the long tail of libraries is still incredibly important. It's like saying that migrating the top 10 Perl libraries to Perl 6 would allow people to completely ignore all of CPAN. It just doesn't make sense. -Mark On Thu, Dec 11, 2014 at 6:47 AM, Giampaolo Rodola' wrote: > > > On Wed, Dec 10, 2014 at 5:59 PM, Bruno Cauet wrote: > >> Hi all, >> Last year a survey was conducted on python 2 and 3 usage. >> Here is the 2014 edition, slightly updated (from 9 to 11 questions). >> It should not take you more than 1 minute to fill. I would be pleased if >> you took that time. >> >> Here's the url: http://goo.gl/forms/tDTcm8UzB3 >> I'll publish the results around the end of the year. >> >> Last year results: https://wiki.python.org/moin/2.x-vs-3.x-survey >> >> Thank you >> Bruno >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com >> > > I still think the only *real* obstacle remains the lack of important > packages such as twisted, gevent and pika which haven't been ported yet. > With those ones ported switching to Python 3 *right now* is not only > possible and relatively easy, but also convenient. 
> > > -- > Giampaolo - http://grodola.blogspot.com > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/wizzat%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Thu Dec 11 20:37:22 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 11 Dec 2014 11:37:22 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <20141211202356.07462c01@fsol> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> Message-ID: <5489F272.5040209@stoneleaf.us> On 12/11/2014 11:23 AM, Antoine Pitrou wrote: > > I think strftime / strptime support is a low-priority concern on this > topic, and can probably be discussed independently of the core > nanosecond support. Agreed. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From ethan at stoneleaf.us Thu Dec 11 20:37:05 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 11 Dec 2014 11:37:05 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> Message-ID: <5489F261.8090501@stoneleaf.us> On 12/11/2014 11:14 AM, Guido van Rossum wrote: > > (I wouldn't be surprised if there wasn't -- while computer clocks have a precision in > nanoseconds, that doesn't mean they are that *accurate* at all (even with ntpd running). 
[reading issue] The real-world use cases deal with getting this information from other devices (network cards, GPS, particle accelerators, etc.), so it's not really a matter of cross-computer accuracy, but relative accuracy (i.e. how long did something take?). All in all, looks like a good idea. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From skip.montanaro at gmail.com Thu Dec 11 20:43:05 2014 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Thu, 11 Dec 2014 13:43:05 -0600 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <20141211202356.07462c01@fsol> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> Message-ID: On Thu, Dec 11, 2014 at 1:23 PM, Antoine Pitrou wrote: > I think strftime / strptime support is a low-priority concern on this > topic, and can probably be discussed independently of the core > nanosecond support. Might be low-priority, but with %f support as a template, supporting something to specify nanoseconds should be pretty trivial. The hardest question will be to convince ourselves that we aren't choosing a format code which some other strftime/strptime implementation is already using. In addition, ISTR that one of the use cases was analysis of datetime data generated by other applications which has nanosecond resolution. Unless those values are stored as epoch seconds, you're going to need to parse them. It's not clear to me why you'd give people only half the solution they need. Skip From solipsis at pitrou.net Thu Dec 11 20:46:52 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 11 Dec 2014 20:46:52 +0100 Subject: [Python-Dev] datetime nanosecond support (ctd?) 
In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> Message-ID: <20141211204652.40a0c807@fsol> On Thu, 11 Dec 2014 13:43:05 -0600 Skip Montanaro wrote: > On Thu, Dec 11, 2014 at 1:23 PM, Antoine Pitrou wrote: > > I think strftime / strptime support is a low-priority concern on this > > topic, and can probably be discussed independently of the core > > nanosecond support. > > Might be low-priority, but with %f support as a template, supporting > something to specify nanoseconds should be pretty trivial. The hardest > question will be to convince ourselves that we aren't choosing a > format code which some other strftime/strptime implementation is > already using. > > In addition, ISTR that one of the use cases was analysis of datetime > data generated by other applications which has nanosecond resolution. One of the use cases is to deal with OS-generated timestamps without losing information. As long as you don't need to represent or parse those timestamps, strptime / strftime don't come into the picture. Regards Antoine. From drsalists at gmail.com Thu Dec 11 21:14:16 2014 From: drsalists at gmail.com (Dan Stromberg) Date: Thu, 11 Dec 2014 12:14:16 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: On Thu, Dec 11, 2014 at 11:35 AM, Mark Roberts wrote: > I disagree. I know there's a huge focus on The Big Libraries (and wholesale > migration is all but impossible without them), but the long tail of > libraries is still incredibly important. It's like saying that migrating the > top 10 Perl libraries to Perl 6 would allow people to completely ignore all > of CPAN. It just doesn't make sense. Things in the Python 2.x vs 3.x world aren't that bad. 
See: https://python3wos.appspot.com/ and https://wiki.python.org/moin/PortingPythonToPy3k http://stromberg.dnsalias.org/~strombrg/Intro-to-Python/ (writing code to run on 2.x and 3.x) I believe just about everything I've written over the last few years either ran on 2.x and 3.x unmodified, or ran on 3.x alone. If you go the former route, you don't need to wait for your libraries to be updated. I usually run pylint twice for my projects (after each change, prior to checkin), once with a 2.x interpreter, and once with a 3.x interpreter (using http://stromberg.dnsalias.org/svn/this-pylint/trunk/this-pylint) , but I gather pylint has the option of running on a 2.x interpreter and warning about anything that wouldn't work on 3.x. From marko at pacujo.net Thu Dec 11 20:59:28 2014 From: marko at pacujo.net (Marko Rauhamaa) Date: Thu, 11 Dec 2014 21:59:28 +0200 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <5489F261.8090501@stoneleaf.us> (Ethan Furman's message of "Thu, 11 Dec 2014 11:37:05 -0800") References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <5489F261.8090501@stoneleaf.us> Message-ID: <87d27qhs9r.fsf@elektro.pacujo.net> Ethan Furman : > On 12/11/2014 11:14 AM, Guido van Rossum wrote: >> (I wouldn't be surprised if there wasn't -- while computer clocks >> have a precision in nanoseconds, that doesn't mean they are that >> *accurate* at all (even with ntpd running). > > The real-world use cases deal with getting this information from other > devices (network cards, GPS, particle accelerators, etc.), so it's not > really a matter of cross-computer accurancy, but relative accuracy > (i.e. how long did something take?). It would be nice if it were possible to deal with high-precision epoch times and time deltas without special tricks. I have had to deal with femtosecond-precision IRL (albeit in a realtime C application, not in Python). 
Quad-precision floats () would do it for Python: * just do it in seconds * have enough precision for any needs * have enough range for any needs Marko From brett at python.org Thu Dec 11 21:17:31 2014 From: brett at python.org (Brett Cannon) Date: Thu, 11 Dec 2014 20:17:31 +0000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: Message-ID: On Thu Dec 11 2014 at 3:14:42 PM Dan Stromberg wrote: > On Thu, Dec 11, 2014 at 11:35 AM, Mark Roberts wrote: > > I disagree. I know there's a huge focus on The Big Libraries (and > wholesale > > migration is all but impossible without them), but the long tail of > > libraries is still incredibly important. It's like saying that migrating > the > > top 10 Perl libraries to Perl 6 would allow people to completely ignore > all > > of CPAN. It just doesn't make sense. > > Things in the Python 2.x vs 3.x world aren't that bad. > > See: > https://python3wos.appspot.com/ and > https://wiki.python.org/moin/PortingPythonToPy3k > http://stromberg.dnsalias.org/~strombrg/Intro-to-Python/ (writing code > to run on 2.x and 3.x) > > I believe just about everything I've written over the last few years > either ran on 2.x and 3.x unmodified, or ran on 3.x alone. If you go > the former route, you don't need to wait for your libraries to be > updated. > > I usually run pylint twice for my projects (after each change, prior > to checkin), once with a 2.x interpreter, and once with a 3.x > interpreter (using > http://stromberg.dnsalias.org/svn/this-pylint/trunk/this-pylint) , but > I gather pylint has the option of running on a 2.x interpreter and > warning about anything that wouldn't work on 3.x. > Pylint 1.4 has a --py3k flag to run only checks related to Python 3 compatibility under Python 2. -------------- next part -------------- An HTML attachment was scrubbed... 
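[Editor's note: the single-source style Dan describes can be sketched in a few lines. The `PY2` flag and `text_type` alias below are common community conventions, not standard-library names, and the example assumes Python 2.7 or any 3.x:]

```python
# One file that runs unchanged on Python 2.7 and 3.x.
from __future__ import division, print_function, unicode_literals

import sys

PY2 = sys.version_info[0] == 2
# On 2.x the text type is `unicode`; on 3.x it is `str`. The
# conditional evaluates lazily, so `unicode` is never touched on 3.x.
text_type = unicode if PY2 else str  # noqa: F821

def label(obj):
    """Classify a value the same way under either interpreter."""
    return "text" if isinstance(obj, text_type) else "other"

print(label("hello"))  # text  (unicode_literals makes this text on 2.x too)
print(7 / 2)           # 3.5   (__future__ division on both lines)
```

Running `pylint --py3k` (as Brett notes, available since Pylint 1.4) over such a file under a 2.x interpreter flags the remaining 3.x incompatibilities.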
URL: From python at mrabarnett.plus.com Thu Dec 11 22:00:12 2014 From: python at mrabarnett.plus.com (MRAB) Date: Thu, 11 Dec 2014 21:00:12 +0000 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> Message-ID: <548A05DC.2080607@mrabarnett.plus.com> On 2014-12-11 18:33, Skip Montanaro wrote: > On Thu, Dec 11, 2014 at 11:58 AM, Matthieu Bec wrote: >> ...or keep using "%f" if acceptable... > > That might be a problem. While it will probably work most of the time, > there are likely to be situations where the caller assumes it > generates a six-digit string. I did a little poking around. It seems > like "%N" isn't used. > Could the number of digits be specified? You could have "%9f" for nanoseconds, "%3f" for milliseconds, etc. The default would be 6 digits (microseconds) for backwards compatibility. Maybe, also, strptime could support "%*f" to gobble as many digits as are available. From maxischmeii at gmail.com Thu Dec 11 22:58:28 2014 From: maxischmeii at gmail.com (schmeii) Date: Thu, 11 Dec 2014 22:58:28 +0100 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <548A05DC.2080607@mrabarnett.plus.com> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <548A05DC.2080607@mrabarnett.plus.com> Message-ID: 2014-12-11 22:00 GMT+01:00 MRAB : > > On 2014-12-11 18:33, Skip Montanaro wrote: >> >> >> there are likely to be situations where the caller assumes it >> generates a six-digit string. I did a little poking around. It seems >> like "%N" isn't used. >> >> Could the number of digits be specified? You could have "%9f" for > nanoseconds, "%3f" for milliseconds, etc. The default would be 6 > digits (microseconds) for backwards compatibility. Ruby does that, but uses %9N (a plain %N consumes 9 digits by default). 
GNU date also uses %N, but doesn't allow specifying the number of digits to consume. -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Thu Dec 11 23:02:27 2014 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 11 Dec 2014 23:02:27 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: 2014-12-11 15:47 GMT+01:00 Giampaolo Rodola' : > I still think the only *real* obstacle remains the lack of important > packages such as twisted, gevent and pika which haven't been ported yet. twisted core works on python 3, right now. Contribute to Twisted if you want to port more code... Or start something new, asyncio (with trollius, it works on Python 2 too). The development branch of gevent supports Python 3, especially if you don't use monkey patching. Ask the developers to release a version, at least with "experimental" Python 3 support. I don't know pika. I read "Pika Python AMQP Client Library". You may take a look at https://github.com/dzen/aioamqp if you would like to play with asyncio. > With those ones ported switching to Python 3 *right now* is not only > possible and relatively easy, but also convenient. Victor From greg.ewing at canterbury.ac.nz Thu Dec 11 23:07:18 2014 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 12 Dec 2014 11:07:18 +1300 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <548A05DC.2080607@mrabarnett.plus.com> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <548A05DC.2080607@mrabarnett.plus.com> Message-ID: <548A1596.7000200@canterbury.ac.nz> MRAB wrote: > Maybe, also, strptime could support "%*f" to gobble as many digits as > are available. The * would suggest that the number of digits is being supplied as a parameter. Maybe "%?f".
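[Editor's note: none of the directives proposed in this thread ("%9f", "%N", "%*f", "%?f") exist in Python's strptime; the current %f accepts at most six digits. A helper that pads or truncates an arbitrary-length fraction to those six digits can emulate the proposed behaviour. The function name and approach below are illustrative only, not stdlib API.]

```python
from datetime import datetime


def parse_flexible_fraction(text, fmt="%H:%M:%S.%f"):
    # Normalize the fractional part to exactly the six digits the current
    # %f directive accepts: nanosecond digits beyond microseconds are
    # dropped, shorter fractions are zero-padded on the right.
    head, sep, frac = text.partition(".")
    if sep:
        text = head + "." + frac[:6].ljust(6, "0")
    return datetime.strptime(text, fmt)


# "123456789" (nanoseconds) is truncated to 123456 microseconds,
# while "123" (milliseconds) is padded out to 123000 microseconds.
```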
-- Greg From status at bugs.python.org Fri Dec 12 18:08:14 2014 From: status at bugs.python.org (Python tracker) Date: Fri, 12 Dec 2014 18:08:14 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20141212170814.37257560CA@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2014-12-05 - 2014-12-12) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 4666 ( +0) closed 30137 (+42) total 34803 (+42) Open issues with patches: 2173 Issues opened (31) ================== #9647: os.confstr() does not handle value changing length between cal http://bugs.python.org/issue9647 reopened by haypo #22866: ssl module in 2.7 should provide a way to configure default co http://bugs.python.org/issue22866 reopened by lemburg #22935: Disabling SSLv3 support http://bugs.python.org/issue22935 reopened by ned.deily #23001: Accept mutable bytes-like objects http://bugs.python.org/issue23001 opened by serhiy.storchaka #23003: traceback.{print_exc,print_exception,format_exc,format_excepti http://bugs.python.org/issue23003 opened by Arfrever #23004: mock_open() should allow reading binary data http://bugs.python.org/issue23004 opened by jcea #23008: pydoc enum.{,Int}Enum fails http://bugs.python.org/issue23008 opened by Antony.Lee #23010: "unclosed file" warning when defining unused logging FileHandl http://bugs.python.org/issue23010 opened by wdoekes #23011: Duplicate Paragraph in documentation for json module http://bugs.python.org/issue23011 opened by berndca #23012: RuntimeError: settrace/setprofile function gets lost http://bugs.python.org/issue23012 opened by arigo #23013: Tweak wording for importlib.util.LazyLoader in regards to Load http://bugs.python.org/issue23013 opened by brett.cannon #23014: Don't have importlib.abc.Loader.create_module() be optional http://bugs.python.org/issue23014 opened by brett.cannon #23015: Improve test_uuid 
http://bugs.python.org/issue23015 opened by serhiy.storchaka #23017: string.printable.isprintable() returns False http://bugs.python.org/issue23017 opened by planet36 #23018: Add version info to python[w].exe http://bugs.python.org/issue23018 opened by steve.dower #23019: pyexpat.errors wrongly bound to message strings instead of mes http://bugs.python.org/issue23019 opened by bkarge #23020: New matmul operator crashes modules compiled with CPython3.4 http://bugs.python.org/issue23020 opened by amaury.forgeotdarc #23021: Get rid of references to PyString in Modules/ http://bugs.python.org/issue23021 opened by berker.peksag #23023: ./Modules/ld_so_aix not found on AIX during test_distutils http://bugs.python.org/issue23023 opened by lemburg #23025: ssl.RAND_bytes docs should mention os.urandom http://bugs.python.org/issue23025 opened by alex #23026: Winreg module doesn't support REG_QWORD, small DWORD doc updat http://bugs.python.org/issue23026 opened by markgrandi #23027: test_warnings fails with -Werror http://bugs.python.org/issue23027 opened by serhiy.storchaka #23028: CEnvironmentVariableTests and PyEnvironmentVariableTests test http://bugs.python.org/issue23028 opened by serhiy.storchaka #23029: test_warnings produces extra output in quiet mode http://bugs.python.org/issue23029 opened by serhiy.storchaka #23030: lru_cache manual get/put http://bugs.python.org/issue23030 opened by ConnyOnny #23031: pdb crashes when jumping over "with" statement http://bugs.python.org/issue23031 opened by DSP #23033: Disallow support for a*.example.net, *a.example.net, and a*b.e http://bugs.python.org/issue23033 opened by dstufft #23034: Dynamically control debugging output http://bugs.python.org/issue23034 opened by serhiy.storchaka #23035: python -c: Line causing exception not shown for exceptions oth http://bugs.python.org/issue23035 opened by Arfrever #23040: Better documentation for the urlencode safe parameter http://bugs.python.org/issue23040 opened by wrwrwr #23041: csv 
needs more quoting rules http://bugs.python.org/issue23041 opened by samwyse Most recent 15 issues with no replies (15) ========================================== #23029: test_warnings produces extra output in quiet mode http://bugs.python.org/issue23029 #23028: CEnvironmentVariableTests and PyEnvironmentVariableTests test http://bugs.python.org/issue23028 #23027: test_warnings fails with -Werror http://bugs.python.org/issue23027 #23026: Winreg module doesn't support REG_QWORD, small DWORD doc updat http://bugs.python.org/issue23026 #23021: Get rid of references to PyString in Modules/ http://bugs.python.org/issue23021 #23015: Improve test_uuid http://bugs.python.org/issue23015 #23013: Tweak wording for importlib.util.LazyLoader in regards to Load http://bugs.python.org/issue23013 #23012: RuntimeError: settrace/setprofile function gets lost http://bugs.python.org/issue23012 #23008: pydoc enum.{,Int}Enum fails http://bugs.python.org/issue23008 #23004: mock_open() should allow reading binary data http://bugs.python.org/issue23004 #23003: traceback.{print_exc,print_exception,format_exc,format_excepti http://bugs.python.org/issue23003 #22990: bdist installation dialog http://bugs.python.org/issue22990 #22981: Use CFLAGS when extracting multiarch http://bugs.python.org/issue22981 #22970: Cancelling wait() after notification leaves Condition in an in http://bugs.python.org/issue22970 #22969: Compile fails with --without-signal-module http://bugs.python.org/issue22969 Most recent 15 issues waiting for review (15) ============================================= #23040: Better documentation for the urlencode safe parameter http://bugs.python.org/issue23040 #23030: lru_cache manual get/put http://bugs.python.org/issue23030 #23026: Winreg module doesn't support REG_QWORD, small DWORD doc updat http://bugs.python.org/issue23026 #23025: ssl.RAND_bytes docs should mention os.urandom http://bugs.python.org/issue23025 #23018: Add version info to python[w].exe 
http://bugs.python.org/issue23018 #23017: string.printable.isprintable() returns False http://bugs.python.org/issue23017 #23015: Improve test_uuid http://bugs.python.org/issue23015 #23003: traceback.{print_exc,print_exception,format_exc,format_excepti http://bugs.python.org/issue23003 #23001: Accept mutable bytes-like objects http://bugs.python.org/issue23001 #22997: Minor improvements to "Functional API" section of Enum documen http://bugs.python.org/issue22997 #22992: Adding a git developer's guide to Mercurial to devguide http://bugs.python.org/issue22992 #22991: test_gdb leaves the terminal in raw mode with gdb 7.8.1 http://bugs.python.org/issue22991 #22986: Improved handling of __class__ assignment http://bugs.python.org/issue22986 #22984: test_json.test_endless_recursion(): "Fatal Python error: Canno http://bugs.python.org/issue22984 #22982: BOM incorrectly inserted before writing, after seeking in text http://bugs.python.org/issue22982 Top 10 most discussed issues (10) ================================= #22935: Disabling SSLv3 support http://bugs.python.org/issue22935 17 msgs #22939: integer overflow in iterator object http://bugs.python.org/issue22939 11 msgs #22992: Adding a git developer's guide to Mercurial to devguide http://bugs.python.org/issue22992 10 msgs #18835: Add aligned memory variants to the suite of PyMem functions/ma http://bugs.python.org/issue18835 8 msgs #22823: Use set literals instead of creating a set from a list http://bugs.python.org/issue22823 8 msgs #22980: C extension naming doesn't take bitness into account http://bugs.python.org/issue22980 8 msgs #23014: Don't have importlib.abc.Loader.create_module() be optional http://bugs.python.org/issue23014 8 msgs #23020: New matmul operator crashes modules compiled with CPython3.4 http://bugs.python.org/issue23020 8 msgs #22866: ssl module in 2.7 should provide a way to configure default co http://bugs.python.org/issue22866 7 msgs #21600: mock.patch.stopall doesn't work with patch.dict to 
sys.modules http://bugs.python.org/issue21600 6 msgs Issues closed (37) ================== #12602: Missing cross-references in Doc/using http://bugs.python.org/issue12602 closed by berker.peksag #16041: poplib: unlimited readline() from connection http://bugs.python.org/issue16041 closed by python-dev #16042: smtplib: unlimited readline() from connection http://bugs.python.org/issue16042 closed by python-dev #16043: xmlrpc: gzip_decode has unlimited read() http://bugs.python.org/issue16043 closed by python-dev #18305: [patch] Fast sum() for non-numbers http://bugs.python.org/issue18305 closed by gvanrossum #19451: urlparse accepts invalid hostnames http://bugs.python.org/issue19451 closed by terry.reedy #20603: sys.path disappears at shutdown http://bugs.python.org/issue20603 closed by brett.cannon #20866: Crash in the libc fwrite() on SIGPIPE (segfault with os.popen http://bugs.python.org/issue20866 closed by terry.reedy #20895: Add bytes.empty_buffer and deprecate bytes(17) for the same pu http://bugs.python.org/issue20895 closed by ethan.furman #21427: Windows installer: cannot register 64 bit component http://bugs.python.org/issue21427 closed by terry.reedy #21740: doctest doesn't allow duck-typing callables http://bugs.python.org/issue21740 closed by yselivanov #21775: shutil.copytree() crashes copying to VFAT on Linux: AttributeE http://bugs.python.org/issue21775 closed by berker.peksag #22095: Use of set_tunnel with default port results in incorrect post http://bugs.python.org/issue22095 closed by serhiy.storchaka #22225: Add SQLite support to http.cookiejar http://bugs.python.org/issue22225 closed by demian.brecht #22394: Update documentation building to use venv and pip http://bugs.python.org/issue22394 closed by brett.cannon #22581: Other mentions of the buffer protocol http://bugs.python.org/issue22581 closed by serhiy.storchaka #22696: Add a function to know about interpreter shutdown http://bugs.python.org/issue22696 closed by pitrou #22918: Doc for 
__iter__ makes inexact comment about dict.__iter__ http://bugs.python.org/issue22918 closed by r.david.murray #22959: http.client.HTTPSConnection checks hostname when SSL context h http://bugs.python.org/issue22959 closed by benjamin.peterson #22985: Segfault on time.sleep http://bugs.python.org/issue22985 closed by haypo #22998: inspect.Signature and default arguments http://bugs.python.org/issue22998 closed by yselivanov #23000: More support for Visual Studio users on Windows? http://bugs.python.org/issue23000 closed by SilentGhost #23002: Trackpad scrolling in tkinter doesn't work on some laptops http://bugs.python.org/issue23002 closed by zach.ware #23005: typos on heapq doc http://bugs.python.org/issue23005 closed by rhettinger #23006: Improve the doc and indexing of adict.__missing__. http://bugs.python.org/issue23006 closed by terry.reedy #23007: Unnecessary big intermediate result in Lib/bisect.py http://bugs.python.org/issue23007 closed by mark.dickinson #23009: selectors.EpollSelector.select raises exception when nothing t http://bugs.python.org/issue23009 closed by yselivanov #23016: uncaught exception in lib/warnings.py when executed with pytho http://bugs.python.org/issue23016 closed by serhiy.storchaka #23022: heap-use-after-free in find_maxchar_surrogates http://bugs.python.org/issue23022 closed by haypo #23024: Python Compile Error on Mac os X ld: unknown option: -export-d http://bugs.python.org/issue23024 closed by berker.peksag #23032: 2.7 OS X installer builds fail building OpenSSL on OS X 10.4 / http://bugs.python.org/issue23032 closed by ned.deily #23036: Crash Error? 
http://bugs.python.org/issue23036 closed by haypo #23037: cpu_count() unreliable on Windows http://bugs.python.org/issue23037 closed by haypo #23038: #python.web irc channel is dead http://bugs.python.org/issue23038 closed by python-dev #23039: File name restriction on Windows http://bugs.python.org/issue23039 closed by tim.golden #1218234: inspect.getsource doesn't update when a module is reloaded http://bugs.python.org/issue1218234 closed by yselivanov #1425127: os.remove OSError: [Errno 13] Permission denied http://bugs.python.org/issue1425127 closed by terry.reedy From wizzat at gmail.com Fri Dec 12 19:24:15 2014 From: wizzat at gmail.com (Mark Roberts) Date: Fri, 12 Dec 2014 10:24:15 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: So, I'm more than aware of how to write Python 2/3 compatible code. I've ported 10-20 libraries to Python 3 and write Python 2/3 compatible code at work. I'm also aware of how much writing 2/3 compatible code makes me hate Python as a language. It'll be a happy day when one of the two languages dies so that I never have to write code like that again. However, my point was that just because the core libraries by usage are *starting* to roll out Python 3 support doesn't mean that things are "easy" or "convenient" yet. There are too many libraries in the long tail which fulfill semi-common purposes and haven't been moved over yet. Yeah, sure, they haven't been updated in years... but neither has the language they're built on. I suppose what I'm saying is that the long tail of libraries is far more valuable than it seems the Python3 zealots are giving it credit for. Please don't claim it's "easy" to move over just because merely most of the top 20 libraries have been moved over. :-/ -Mark On Thu, Dec 11, 2014 at 12:14 PM, Dan Stromberg wrote: > On Thu, Dec 11, 2014 at 11:35 AM, Mark Roberts wrote: > > I disagree. 
I know there's a huge focus on The Big Libraries (and > wholesale > > migration is all but impossible without them), but the long tail of > > libraries is still incredibly important. It's like saying that migrating > the > > top 10 Perl libraries to Perl 6 would allow people to completely ignore > all > > of CPAN. It just doesn't make sense. > > Things in the Python 2.x vs 3.x world aren't that bad. > > See: > https://python3wos.appspot.com/ and > https://wiki.python.org/moin/PortingPythonToPy3k > http://stromberg.dnsalias.org/~strombrg/Intro-to-Python/ (writing code > to run on 2.x and 3.x) > > I believe just about everything I've written over the last few years > either ran on 2.x and 3.x unmodified, or ran on 3.x alone. If you go > the former route, you don't need to wait for your libraries to be > updated. > > I usually run pylint twice for my projects (after each change, prior > to checkin), once with a 2.x interpreter, and once with a 3.x > interpreter (using > http://stromberg.dnsalias.org/svn/this-pylint/trunk/this-pylint) , but > I gather pylint has the option of running on a 2.x interpreter and > warning about anything that wouldn't work on 3.x. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steve.Dower at microsoft.com Fri Dec 12 18:53:40 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Fri, 12 Dec 2014 17:53:40 +0000 Subject: [Python-Dev] Issue 22919: Update PCBuild for VS 2015 Message-ID: FYI, I've just committed these changes (http://bugs.python.org/issue22919). There shouldn't be any immediate failures, as the updated projects will still build with VS 2010, but our Windows developers/buildbots can migrate onto the later tools as they feel comfortable. I know there are at least a few bugs coming out of the new compiler, so I'll be tracking those down and fixing things. Feel free to nosy me (or Windows) on the issue tracker if you find anything. 
Cheers, Steve From mcepl at cepl.eu Fri Dec 12 19:57:59 2014 From: mcepl at cepl.eu (=?UTF-8?Q?Mat=C4=9Bj?= Cepl) Date: Fri, 12 Dec 2014 19:57:59 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: Message-ID: On 2014-12-11, 14:47 GMT, Giampaolo Rodola' wrote: > I still think the only *real* obstacle remains the lack of > important packages such as twisted, gevent and pika which > haven't been ported yet. And unwise decisions of some vendors (like, unfortunately, my beloved employer with RHEL-7) not to ship python3. Oh well. Matěj From encukou at gmail.com Fri Dec 12 20:07:42 2014 From: encukou at gmail.com (Petr Viktorin) Date: Fri, 12 Dec 2014 20:07:42 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: Also keep in mind that not all Python libraries are on PyPI. For non-Python projects with Python bindings (think video players, OpenCV, systemd, Samba), distribution via PyPI doesn't make much sense. And since the Python bindings are usually second-class citizens, the porting doesn't have a high priority. If anyone is wondering why their favorite Linux distribution is stuck with Python 2: well, I can only speak for Fedora, but nowadays most of what's left are CPython bindings. No pylint --py3k or 2to3 will help there... On Fri, Dec 12, 2014 at 7:24 PM, Mark Roberts wrote: > So, I'm more than aware of how to write Python 2/3 compatible code. I've > ported 10-20 libraries to Python 3 and write Python 2/3 compatible code at > work. I'm also aware of how much writing 2/3 compatible code makes me hate > Python as a language. It'll be a happy day when one of the two languages > dies so that I never have to write code like that again. However, my point > was that just because the core libraries by usage are *starting* to roll out > Python 3 support doesn't mean that things are "easy" or "convenient" yet.
> There are too many libraries in the long tail which fulfill semi-common > purposes and haven't been moved over yet. Yeah, sure, they haven't been > updated in years... but neither has the language they're built on. > > I suppose what I'm saying is that the long tail of libraries is far more > valuable than it seems the Python3 zealots are giving it credit for. Please > don't claim it's "easy" to move over just because merely most of the top 20 > libraries have been moved over. :-/ > > -Mark > > On Thu, Dec 11, 2014 at 12:14 PM, Dan Stromberg wrote: >> >> On Thu, Dec 11, 2014 at 11:35 AM, Mark Roberts wrote: >> > I disagree. I know there's a huge focus on The Big Libraries (and >> > wholesale >> > migration is all but impossible without them), but the long tail of >> > libraries is still incredibly important. It's like saying that migrating >> > the >> > top 10 Perl libraries to Perl 6 would allow people to completely ignore >> > all >> > of CPAN. It just doesn't make sense. >> >> Things in the Python 2.x vs 3.x world aren't that bad. >> >> See: >> https://python3wos.appspot.com/ and >> https://wiki.python.org/moin/PortingPythonToPy3k >> http://stromberg.dnsalias.org/~strombrg/Intro-to-Python/ (writing code >> to run on 2.x and 3.x) >> >> I believe just about everything I've written over the last few years >> either ran on 2.x and 3.x unmodified, or ran on 3.x alone. If you go >> the former route, you don't need to wait for your libraries to be >> updated. >> >> I usually run pylint twice for my projects (after each change, prior >> to checkin), once with a 2.x interpreter, and once with a 3.x >> interpreter (using >> http://stromberg.dnsalias.org/svn/this-pylint/trunk/this-pylint) , but >> I gather pylint has the option of running on a 2.x interpreter and >> warning about anything that wouldn't work on 3.x. 
From barry at python.org Fri Dec 12 20:58:03 2014 From: barry at python.org (Barry Warsaw) Date: Fri, 12 Dec 2014 14:58:03 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: <20141212145803.26d770a7@anarchist.wooz.org> On Dec 12, 2014, at 08:07 PM, Petr Viktorin wrote: >If anyone is wondering why their favorite Linux distribution is stuck with >Python 2 ? well, I can only speak for Fedora, but nowadays most of what's >left are CPython bindings. No pylint --py3k or 2to3 will help there... It's true that some of these are tough. I tried and failed a few times to port Xapian to Python 3. The issue was opened upstream 6 years ago and it's still unresolved: http://trac.xapian.org/ticket/346 OTOH, I ported dbus-python to Python 3 and that worked out much better; we've had solid Python 3 bindings for several years now, which allowed us to port many important Debian/Ubuntu tools to Python 3 and more importantly, do all our new work in Python 3. With other big toolkits like GObject introspection working on Python 3, there's a lot you can do. IME, if the underlying model is string/bytes clean, then the C extension port can sometimes be easier than pure-Python, thanks to cpp games. D-Bus's model is pretty clean, Xapian I found to be not so much (it doesn't help that Xapian is C++ ;). We're actually not terribly far from switching Debian and Ubuntu's default to Python 3. On Debian, the big blocker is the BTS code (which uses SOAP) and on Ubuntu it's the launchpadlib stack. I hope to find time after Jessie to work on the former, and before 16.04 LTS to work on the latter. Not that I disagree that there's a long tail of code that would still benefit a significant population if it got ported to Python 3. By far Python 3 is a better language, with a better stdlib, so the work is worth it. 
Cheers, -Barry From bcannon at gmail.com Fri Dec 12 21:16:11 2014 From: bcannon at gmail.com (Brett Cannon) Date: Fri, 12 Dec 2014 20:16:11 +0000 Subject: [Python-Dev] Python 2/3 porting HOWTO has been updated References: <1417813664.3644099.199403849.649886EF@webmail.messagingengine.com> <1417826693.3685392.199461181.23A9D812@webmail.messagingengine.com> Message-ID: I have now addressed Nick's comments and backported to Python 2.7. On Sat Dec 06 2014 at 8:40:24 AM Brett Cannon wrote: > Thanks for the feedback. I'll update the doc probably on Friday. > > On Sat Dec 06 2014 at 12:41:54 AM Nick Coghlan wrote: > >> On 6 December 2014 at 14:40, Nick Coghlan wrote: >> > On 6 December 2014 at 10:44, Benjamin Peterson >> wrote: >> >> On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote: >> >>> Do we need to update it? Can it just redirect to the 3 version? >> >> >> >> Technically, yes, of course. However, that would unexpected take you >> out >> >> of the Python 2 docs "context". Also, that doesn't solve the problem >> for >> >> the downloadable versions of the docs. >> > >> > As Benjamin says, we'll likely want to update the Python 2 version >> > eventually for the benefit of the downloadable version of the docs, >> > but Brett's also right it makes sense to wait for feedback on the >> > Python 3 version and then backport the most up to date text wholesale. >> > >> > In terms of the text itself, this is a great update Brett - thanks! >> > >> > A couple of specific notes: >> > >> > * http://python-future.org/compatible_idioms.html is my favourite >> > short list of "What are the specific Python 2 only habits that I need >> > to unlearn in order to start writing 2/3 compatible code?". It could >> > be worth mentioning in addition to the What's New documents and the >> > full Python 3 Porting book. >> > >> > * it's potentially worth explicitly noting the "bytes(index_value)" >> > and "str(bytes_value)" traps when discussing the bytes/text changes. 
>> > Those do rather different things in Python 2 & 3, but won't emit an >> > error or warning in either version. >> >> Given that 3.4 and 2.7.9 will be the first exposure some users will >> have had to pip, would it perhaps be worth explicitly mentioning the >> "pip install " commands for the various tools? At least pylint's >> PyPI page only gives the manual download instructions, including which >> dependencies you will need to install. >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Fri Dec 12 23:29:38 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 12 Dec 2014 17:29:38 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: On 12/12/2014 1:24 PM, Mark Roberts wrote: > However, my point was that just because the core libraries by usage are > *starting* to roll out Python 3 support doesn't mean that things are > "easy" or "convenient" yet. ... > I suppose what I'm saying is that the long tail of libraries is far more > valuable than it seems the Python3 zealots are giving it credit for. > Please don't claim it's "easy" to move over just because merely most of > the top 20 libraries have been moved over. :-/ I agree that we should refrain from characterizing the difficulty of other peoples' work. Conversions range from trivial to effectively impossible. What we can say is that a) each library conversion makes conversion easier for the users of that library and b) the number of conversions continues to increase. I think some are trying to say that the number has reached a point where it is no longer fair to say that conversion is typically impossible.
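[Editor's note: the bytes(index_value) and str(bytes_value) traps Nick mentions earlier in this thread are easy to show concretely. On Python 2, bytes is an alias for str, so both calls silently do something different there; the snippet below demonstrates the Python 3 behaviour, with the Python 2 result noted in comments.]

```python
# bytes(n) with an integer builds a zero-filled buffer on Python 3;
# on Python 2 (where bytes is just str) it yields the decimal string '3'.
assert bytes(3) == b"\x00\x00\x00"

# str() of a bytes object on Python 3 yields its repr, b'' wrapper and
# all, which is almost never intended; Python 2 returns 'abc' unchanged.
assert str(b"abc") == "b'abc'"

# The portable spelling is an explicit decode.
assert b"abc".decode("ascii") == "abc"
```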
-- Terry Jan Reedy From steve at pearwood.info Sat Dec 13 05:55:26 2014 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 13 Dec 2014 15:55:26 +1100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: <20141213045525.GG20332@ando.pearwood.info> On Fri, Dec 12, 2014 at 10:24:15AM -0800, Mark Roberts wrote: > So, I'm more than aware of how to write Python 2/3 compatible code. I've > ported 10-20 libraries to Python 3 and write Python 2/3 compatible code at > work. I'm also aware of how much writing 2/3 compatible code makes me hate > Python as a language. I'm surprised by the strength of feeling there. Most of the code I write supports 2.4+, with the exception of 3.0 where I say "it should work, but if it doesn't, I don't care". I'll be *very* happy when I can drop support for 2.4, but with very few exceptions I have not found many major problems supporting both 2.7 and 3.3+ in the one code-base, and nothing I couldn't work around (sometimes by just dropping support for a specific feature in certain versions). I'm not disputing that your experiences are valid, but I am curious what specific issues you have come across and wondering if there are things which 3.5 can include to ease that transition. E.g. 3.3 re-added support for u'' syntax. -- Steven From donald at stufft.io Sat Dec 13 06:29:39 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Dec 2014 00:29:39 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <20141213045525.GG20332@ando.pearwood.info> References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: > On Dec 12, 2014, at 11:55 PM, Steven D'Aprano wrote: > > On Fri, Dec 12, 2014 at 10:24:15AM -0800, Mark Roberts wrote: >> So, I'm more than aware of how to write Python 2/3 compatible code. I've >> ported 10-20 libraries to Python 3 and write Python 2/3 compatible code at >> work. 
I'm also aware of how much writing 2/3 compatible code makes me hate >> Python as a language. > > I'm surprised by the strength of feeling there. > > Most of the code I write supports 2.4+, with the exception of 3.0 where > I say "it should work, but if it doesn't, I don't care". I'll be *very* > happy when I can drop support for 2.4, but with very few exceptions I > have not found many major problems supporting both 2.7 and 3.3+ in the > one code-base, and nothing I couldn't work around (sometimes by just > dropping support for a specific feature in certain versions). > > I'm not disputing that your experiences are valid, but I am curious what > specific issues you have come across and wondering if there are things > which 3.5 can include to ease that transition. E.g. 3.3 re-added support > for u'' syntax. For what it's worth, I almost exclusively write 2/3 compatible code (and that's with the "easy" subset of 2.6+ and either 3.2+ or 3.3+) and doing so does make the language far less fun for me than when I was writing 2.x only code. I've thought a lot about why that is, because it's certainly not *hard* to do so, and what I think it is for me at least is inherent in the fact you're using a lowest common denominator approach to programming. Because I can only use things which work the same in 2.6+ and 3.2+ it means I cannot take advantage of any new features unless they are available as a backport. Now this is always true of code that needs to straddle multiple versions in order to maintain compatibility. However the way it "used" to work is that the newest version, with all the new features, would quickly become the dominant version within a year or two. The older versions might still command a respectable amount of use, but that tended to fall off quicker and it wouldn't be unreasonable to be more aggressive in some situations than others depending on what the new feature was I wanted to use.
However when we look at today, the "new" versions are Python 3.4, 3.3, or even 3.2. However I can't really justify for most situations supporting _only_ those things because even today they are not the dominant version (or really close to it in any number I have access to). This means that if I want to take advantage of something newer I'm essentially dropping the largest part of the ecosystem. On top of all of this, I'm not sure I see a point in the near future where this tipping point might happen and the "normal" order of the newest version with the newest features rapidly becoming the dominant version gets restored. I'm sort of holding out hope that the Linux distributions switching to Python 3 as a default might push it over, but I'm also not holding my breath there. So that's basically it, lowest common denominator programming where it's hard to look at the future and see anything but the same (or similar) language subset that I'm currently using. This is especially frustrating when you see other languages doing cool and interesting new things and it feels like we're stuck with what we had in 2008 or 2010. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From donald at stufft.io Sat Dec 13 06:38:24 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Dec 2014 00:38:24 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: <9AE5B3DD-54BD-42F3-ACCD-B3AA44FAE102@stufft.io> > On Dec 13, 2014, at 12:29 AM, Donald Stufft wrote: > >> >> On Dec 12, 2014, at 11:55 PM, Steven D'Aprano wrote: >> >> On Fri, Dec 12, 2014 at 10:24:15AM -0800, Mark Roberts wrote: >>> So, I'm more than aware of how to write Python 2/3 compatible code. I've >>> ported 10-20 libraries to Python 3 and write Python 2/3 compatible code at >>> work. I'm also aware of how much writing 2/3 compatible code makes me hate
>> >> I'm surprised by the strength of feeling there. >> >> Most of the code I write supports 2.4+, with the exception of 3.0 where >> I say "it should work, but if it doesn't, I don't care". I'll be *very* >> happy when I can drop support for 2.4, but with very few exceptions I >> have not found many major problems supporting both 2.7 and 3.3+ in the >> one code-base, and nothing I couldn't work around (sometimes by just >> dropping support for a specific feature in certain versions). >> >> I'm not disputing that your experiences are valid, but I am curious what >> specific issues you have come across and wondering if there are things >> which 3.5 can include to ease that transition. E.g. 3.3 re-added support >> for u'' syntax. > > For what it's worth, I almost exclusively write 2/3 compatible code (and that's > with the "easy" subset of 2.6+ and either 3.2+ or 3.3+) and doing so does make > the language far less fun for me than when I was writing 2.x only code. I've > thought a lot about why that is, because it's certainly not *hard* to do so, and > what I think it is, for me at least, is inherent in the fact that you're using a > lowest common denominator approach to programming. > > Because I can only use things which work the same in 2.6+ and 3.2+ it means I > cannot take advantage of any new features unless they are available as a > backport. Now this is always true of code that needs to straddle multiple > versions in order to maintain compatibility. However the way it "used" to work > is that the newest version, with all the new features, would quickly become > the dominant version within a year or two. The older versions might still > command a respectable amount of use, but that tended to fall off more quickly and > it wouldn't be unreasonable to be more aggressive in some situations than others > depending on what new feature I wanted to use. > > However when we look at today, the "new" versions are Python 3.4, 3.3, or even > 3.2.
However I can't really justify for most situations supporting _only_ those > things because even today they are not the dominant version (or really close > to it in any number I have access to). This means that if I want to take > advantage of something newer I'm essentially dropping the largest part of > the ecosystem. > > On top of all of this, I'm not sure I see a point in the near future where this > tipping point might happen and the "normal" order of the newest version with > the newest features rapidly becoming the dominant version gets restored. I'm > sort of holding out hope that the Linux distributions switching to Python 3 > as a default might push it over, but I'm also not holding my breath there. > > So that's basically it, lowest common denominator programming where it's hard to > look at the future and see anything but the same (or similar) language subset > that I'm currently using. This is especially frustrating when you see other > languages doing cool and interesting new things and it feels like we're stuck > with what we had in 2008 or 2010. > Oh yea, in addition to this, actually backporting things is becoming increasingly hard the further Python 3 gets developed. When the language was mostly forwards compatible, if a new feature/function was added you could often times backport it into your own code in a compat shim by simply copy/pasting the code. However with all the new features being done in Python 3, it's increasingly the case that this code will *not* run on Python 2.6 and 2.7 because it's essentially being written for a different, but similar, language and requires some amount of porting. This porting process might even need to include incompatible changes because of differences in the language (see for example, Trollius).
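The copy/paste-style backport described here usually takes a try/except ImportError shape: use the stdlib name when it exists, otherwise fall back to a vendored copy. The fallback below is a deliberately simplified, hypothetical stand-in for shutil.which() (added in 3.3), not the real stdlib implementation:

```python
import os

try:
    from shutil import which  # Python 3.3+ provides this in the stdlib
except ImportError:
    def which(cmd):
        # Simplified backport sketch: scan PATH for an executable file.
        for directory in os.environ.get("PATH", "").split(os.pathsep):
            candidate = os.path.join(directory, cmd)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                return candidate
        return None
```

When the stdlib version exists the fallback is never defined, so newer interpreters get the maintained implementation for free; the pain point is that the fallback itself must be written in the old dialect.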
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From rosuav at gmail.com Sat Dec 13 06:40:07 2014 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 13 Dec 2014 16:40:07 +1100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: On Sat, Dec 13, 2014 at 4:29 PM, Donald Stufft wrote: > So that's basically it, lowest common denominator programming where it's hard to > look at the future and see anything but the same (or similar) language subset > that I'm currently using. This is especially frustrating when you see other > languages doing cool and interesting new things and it feels like we're stuck > with what we had in 2008 or 2010. That's what happens when you want to support a version of Python that was released in 2008 or 2010. Perhaps the impetus for people to move onto Python 3 has to come from people like you saying "I'm not going to support 2.7 any more as of version X.Y", and letting them run two interpreters. It's really not that hard to keep 2.7 around for whatever expects it, and 3.4/3.5/whatever for everything else. ChrisA From donald at stufft.io Sat Dec 13 07:13:20 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Dec 2014 01:13:20 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: > On Dec 13, 2014, at 12:40 AM, Chris Angelico wrote: > > On Sat, Dec 13, 2014 at 4:29 PM, Donald Stufft wrote: >> So that's basically it, lowest common denominator programming where it's hard to >> look at the future and see anything but the same (or similar) language subset >> that I'm currently using. This is especially frustrating when you see other >> languages doing cool and interesting new things and it feels like we're stuck >> with what we had in 2008 or 2010.
> > That's what happens when you want to support a version of Python that > was released in 2008 or 2010. Perhaps the impetus for people to move > onto Python 3 has to come from people like you saying "I'm not going > to support 2.7 any more as of version X.Y", and letting them run two > interpreters. It's really not that hard to keep 2.7 around for what > expects it, and 3.4/3.5/whatever for everything else. I don't think this option is really grounded in reality. First of all, it's essentially the route that Python itself took and the side effects of that is essentially what is making things less-fun for me to write Python. Doing the same to the users of the things I write would make me feel bad that I was forcing them to either do all the work to port their stuff (and any dependencies) just so they can use a newer version of my library. It is also, I think, incredibly likely to backfire on any author who does it unless the thing they are writing is absolutely essential to their users AND there are no alternatives AND it's essential that those people are using a new version of that library. If all of those things are not the case, you're going to end up with a majority of users either just stopping use of your tool altogether, switching to an alternative, or sticking with an old version. I don't think I'm unique in that I like writing software that other people want to and can use. I think it also assumes that people are writing one off scripts and things like that which only use the standard library. I tend to write libraries and work on complex distributed systems. I need to either port the entire stack or nothing. I can't run half of my process in 2.7 and half in 3.4. It's also something I've never had to do in Python before; I've always been able to "follow" with the things I write.
I could take a look at, or estimate, the number of users that dropping an older version of Python would affect, and I could say that it was time to drop support for that version because very few people are actively using it and the cost of maintaining support for that version is no longer worth it. This is the same process I go through for *any* backwards incompatible change I make to the things I write and when I drop the old compatibility shims. Ironically, usage of all versions of Python 3 combined is low enough that if they were the *old* versions and not the *new* versions I'd probably drop support for them for now. Really the major reasons I support them at all are that I hold out hope that maybe at some point it will become the dominant Python and that I try to be a good member of the ecosystem and not hold back adoption. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From rosuav at gmail.com Sat Dec 13 07:28:19 2014 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 13 Dec 2014 17:28:19 +1100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: On Sat, Dec 13, 2014 at 5:13 PM, Donald Stufft wrote: > First of all, it's essentially the route that Python itself took and the side > effects of that is essentially what is making things less-fun for me to write > Python. Doing the same to the users of the things I write would make me feel > bad that I was forcing them to either do all the work to port their stuff > (and any dependencies) just so they can use a newer version of my library. Ultimately, those programs WILL have to be migrated, or they will have to remain on an unsupported system. You have the choice of either continuing to do what you find un-fun (cross-compatibility code) until 2020 and maybe beyond, or stopping support for 2.7 sooner than that. All you're doing is changing *when* the inevitable migration happens.
ChrisA From ncoghlan at gmail.com Sat Dec 13 15:48:51 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Dec 2014 00:48:51 +1000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: On 13 December 2014 at 16:28, Chris Angelico wrote: > On Sat, Dec 13, 2014 at 5:13 PM, Donald Stufft wrote: >> First of all, it's essentially the route that Python itself took and the side >> effects of that is essentially what is making things less-fun for me to write >> Python. Doing the same to the users of the things I write would make me feel >> bad that I was forcing them to either do all the work to port their stuff >> (and any dependencies) just so they can use a newer version of my library. > > Ultimately, those programs WILL have to be migrated, or they will have > to remain on an unsupported system. You have the choice of either > continuing to do what you find un-fun (cross-compatibility code) until > 2020 and maybe beyond, or stopping support for 2.7 sooner than that. > All you're doing is changing *when* the inevitable migration happens. One of the biggest blockers has been the lack of ready access to Python 3 on RHEL & CentOS (just as a lot of folks waited until RHEL 7 and CentOS 7 were out before they started dropping Python 2.6 support). It's perfectly sensible for folks in that ecosystem to wait for the appropriate tools to be provided at the platform layer to make it easier for them to switch, but that unfortunately has consequences for upstream library and framework developers who have to continue to support those users. 
The initial release of Software Collections (softwarecollections.org and the associated Red Hat downstream product) was the first step in providing easier access to Python 3 within the RHEL/CentOS ecosystem (as far back as RHEL 6 and CentOS 6), and that approach is still our long term preference for getting user applications out of the system Python (so upstream isn't stuck supporting legacy Python versions for years just because Red Hat is still supporting them for the base RHEL platform). Unfortunately, the usage model for software collections is sufficiently different from running directly in the system Python that a lot of folks currently still prefer to stick with the system version (and that's the case even for the much lower barrier of using the Python 2.7 SCL instead of the system 2.6 installation on RHEL 6). Containerisation is another technology that aims to make it easier to run end user applications and services under newer language runtimes without interfering with the system Python installation. As with software collections, though, the environments that are reluctant to upgrade to newer versions of Python also tend to be reluctant to upgrade to newer deployment technologies, so it will take time for the full impact of the shift to containerisation to make itself felt. 
Finally, in addition to the existing work on getting Fedora 22 (due mid next year) to only ship Python 3 on the LiveCD and other similarly minimalist installations, Slavek (the lead Python maintainer for Fedora & RHEL) is now also actively working on getting Python 3 into EPEL 7: https://fedoraproject.org/wiki/User:Bkabrda/EPEL7_Python3 Between the work that is being done to migrate the platform layer in the Fedora/RHEL/CentOS ecosystem, the work by Brett Cannon and others to improve the tooling around lower risk Python 3 migrations, the inclusion of pip in Python 2.7 to improve the availability of migration tools and backported modules, and the return of printf-style binary interpolation support in Python 3.5, several of the concrete challenges that make migration harder than it needs to be are being addressed. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From barry at python.org Sat Dec 13 16:17:59 2014 From: barry at python.org (Barry Warsaw) Date: Sat, 13 Dec 2014 10:17:59 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: <20141213101759.78ecb965@limelight.wooz.org> On Dec 13, 2014, at 12:29 AM, Donald Stufft wrote: >For what it?s worth, I almost exclusively write 2/3 compatible code (and >that?s with the ?easy? subset of 2.6+ and either 3.2+ or 3.3+) and doing so >does make the language far less fun for me than when I was writing 2.x only >code. For myself, the way I'd put it is: With the libraries I maintain, I generally write Python 2/3 compatible code, targeting Python 2.7 and 3.4, with 2.6, 3.3, and 3.2 support as bonuses, although I will not contort too much to support those older versions. Doing so does make the language far less fun for me than when I am writing 3.x only code. 
All applications I write are in pure Python 3, targeting Python 3.4, unless my dependencies are not all available in Python 3, or I haven't yet had the cycles/resources to port to Python 3. Writing and maintaining applications in Python 2 is far less fun than doing so in Python 3. Cheers, -Barry From donald at stufft.io Sat Dec 13 18:08:59 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Dec 2014 12:08:59 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <20141213101759.78ecb965@limelight.wooz.org> References: <20141213045525.GG20332@ando.pearwood.info> <20141213101759.78ecb965@limelight.wooz.org> Message-ID: > On Dec 13, 2014, at 10:17 AM, Barry Warsaw wrote: > > On Dec 13, 2014, at 12:29 AM, Donald Stufft wrote: > >> For what it's worth, I almost exclusively write 2/3 compatible code (and >> that's with the "easy" subset of 2.6+ and either 3.2+ or 3.3+) and doing so >> does make the language far less fun for me than when I was writing 2.x only >> code. > > For myself, the way I'd put it is: > > With the libraries I maintain, I generally write Python 2/3 compatible code, > targeting Python 2.7 and 3.4, with 2.6, 3.3, and 3.2 support as bonuses, > although I will not contort too much to support those older versions. Doing > so does make the language far less fun for me than when I am writing 3.x only > code. All applications I write are in pure Python 3, targeting Python 3.4, unless > my dependencies are not all available in Python 3, or I haven't yet had the > cycles/resources to port to Python 3. Writing and maintaining applications in > Python 2 is far less fun than doing so in Python 3. > Yeah, that's not unlike me. I don't write many applications where I have a choice of runtime. Most of what I write tends to be libraries, or applications for work where we're using 2.7, or pip itself, where if we dropped 2.7 or 2.6 support people would be after us with pitchforks.
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From Steve.Dower at microsoft.com Sat Dec 13 18:05:29 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sat, 13 Dec 2014 17:05:29 +0000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <20141213101759.78ecb965@limelight.wooz.org> References: <20141213045525.GG20332@ando.pearwood.info> , <20141213101759.78ecb965@limelight.wooz.org> Message-ID: This is also my approach, and the one that I'm encouraging throughout Microsoft as we start putting out more Python packages for stuff. Top-posted from my Windows Phone ________________________________ From: Barry Warsaw Sent: ?12/?13/?2014 7:19 To: python-dev at python.org Subject: Re: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition On Dec 13, 2014, at 12:29 AM, Donald Stufft wrote: >For what it?s worth, I almost exclusively write 2/3 compatible code (and >that?s with the ?easy? subset of 2.6+ and either 3.2+ or 3.3+) and doing so >does make the language far less fun for me than when I was writing 2.x only >code. For myself, the way I'd put it is: With the libraries I maintain, I generally write Python 2/3 compatible code, targeting Python 2.7 and 3.4, with 2.6, 3.3, and 3.2 support as bonuses, although I will not contort too much to support those older versions. Doing so does make the language far less fun for me than when I am writing 3.x only code. All applications I write in pure Python 3, targeting Python 3.4, unless my dependencies are not all available in Python 3, or I haven't yet had the cycles/resources to port to Python 3. Writing and maintaining applications in Python 2 is far less fun than doing so in Python 3. 
Cheers, -Barry _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40microsoft.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Sat Dec 13 22:24:29 2014 From: rdmurray at bitdance.com (R. David Murray) Date: Sat, 13 Dec 2014 16:24:29 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <20141213101759.78ecb965@limelight.wooz.org> References: <20141213045525.GG20332@ando.pearwood.info> <20141213101759.78ecb965@limelight.wooz.org> Message-ID: <20141213212430.5716F250EBF@webabinitio.net> On Sat, 13 Dec 2014 10:17:59 -0500, Barry Warsaw wrote: > On Dec 13, 2014, at 12:29 AM, Donald Stufft wrote: > > >For what it???s worth, I almost exclusively write 2/3 compatible code (and > >that???s with the ???easy??? subset of 2.6+ and either 3.2+ or 3.3+) and doing so > >does make the language far less fun for me than when I was writing 2.x only > >code. > > For myself, the way I'd put it is: > > With the libraries I maintain, I generally write Python 2/3 compatible code, > targeting Python 2.7 and 3.4, with 2.6, 3.3, and 3.2 support as bonuses, > although I will not contort too much to support those older versions. Doing > so does make the language far less fun for me than when I am writing 3.x only > code. All applications I write in pure Python 3, targeting Python 3.4, unless > my dependencies are not all available in Python 3, or I haven't yet had the > cycles/resources to port to Python 3. Writing and maintaining applications in > Python 2 is far less fun than doing so in Python 3. I think this is an important distinction. The considerations are very different for library maintainers than they are for application maintainers. 
Most of my work is in (customer) applications, and except for one customer who insists on using an old version of RedHat, I've been on "latest" python3 for those for quite a while now. I suspect we hear less here from people in that situation than would be proportional to their absolute numbers. --David From ncoghlan at gmail.com Sun Dec 14 01:14:42 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Dec 2014 10:14:42 +1000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: On 13 Dec 2014 05:19, "Petr Viktorin" wrote: > > Also keep in mind that not all Python libraries are on PyPI. > For non-Python projects with Python bindings (think video players, > OpenCV, systemd, Samba), distribution via PyPI doesn't make much > sense. And since the Python bindings are usually second-class > citizens, the porting doesn't have a high priority. > > If anyone is wondering why their favorite Linux distribution is stuck > with Python 2 ? well, I can only speak for Fedora, but nowadays most > of what's left are CPython bindings. > No pylint --py3k or 2to3 will help there... That's a good point. I actually think https://docs.python.org/3/howto/cporting.html#cporting-howto is actually in a worse state than the state the Python level porting guide was in until Brett's latest round of updates, as it covers the underlying technical details of the incompatibilities moreso than the available tools and recommended processes for *executing* a migration. For example, replacing a handcrafted Python extension with a normal C library plus cffi, Cython or SWIG generated Python bindings can deliver both an easier to maintain extension *and* Python 3 compatibility. Similarly, converting an extension from C to Cython outright (without a separate C library) can provide both benefits. It's mainly when converting to one of those isn't desirable and/or feasible that you really need to worry about C API level porting. 
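The attraction of the "plain C library plus generated bindings" route Nick describes is that the binding layer contains no CPython C API calls to port. cffi, Cython and SWIG all automate this; even the stdlib ctypes module shows the shape of the idea. A minimal sketch, assuming a glibc-style libm is available (the "libm.so.6" fallback name is a Linux assumption):

```python
import ctypes
import ctypes.util

# Bind to the system C math library instead of hand-writing an
# extension module; no CPython C API is involved at all, so the
# same binding code runs unchanged on Python 2 and Python 3.
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)

libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

root = libm.sqrt(2.0)
```

cffi and Cython generate faster and more robust versions of the same layer, but the porting property is identical: the C code stays plain C, and only the thin Python-facing surface needs to care which interpreter is running.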
For that, tools like Dave Malcolm's static CPython extension analyser for gcc could potentially be helpful (as pylint was to Brett's update to the Python level guide), and Lennart also provides some more detailed practical suggestions in http://python3porting.com/cextensions.html I'm sure there are other useful techniques that can be employed, but aren't necessarily well known outside the folks that have been busy implementing these migrations. Barry, Petr, any of the other folks working on distro level C extension ports, perhaps one of you would be willing to consider an update to the C extension porting guide to be more in line with Brett's latest version of the Python level porting guide? Regards, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Dec 15 20:30:24 2014 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 15 Dec 2014 11:30:24 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: OK, this seems weird to me: For what it's worth, I almost exclusively write 2/3 compatible code (and > that's > with the "easy" subset of 2.6+ and either 3.2+ or 3.3+) ouch. > However the way it "used" to work > is that the newest version, with all the new features, would quickly become > the dominant version within a year or two. This seems completely contradictory to me: Yes, the 3.* transition can be difficult, thus the need to support 2.*. But if you are still supporting 2.6, then clearly "the newest version, with all the new features, would quickly become > the dominant version within a year or two" But there are those use cases that seem to require sticking with old versions for ages, even if there have not been substantial incompatible changes. So we could be on version 2.12 now, and you'd still need to support 2.6, and still be working in a legacy, least common denominator language.
How does this have anything to do with the 3.* transition? But plenty of us are kind of stuck on 2.7 at this point -- we can upgrade, but can't accommodate a major shift (for me it's currently wxPython that's the blocker -- that may be the only one. Others are either supported or small enough that we could handle the port ourselves.) But anyway, if you didn't hate 2.6 back in the day, why hate it now? (yes, I know Donald didn't use the "hate" word). I guess my point is that you either would much prefer to be working with the latest and greatest cool features or not -- but if you do, the problem at this point isn't anything about py3, it's about the fact that many of us are required to support old versions, period. -Chris However I can't really justify for most situations supporting _only_ those > things because even today they are not the dominant version (or really > close > to it in any number I have access to). This means that if I want to take > advantage of something newer I'm essentially dropping the largest part of > the ecosystem. > Are you primarily writing packages for others to use? If so, then yes. But I wonder how many people are in that camp? Don't most of us spend most of our time writing our own purpose-built code? That might be a nice thing to see in a survey, actually. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From donald at stufft.io Mon Dec 15 21:06:39 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 15 Dec 2014 15:06:39 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: <87A1CFBB-1A2E-4837-B39C-803730F9E1C5@stufft.io> > On Dec 15, 2014, at 2:30 PM, Chris Barker wrote: > > OK, this seems weird to me: > > For what it's worth, I almost exclusively write 2/3 compatible code (and that's > with the "easy" subset of 2.6+ and either 3.2+ or 3.3+) > > ouch. > > However the way it "used" to work > is that the newest version, with all the new features, would quickly become > the dominant version within a year or two. > > This seems completely contradictory to me: Yes, the 3.* transition can be difficult, thus the need to support 2.*. But if you are still supporting 2.6, then clearly "the newest version, with all the new features, would quickly become > the dominant version within a year or two" > > But there are those use cases that seem to require sticking with old versions for ages, even if there have not been substantial incompatible changes. > > So we could be on version 2.12 now, and you'd still need to support 2.6, and still be working in a legacy, least common denominator language. How does this have anything to do with the 3.* transition? Most of my libraries probably wouldn't be 2.6+ if there was something after 2.7. Other than pip itself I mostly only support 2.6 because it's easy to do compared to 2.7 and there's nothing in 2.7 that really makes me care to drop it in most situations. Realistically that's what every decision to drop a version for a library ends up being: look at guesstimate numbers for the old version, and decide if that segment of the user base is worth either the pain of supporting back that far or missing out on the newer features.
For 2.7 over 2.6 that answer for me is primarily no it's not (though 2.7.9 might make me start dropping support for older versions once it's widely deployed). > > But plenty of us are kind of stuck on 2.7 at this point -- we can upgrade, but can't accommodate a major shift (for me it's currently wxPython that's the blocker -- that may be the only one. Others are either supported or small enough that we could handle the port ourselves.) > > But anyway, if you didn't hate 2.6 back in the day, why hate it now? The answer is generally that developers are human beings and like new things, so while 2.6 might have been great back in the day, it's not back in the day anymore and they are tired of it. > > (yes, I know Donald didn't use the "hate" word). > > I guess my point is that you either would much prefer to be working with the latest and greatest cool features or not -- but if you do, the problem at this point isn't anything about py3, it's about the fact that many of us are required to support old versions, period. Right, it's not _exactly_ about Python 3, but Python 3.0 made it so that an old version is by far the dominant version, which puts people who have outside users in a situation where they have to decide between new-and-shiny but hurting the bulk of their users and old-and-busted and being friendly to the bulk of their users. > > -Chris > > > However I can't really justify for most situations supporting _only_ those > things because even today they are not the dominant version (or really close > to it in any number I have access to). This means that if I want to take > advantage of something newer I'm essentially dropping the largest part of > the ecosystem. > > Are you primarily writing packages for others to use? If so, then yes. But I wonder how many people are in that camp? Don't most of us spend most of our time writing our own purpose-built code? Yes I am.
> > > -Chris > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Dec 15 22:26:27 2014 From: barry at python.org (Barry Warsaw) Date: Mon, 15 Dec 2014 16:26:27 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: <20141215162627.7ac4faab@limelight.wooz.org> On Dec 14, 2014, at 10:14 AM, Nick Coghlan wrote: >Barry, Petr, any of the other folks working on distro level C extension >ports, perhaps one of you would be willing to consider an update to the C >extension porting guide to be more in line with Brett's latest version of >the Python level porting guide? It's probably at least worth incorporating the quick guide on the wiki into the howto: https://wiki.python.org/moin/PortingToPy3k/BilingualQuickRef Cheers, -Barry From wizzat at gmail.com Tue Dec 16 04:08:17 2014 From: wizzat at gmail.com (Mark Roberts) Date: Mon, 15 Dec 2014 19:08:17 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: On Mon, Dec 15, 2014 at 11:30 AM, Chris Barker wrote: > Are you primarily writing packages for others to use? if so, then yes. But > I wonder how many people are in that camp? Don't most of us spend most of > our time writing our own purpose-built code? > > That might be a nice thing to see in a survey, actually. > So, I'm the guy that used the "hate" word in relation to writing 2/3 compliant code. And really, as a library maintainer/writer I do hate writing 2/3 compatible code. 
Having 4 future imports in every file and being forced to use a compatibility shim to do something as simple as iterating across a dictionary is somewhere between sad and infuriating - and that's just the beginning of the madness. From there we get into identifier related problems with their own compatibility shims - like str(), unicode(), bytes(), int(), and long(). Writing 2/3 compatible Python feels more like torture than fun. Even the python-future.org FAQ mentions how un-fun writing 2/3 compatible Python is. The whole situation is made worse because I *KNOW* that Python 3 is a better language than Python 2, but that it doesn't *MATTER* because Python 2 is what people are - and will be - using for the foreseeable future. It's impractical to drop library support for Python 2 when all of your users use Python 2, and bringing the topic up yields a response that amounts to: "WELL, Python 3 is the future! It has been out for SEVEN YEARS! You know Python 2 won't be updated ever again! Almost every library has been updated to Python 3 and you're just behind the times! And, you'll have to switch EVENTUALLY anyway! If you'd just stop writing Python 2 libraries and focus on pure Python 3 then you wouldn't have to write legacy code! PEOPLE LIKE YOU are why the split is going to be there until at least 2020!". And then my head explodes from the hostility of the "core Python community". Perhaps no individual response is quite so blunt, but the community (taken as a whole) feels outright toxic on this topic to me. 
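The boilerplate Mark describes looks roughly like the following at the top of every module. The shim names (text_type, iteritems, and so on) follow the convention popularised by six, but this is an illustrative sketch rather than six's actual code:

```python
from __future__ import absolute_import, division, print_function, unicode_literals

import sys

PY2 = sys.version_info[0] == 2

if PY2:
    text_type = unicode           # noqa: F821 -- only defined on Python 2
    integer_types = (int, long)   # noqa: F821

    def iteritems(d):
        # Python 2 dicts have a lazy iteritems() method.
        return d.iteritems()
else:
    text_type = str
    integer_types = (int,)

    def iteritems(d):
        # Python 3 items() is already a lazy view.
        return iter(d.items())

# Every dictionary loop in the codebase then routes through the shim:
for key, value in iteritems({"spam": 1, "eggs": 2}):
    assert isinstance(key, text_type)
```

Multiplied across every module in a library, that preamble is the "lowest common denominator" tax in its most visible form.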
Consider some statistics from PyPI:

- 13359 Python 2.7 packages
- 7140 Python 3.x packages
- 2732 Python 3.4 packages
- 4024 Python 2.7/3.x compatible packages
- 2281 2.7/3.4 compatible modules
- 9335 Python 2.7 packages without ANY Python 3 support
- 11078 Python 2.7 packages without Python 3.4 support
- 451 Python 3.4-only packages
- 3116 Python 3.x-only packages
- 1004 Python 3.x modules without (tagged) Python 3.4 support

Looking at the numbers, I just cannot fathom how we as a community can react this way. The top 50 projects (!!) still prevent a lot of people from switching to Python 3, but the long tail of libraries is likely an even bigger blocker. I also don't understand how we can claim people should start ALL new projects in Python 3 - and be indignant when they don't! We haven't successfully converted the top 50 projects after SEVEN YEARS, and the long tail without 3.x support is still getting longer. Claims that we have something approaching library parity seem... hilarious, at best? I suppose what I'm saying is that there's lots of claims that the conversion to Python 3 is inevitable, but I'm not convinced about that. I'd posit that another outcome is the slow death of Python as a language. I would suggest adding some "community health" metrics around the Python 2/3 split, as well as a question about whether someone considers themselves primarily a library author, application developer, or both. I'd also ask how many people have started a new application in another language instead of Python 3 because of the split. -Mark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben+python at benfinney.id.au Tue Dec 16 06:00:57 2014 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 16 Dec 2014 16:00:57 +1100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: <85wq5snq7q.fsf@benfinney.id.au> Mark Roberts writes: > So, I'm the guy that used the "hate" word in relation to writing 2/3 > compliant code. And really, as a library maintainer/writer I do hate > writing 2/3 compatible code. You're unlikely to get disagreement on that. I certainly concur. The catch is, at the moment it's better than any of the alternatives for writing medium-to-long-term maintainable Python code. > The whole situation is made worse because I *KNOW* that Python 3 is a > better language than Python 2, but that it doesn't *MATTER* because > Python 2 is what people are - and will be - using for the foreseeable > future. Only if "people" means "any amount of people at all who are or might be using Python". While developers might like something that allows them to serve such a broad user base indefinitely, it's simply not realistic -- and *no* feasible support strategy for Python could allow that. So, as developers writing Python code, we must set our expectations for support base according to reality rather than wishing it were otherwise. > It's impractical to drop library support for Python 2 when all of your > users use Python 2 Right. The practical thing to do is to decide explicitly, per project, what portion of those users you can afford to leave behind in Python-2-only land, and how much cost you're willing to bear to keep that number low. > I suppose what I'm saying is that there's lots of claims that the > conversion to Python 3 is inevitable, but I'm not convinced about > that. I've never seen such a claim from the PSF. Can you cite any, and are they representative of the PSF's position on the issue?
Rather, the claim is that *if* one's code base doesn't migrate to Python 3, it will be decreasingly supported by the PSF and the Python community at large. Happily, what's also true is there is a huge amount of support -- in language features, tools, and community will -- to help developers do that migration. Much more than most other migrations I've observed. So what's inevitable is: either a code base will benefit from all that support and be migrated to Python 3 and hence be maintainable as the Python ecosystem evolves; or it will be increasingly an outsider of that ecosystem. -- \ "I have one rule to live by: Don't make it worse." --Hazel | `\ Woodcock | _o__) | Ben Finney From alex.gaynor at gmail.com Tue Dec 16 06:06:57 2014 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Tue, 16 Dec 2014 05:06:57 +0000 (UTC) Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: <20141213045525.GG20332@ando.pearwood.info> <85wq5snq7q.fsf@benfinney.id.au> Message-ID: Ben Finney benfinney.id.au> writes: > > Rather, the claim is that *if* one's code base doesn't migrate to Python > 3, it will be decreasingly supported by the PSF and the Python community > at large. > The PSF doesn't support any versions of Python. We have effectively no involvement in the development of Python the language, or CPython. We certainly don't care what version of Python you use. Members of the python-dev list, or the CPython core development teams have opinions probably, but that doesn't make them the opinion of the PSF.
Alex From ben+python at benfinney.id.au Tue Dec 16 07:03:09 2014 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 16 Dec 2014 17:03:09 +1100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: <20141213045525.GG20332@ando.pearwood.info> <85wq5snq7q.fsf@benfinney.id.au> Message-ID: <85siggnnc2.fsf@benfinney.id.au> Alex Gaynor writes: > Ben Finney benfinney.id.au> writes: > > > Rather, the claim is that *if* one's code base doesn't migrate to > > Python 3, it will be decreasingly supported by the PSF and the > > Python community at large. > > The PSF doesn't support any versions of Python. We have effectively no > involvement in the development of Python the language, or CPython. We > certainly don't care what version of Python you use. Okay, I was under the impression that the entity blessing Python releases as "official" is the PSF. What is that entity, then? Whatever entity is the one which makes "this is an official release of Python the language", and "support for Python version A.B will end on YYYY-MM-DD", that's the entity I meant. -- \ "I went to the museum where they had all the heads and arms | `\ from the statues that are in all the other museums." --Steven | _o__) Wright | Ben Finney From ncoghlan at gmail.com Tue Dec 16 07:27:26 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Dec 2014 16:27:26 +1000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: On 16 December 2014 at 13:08, Mark Roberts wrote: > The whole situation is made worse because I *KNOW* that Python 3 is a better > language than Python 2, but that it doesn't *MATTER* because Python 2 is > what people are - and will be - using for the foreseeable future. It's > impractical to drop library support for Python 2 when all of your users use > Python 2, and bringing the topic up yields a response that amounts to: > "WELL, Python 3 is the future!
It has been out for SEVEN YEARS! You know > Python 2 won't be updated ever again! Almost every library has been updated > to Python 3 and you're just behind the times! And, you'll have to switch > EVENTUALLY anyway! If you'd just stop writing Python 2 libraries and focus > on pure Python 3 then you wouldn't have to write legacy code! PEOPLE LIKE > YOU are why the split is going to be there until at least 2020!". And then > my head explodes from the hostility of the "core Python community". Perhaps > no individual response is quite so blunt, but the community (taken as a > whole) feels outright toxic on this topic to me. The core Python development community are the ones ensuring that folks feel comfortable continuing to run Python 2 (by promising upstream support out to 2020 and adjusting our maintenance release policies to account for the practical realities of long term support), as well as working with redistributors and tool developers to reduce the practical barriers to migration from Python 2 to Python 3 (such as bundling pip with Python 2.7.9, or Brett's recent work on updating the porting guide). It's the folks just *outside* the language core development community that legitimately feel the most hard done by, as they didn't choose this path - we did. Folks working on libraries and frameworks likely won't see any direct benefit from the migration for years - given the timelines of previous version transitions within the Python 2 series, we likely won't see projects widely dropping Python 2 support until after there are versions of RHEL & CentOS available where the default system Python is Python 3. In the meantime, they're stuck with working in a hybrid language that only benefits from the subset of improvements in each new Python 3 release that increase the size of the source compatible Python 2/3 subset. 
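A few concrete members of that source-compatible Python 2/3 subset, as a sketch. This file is written to run unchanged on Python 2.7 and Python 3.3+; the feature list is illustrative, not exhaustive:

```python
# Illustrative members of the source-compatible 2/3 subset Nick mentions.
# Assumes Python 2.7 or 3.3+ (u'' literals were re-allowed in 3.3).
from __future__ import print_function  # print() as a function on 2.x; a no-op on 3.x

text = u"caf\xe9"                                 # u'' prefix works on 2.x and 3.3+
squares = {n: n * n for n in range(4)}            # dict comprehensions: 2.7 and 3.x
evens = {n for n in range(10) if n % 2 == 0}      # set comprehensions likewise

try:
    1 / 0
except ZeroDivisionError as exc:                  # 'as' spelling works on 2.6+ and 3.x
    message = str(exc)

print(sorted(evens))  # [0, 2, 4, 6, 8]
```

Each new Python 3 release that restores or shares a spelling with Python 2 (as 3.3 did with `u''` literals) grows this subset, which is the only part of the language a bilingual library author gets to use.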
Living with carrier grade operating system update cycles when you're used to upgrading your baseline target Python version every couple of years is genuinely frustrating as a developer. Unfortunately, the anger that library and framework authors should really be directing at us, and at the commercial Linux distros offering long term support for older versions of Python, occasionally spills over into frustration at the *end users* that benefit from those long term support offerings. Explanations of the overarching industry patterns influencing the migration (like http://developerblog.redhat.com/2014/09/09/transition-to-multilingual-programming-python/) are cold comfort when you're one of the ones actually doing the work of supporting two parallel variants of the language. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Dec 16 07:34:47 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Dec 2014 16:34:47 +1000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <85siggnnc2.fsf@benfinney.id.au> References: <20141213045525.GG20332@ando.pearwood.info> <85wq5snq7q.fsf@benfinney.id.au> <85siggnnc2.fsf@benfinney.id.au> Message-ID: On 16 December 2014 at 16:03, Ben Finney wrote: > Alex Gaynor writes: > >> Ben Finney benfinney.id.au> writes: >> >> > Rather, the claim is that *if* one's code base doesn't migrate to >> > Python 3, it will be decreasingly supported by the PSF and the >> > Python community at large. >> >> The PSF doesn't support any versions of Python. We have effectively no >> involvement in the development of Python the language, or CPython. We >> certainly don't care what version of Python you use. > > Okay, I was under the impression that the entity blessing Python > releases as "official" is the PSF. What is that entity, then?
The PSF controls the trademark, but it's the comparatively informal collective known as python-dev (ultimately helmed by Guido) that makes the technical decisions. To the degree to which the latter body is formally defined by anything, it's defined by PEP 1. > Whatever entity is the one which makes "this is an official release of > Python the language", and "support for Python version A.B will end on > YYYY-MM-DD", that's the entity I meant. That would be the release managers for the respective CPython releases (in collaboration with the rest of python-dev). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Tue Dec 16 11:45:27 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 16 Dec 2014 11:45:27 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: <20141213045525.GG20332@ando.pearwood.info> Message-ID: <20141216114527.1b6ff6a6@fsol> On Mon, 15 Dec 2014 19:08:17 -0800 Mark Roberts wrote: > > So, I'm the guy that used the "hate" word in relation to writing 2/3 > compliant code. And really, as a library maintainer/writer I do hate > writing 2/3 compatible code. Having 4 future imports in every file and > being forced to use a compatibility shim to do something as simple as > iterating across a dictionary is somewhere between sad and infuriating - > and that's just the beginning of the madness. Iterating across a dictionary doesn't need compatibility shims. It's dead simple in all Python versions: $ python2 Python 2.7.8 (default, Oct 20 2014, 15:05:19) [GCC 4.9.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> d = {'a': 1} >>> for k in d: print(k) ... a $ python3 Python 3.4.2 (default, Oct 8 2014, 13:08:17) [GCC 4.9.1] on linux Type "help", "copyright", "credits" or "license" for more information. >>> d = {'a': 1} >>> for k in d: print(k) ...
a
Besides, using iteritems() and friends is generally a premature optimization, unless you know you'll have very large containers. Creating a list is cheap. > From there we get into > identifier related problems with their own compatibility shims - like > str(), unicode(), bytes(), int(), and long(). Writing 2/3 compatible Python > feels more like torture than fun. It depends what kind of purpose your code is written for, or how you write it. Unless you have a lot of network-facing code, writing 2/3 compatible code should actually be quite pedestrian. Regards Antoine. From encukou at gmail.com Tue Dec 16 12:05:35 2014 From: encukou at gmail.com (Petr Viktorin) Date: Tue, 16 Dec 2014 12:05:35 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: Message-ID: On Sun, Dec 14, 2014 at 1:14 AM, Nick Coghlan wrote: [...] > Barry, Petr, any of the other folks working on distro level C extension > ports, perhaps one of you would be willing to consider an update to the C > extension porting guide to be more in line with Brett's latest version of > the Python level porting guide? I can make it a 20%-time project starting in January, if no-one beats me to it. From mbec at gmto.org Tue Dec 16 18:10:52 2014 From: mbec at gmto.org (matthieu bec) Date: Tue, 16 Dec 2014 09:10:52 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <20141211204652.40a0c807@fsol> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> Message-ID: <5490679C.9050308@gmto.org> Agreed with Antoine, strftime/strptime are somewhat different concerns. Doesn't mean they cannot be fixed at the same time but it's a bit separate.
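The information-loss concern raised earlier in the thread can be made concrete with a short sketch. The nanosecond mtime value below is invented for illustration; on Python 3.3+ a real one would come from os.stat(path).st_mtime_ns:

```python
# Sketch of the problem under discussion: a nanosecond-resolution OS
# timestamp does not survive a round trip through datetime's microsecond
# field. The mtime value is a made-up example, not from a real file.
import datetime

mtime_ns = 1418757052123456789  # e.g. a value like os.stat(path).st_mtime_ns

seconds, nanoseconds = divmod(mtime_ns, 10**9)
dt = datetime.datetime.fromtimestamp(seconds, tz=datetime.timezone.utc).replace(
    microsecond=nanoseconds // 1000)  # truncates: datetime stops at microseconds

lost_ns = nanoseconds % 1000
print(dt.isoformat(), "lost", lost_ns, "ns")  # the trailing 789 ns are gone
```

Whichever module ends up holding a timespec-style value, the datetime constructor as it stands can only carry the first six of the nine fractional digits.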
I'm not sure this thread / discussion has reached critical mass yet, meanwhile I was looking at what was involved and came up with this half-baked patch - in no way meant to be complete or correct, but rather to get a feel for it. I can't help thinking this is much more involved than I first expected, and what I try to achieve should be reasonably simple. Stepping back a little, I wonder if the datetime module is really the right location, that has constructor(year, month, day, ..., second, microsecond) - with 0 On Thu, 11 Dec 2014 13:43:05 -0600 > Skip Montanaro wrote: >> On Thu, Dec 11, 2014 at 1:23 PM, Antoine Pitrou wrote: >>> I think strftime / strptime support is a low-priority concern on this >>> topic, and can probably be discussed independently of the core >>> nanosecond support. >> >> Might be low-priority, but with %f support as a template, supporting >> something to specify nanoseconds should be pretty trivial. The hardest >> question will be to convince ourselves that we aren't choosing a >> format code which some other strftime/strptime implementation is >> already using. >> >> In addition, ISTR that one of the use cases was analysis of datetime >> data generated by other applications which has nanosecond resolution. > > One of the use cases is to deal with OS-generated timestamps without > losing information. As long as you don't need to represent or parse > those timestamps, strptime / strftime don't come into the picture. > > Regards > > Antoine. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/mdcb808%40gmail.com > -- Matthieu Bec GMTO Corp cell : +1 626 425 7923 251 S Lake Ave, Suite 300 phone: +1 626 204 0527 Pasadena, CA 91101 -------------- next part -------------- A non-text attachment was scrubbed...
Name: datetime_ns.patch Type: text/x-patch Size: 39257 bytes Desc: not available URL: From wizzat at gmail.com Tue Dec 16 19:48:07 2014 From: wizzat at gmail.com (Mark Roberts) Date: Tue, 16 Dec 2014 10:48:07 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <20141216114527.1b6ff6a6@fsol> References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou wrote: > > Iterating accross a dictionary doesn't need compatibility shims. It's > dead simple in all Python versions: > > $ python2 > Python 2.7.8 (default, Oct 20 2014, 15:05:19) > [GCC 4.9.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> d = {'a': 1} > >>> for k in d: print(k) > ... > a > > $ python3 > Python 3.4.2 (default, Oct 8 2014, 13:08:17) > [GCC 4.9.1] on linux > Type "help", "copyright", "credits" or "license" for more information. > >>> d = {'a': 1} > >>> for k in d: print(k) > ... > a > > Besides, using iteritems() and friends is generally a premature > optimization, unless you know you'll have very large containers. > Creating a list is cheap. > It seems to me that every time I hear this, the author is basically admitting that Python is a toy language not meant for "serious computing" (where serious is defined in extremely modest terms). The advice is also very contradictory to literally every talk on performant Python that I've seen at PyCon or PyData or ... well, anywhere. And really, doesn't it strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of Python 3 a "premature optimization"? Isn't the whole reason that the default behavior switch was made is because creating lists willy nilly all over the place really *ISN'T* cheap? This isn't the first time someone has tried to run this line past me, but it's the first time I've been fed up enough with the topic to call it complete BS on the spot. 
Please help me stop the community at large from saying this, because it really isn't true at all. -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Dec 16 19:57:40 2014 From: brett at python.org (Brett Cannon) Date: Tue, 16 Dec 2014 18:57:40 +0000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: Mark, your tone is no longer constructive and is hurting your case in arguing for anything. Please take it down a notch. On Tue Dec 16 2014 at 1:48:59 PM Mark Roberts wrote: > On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou > wrote: >> >> Iterating accross a dictionary doesn't need compatibility shims. It's >> dead simple in all Python versions: >> >> $ python2 >> Python 2.7.8 (default, Oct 20 2014, 15:05:19) >> [GCC 4.9.1] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> d = {'a': 1} >> >>> for k in d: print(k) >> ... >> a >> >> $ python3 >> Python 3.4.2 (default, Oct 8 2014, 13:08:17) >> [GCC 4.9.1] on linux >> Type "help", "copyright", "credits" or "license" for more information. >> >>> d = {'a': 1} >> >>> for k in d: print(k) >> ... >> a >> >> Besides, using iteritems() and friends is generally a premature >> optimization, unless you know you'll have very large containers. >> Creating a list is cheap. >> > > It seems to me that every time I hear this, the author is basically > admitting that Python is a toy language not meant for "serious computing" > (where serious is defined in extremely modest terms). The advice is also > very contradictory to literally every talk on performant Python that I've > seen at PyCon or PyData or ... well, anywhere. And really, doesn't it > strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of > Python 3 a "premature optimization"? 
Isn't the whole reason that the > default behavior switch was made is because creating lists willy nilly all > over the place really *ISN'T* cheap? This isn't the first time someone has > tried to run this line past me, but it's the first time I've been fed up > enough with the topic to call it complete BS on the spot. Please help me > stop the community at large from saying this, because it really isn't true > at all. > > -Mark > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wizzat at gmail.com Tue Dec 16 20:05:18 2014 From: wizzat at gmail.com (Mark Roberts) Date: Tue, 16 Dec 2014 11:05:18 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: Perhaps you are correct, and I will attempt to remain more constructive on the topic (despite it being an *incredibly* frustrating experience). However, my point remains: this is a patently false thing that is being parroted throughout the Python community, and it's outright insulting to be told my complaints about writing 2/3 compatible code are invalid on the basis of "premature optimization". -Mark On Tue, Dec 16, 2014 at 10:57 AM, Brett Cannon wrote: > > Mark, your tone is no longer constructive and is hurting your case in > arguing for anything. Please take it down a notch. > > On Tue Dec 16 2014 at 1:48:59 PM Mark Roberts wrote: > >> On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou >> wrote: >>> >>> Iterating accross a dictionary doesn't need compatibility shims. 
It's >>> dead simple in all Python versions: >>> >>> $ python2 >>> Python 2.7.8 (default, Oct 20 2014, 15:05:19) >>> [GCC 4.9.1] on linux2 >>> Type "help", "copyright", "credits" or "license" for more information. >>> >>> d = {'a': 1} >>> >>> for k in d: print(k) >>> ... >>> a >>> >>> $ python3 >>> Python 3.4.2 (default, Oct 8 2014, 13:08:17) >>> [GCC 4.9.1] on linux >>> Type "help", "copyright", "credits" or "license" for more information. >>> >>> d = {'a': 1} >>> >>> for k in d: print(k) >>> ... >>> a >>> >>> Besides, using iteritems() and friends is generally a premature >>> optimization, unless you know you'll have very large containers. >>> Creating a list is cheap. >>> >> >> It seems to me that every time I hear this, the author is basically >> admitting that Python is a toy language not meant for "serious computing" >> (where serious is defined in extremely modest terms). The advice is also >> very contradictory to literally every talk on performant Python that I've >> seen at PyCon or PyData or ... well, anywhere. And really, doesn't it >> strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of >> Python 3 a "premature optimization"? Isn't the whole reason that the >> default behavior switch was made is because creating lists willy nilly all >> over the place really *ISN'T* cheap? This isn't the first time someone has >> tried to run this line past me, but it's the first time I've been fed up >> enough with the topic to call it complete BS on the spot. Please help me >> stop the community at large from saying this, because it really isn't true >> at all. >> >> -Mark >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> brett%40python.org >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skip.montanaro at gmail.com Tue Dec 16 20:08:30 2014 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Tue, 16 Dec 2014 13:08:30 -0600 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <5490679C.9050308@gmto.org> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> Message-ID: On Tue, Dec 16, 2014 at 11:10 AM, matthieu bec wrote: > Agreed with Antoine, strftime/strptime are somewhat different concerns. > Doesn't mean thay cannot be fixed at the same time but it's a bit a > separate. Which reminds me... Somewhere else (maybe elsewhere in this thread? maybe on a bug tracker issue?) someone mentioned that Ruby uses %N for fractions of a second (and %L specifically for milliseconds). Here's the bit from the Ruby strftime doc: %L - Millisecond of the second (000..999) %N - Fractional seconds digits, default is 9 digits (nanosecond) %3N millisecond (3 digits) %6N microsecond (6 digits) %9N nanosecond (9 digits) %12N picosecond (12 digits) There's no obvious reason I can see to reinvent this particular wheel, at least the %N spoke. The only question might be whether to modify Python's existing %f format to accept a precision (defaulting to 6), or add %N in a manner similar (or identical) to Ruby's semantics. Skip -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Tue Dec 16 20:18:20 2014 From: rdmurray at bitdance.com (R. 
David Murray) Date: Tue, 16 Dec 2014 14:18:20 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: <20141216191821.0D188250ED0@webabinitio.net> On Tue, 16 Dec 2014 10:48:07 -0800, Mark Roberts wrote: > On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou wrote: > > > > Iterating accross a dictionary doesn't need compatibility shims. It's > > dead simple in all Python versions: > > > > $ python2 > > Python 2.7.8 (default, Oct 20 2014, 15:05:19) > > [GCC 4.9.1] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> d = {'a': 1} > > >>> for k in d: print(k) > > ... > > a > > > > $ python3 > > Python 3.4.2 (default, Oct 8 2014, 13:08:17) > > [GCC 4.9.1] on linux > > Type "help", "copyright", "credits" or "license" for more information. > > >>> d = {'a': 1} > > >>> for k in d: print(k) > > ... > > a > > > > Besides, using iteritems() and friends is generally a premature > > optimization, unless you know you'll have very large containers. > > Creating a list is cheap. > > > > It seems to me that every time I hear this, the author is basically > admitting that Python is a toy language not meant for "serious computing" > (where serious is defined in extremely modest terms). The advice is also > very contradictory to literally every talk on performant Python that I've > seen at PyCon or PyData or ... well, anywhere. And really, doesn't it > strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of > Python 3 a "premature optimization"? No. A premature optimization is one that is made before doing any performance analysis, so language features are irrelevant to that labeling. This doesn't mean you shouldn't use "better" idioms when they are clear. But if you are complicating your code because of performance concerns *without measuring it* you are doing premature optimization, by definition[*]. 
> Isn't the whole reason that the > default behavior switch was made is because creating lists willy nilly all > over the place really *ISN'T* cheap? This isn't the first time someone has No. In Python3 we made the iterator protocol more central to the language. Any performance benefit is actually a side effect of that change. One that was considered, yes, but in the context of the *language* as a whole and not any individual program's performance profile. And "this doesn't make things worse for real world programs as far as we can measure" is a more important criterion for this kind of language change than "lets do this because we've measured and it makes things better". --David [*] And yes, *we all do this*. Sometimes doing it doesn't cost much. Sometimes it does. From guido at python.org Tue Dec 16 20:21:18 2014 From: guido at python.org (Guido van Rossum) Date: Tue, 16 Dec 2014 11:21:18 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> Message-ID: I vote to copy Ruby's %N and leave %f alone. On Tue, Dec 16, 2014 at 11:08 AM, Skip Montanaro wrote: > > > On Tue, Dec 16, 2014 at 11:10 AM, matthieu bec wrote: > > Agreed with Antoine, strftime/strptime are somewhat different concerns. > > Doesn't mean thay cannot be fixed at the same time but it's a bit a > > separate. > > Which reminds me... Somewhere else (maybe elsewhere in this thread? maybe > on a bug tracker issue?) someone mentioned that Ruby uses %N for fractions > of a second (and %L specifically for milliseconds). 
Here's the bit from the > Ruby strftime doc: > > %L - Millisecond of the second (000..999) > %N - Fractional seconds digits, default is 9 digits (nanosecond) > %3N millisecond (3 digits) > %6N microsecond (6 digits) > %9N nanosecond (9 digits) > %12N picosecond (12 digits) > > There's no obvious reason I can see to reinvent this particular wheel, at > least the %N spoke. The only question might be whether to modify Python's > existing %f format to accept a precision (defaulting to 6), or add %N in a > manner similar (or identical) to Ruby's semantics. > > Skip > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbec at gmto.org Tue Dec 16 20:21:19 2014 From: mbec at gmto.org (Matthieu Bec) Date: Tue, 16 Dec 2014 11:21:19 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> Message-ID: <5490862F.1080501@gmto.org> yes that was mentioned in this thread, %nN looks quite reasonable. still, I'd like to steer the conversation back to the other aspect - where should something like struct_timespec land in the first place, is datetime really the best to capture that? Most of the conversation has been revolving around strftime/strptime. That seems to validate Antoine's point in the first place. Let's see what people say but maybe this thread should end and restart as separate topics?
Regards, Matthieu On 12/16/14 11:08 AM, Skip Montanaro wrote: > > On Tue, Dec 16, 2014 at 11:10 AM, matthieu bec > wrote: > > Agreed with Antoine, strftime/strptime are somewhat different concerns. > > Doesn't mean thay cannot be fixed at the same time but it's a bit a > > separate. > > Which reminds me... Somewhere else (maybe elsewhere in this thread? > maybe on a bug tracker issue?) someone mentioned that Ruby uses %N for > fractions of a second (and %L specifically for milliseconds). Here's the > bit from the Ruby strftime doc: > > %L - Millisecond of the second (000..999) > %N - Fractional seconds digits, default is 9 digits (nanosecond) > %3N millisecond (3 digits) > %6N microsecond (6 digits) > %9N nanosecond (9 digits) > %12N picosecond (12 digits) > > There's no obvious reason I can see to reinvent this particular wheel, > at least the %N spoke. The only question might be whether to modify > Python's existing %f format to accept a precision (defaulting to 6), or > add %N in a manner similar (or identical) to Ruby's semantics. > > Skip > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/mdcb808%40gmail.com > -- Matthieu Bec GMTO Corp cell : +1 626 425 7923 251 S Lake Ave, Suite 300 phone: +1 626 204 0527 Pasadena, CA 91101 From guido at python.org Tue Dec 16 20:33:00 2014 From: guido at python.org (Guido van Rossum) Date: Tue, 16 Dec 2014 11:33:00 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) 
In-Reply-To: <5490862F.1080501@gmto.org> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> <5490862F.1080501@gmto.org> Message-ID: Aren't the wrappers for the kernel's time-related structs typically in the time module? That seems the place to start. Eventually we can support going between those structs and the datetime datatype (the latter may have to grow an option to specify nsec). On Tue, Dec 16, 2014 at 11:21 AM, Matthieu Bec wrote: > > yes that was mentioned in this thread, %nN looks quite reasonable. > > still, I'd like to steer the conversation back to the other aspect - where > should something like struct_timespec land in the first place, is datetime > really the best to capture that? > > Most of the conversation has been revolving around strftime/strptime. > That seems to validate Antoine's point in the first place. > > Let's see what people say but maybe this thread should end to restart as > separate topics? > > Regards, > Matthieu > > On 12/16/14 11:08 AM, Skip Montanaro wrote: > >> >> On Tue, Dec 16, 2014 at 11:10 AM, matthieu bec > > wrote: >> > Agreed with Antoine, strftime/strptime are somewhat different concerns. >> > Doesn't mean thay cannot be fixed at the same time but it's a bit a >> > separate. >> >> Which reminds me... Somewhere else (maybe elsewhere in this thread? >> maybe on a bug tracker issue?) someone mentioned that Ruby uses %N for >> fractions of a second (and %L specifically for milliseconds).

Here's the >> bit from the Ruby strftime doc: >> >> %L - Millisecond of the second (000..999) >> %N - Fractional seconds digits, default is 9 digits (nanosecond) >> %3N millisecond (3 digits) >> %6N microsecond (6 digits) >> %9N nanosecond (9 digits) >> %12N picosecond (12 digits) >> >> There's no obvious reason I can see to reinvent this particular wheel, >> at least the %N spoke. The only question might be whether to modify >> Python's existing %f format to accept a precision (defaulting to 6), or >> add %N in a manner similar (or identical) to Ruby's semantics. >> >> Skip >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> mdcb808%40gmail.com >> >> > -- > Matthieu Bec GMTO Corp > cell : +1 626 425 7923 251 S Lake Ave, Suite 300 > phone: +1 626 204 0527 Pasadena, CA 91101 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Dec 16 20:25:35 2014 From: brett at python.org (Brett Cannon) Date: Tue, 16 Dec 2014 19:25:35 +0000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: On Tue Dec 16 2014 at 2:05:28 PM Mark Roberts wrote: > Perhaps you are correct, and I will attempt to remain more constructive on > the topic (despite it being an *incredibly* frustrating experience). 
> However, my point remains: this is a patently false thing that is being > parroted throughout the Python community, and it's outright insulting to be > told my complaints about writing 2/3 compatible code are invalid on the > basis of "premature optimization". > See, you're still using a very negative tone even after saying you would try to scale it back. What Antoine said is not patently false and all he said was that relying on iter*() methods on dicts is typically a premature optimization for Python 2 code which is totally reasonable for him to say and was done so in a calm tone. He didn't say "you are prematurely optimizing and you need to stop telling the community that because you're wasting everyone's time in caring about performance!" which is how I would expect you to state it if you were to make the same claim based on how you have been reacting. For most use cases, you simply don't need a memory-efficient iterator. If you have a large dict where memory issues from constructing a list come into play, then yes you should use iterkeys(), but otherwise the overhead of temporarily constructing a list to hold all the keys is cheap since it's just a list of pointers at the C level. As for the changing of the default in Python 3, that's because we decided to make iterators the default everywhere. And that was mostly for consistency, not performance reasons. It was also for flexibility as you can go from an iterator to a list by just wrapping the iterator in list(), but you can't go the other way around. At no time did anyone go "we really need to change the default iterator for dicts to a memory-saving iterator because people are wasting memory and having issues with memory pressure all the time"; it was always about consistency and using the best idiom that had developed over the years. So Antoine's point is entirely reasonable and valid and right.
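Brett's cost argument can be made concrete. A rough Python 3 sketch (illustrative only; exact byte counts vary by platform and interpreter version) comparing a materialized key list against a plain iterator:

```python
import sys

d = {i: str(i) for i in range(100000)}

keys_list = list(d)   # what Python 2's d.keys() built: a real list
keys_iter = iter(d)   # what d.iterkeys() returned: a constant-size iterator

# The list holds one pointer per key (the keys themselves are not copied);
# the iterator is a few dozen bytes regardless of dict size.
print(sys.getsizeof(keys_list))   # hundreds of kilobytes for 100k keys
print(sys.getsizeof(keys_iter))   # a small constant

# For iteration itself, both spellings visit the same keys:
assert sum(1 for _ in keys_list) == sum(1 for _ in d) == 100000
```

The pointers-only detail is why the temporary list is usually cheap enough that it only matters for very large containers.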
-Brett > > -Mark > > On Tue, Dec 16, 2014 at 10:57 AM, Brett Cannon wrote: >> >> Mark, your tone is no longer constructive and is hurting your case in >> arguing for anything. Please take it down a notch. >> >> On Tue Dec 16 2014 at 1:48:59 PM Mark Roberts wrote: >> >>> On Tue, Dec 16, 2014 at 2:45 AM, Antoine Pitrou >>> wrote: >>>> >>>> Iterating accross a dictionary doesn't need compatibility shims. It's >>>> dead simple in all Python versions: >>>> >>>> $ python2 >>>> Python 2.7.8 (default, Oct 20 2014, 15:05:19) >>>> [GCC 4.9.1] on linux2 >>>> Type "help", "copyright", "credits" or "license" for more information. >>>> >>> d = {'a': 1} >>>> >>> for k in d: print(k) >>>> ... >>>> a >>>> >>>> $ python3 >>>> Python 3.4.2 (default, Oct 8 2014, 13:08:17) >>>> [GCC 4.9.1] on linux >>>> Type "help", "copyright", "credits" or "license" for more information. >>>> >>> d = {'a': 1} >>>> >>> for k in d: print(k) >>>> ... >>>> a >>>> >>>> Besides, using iteritems() and friends is generally a premature >>>> optimization, unless you know you'll have very large containers. >>>> Creating a list is cheap. >>>> >>> >>> It seems to me that every time I hear this, the author is basically >>> admitting that Python is a toy language not meant for "serious computing" >>> (where serious is defined in extremely modest terms). The advice is also >>> very contradictory to literally every talk on performant Python that I've >>> seen at PyCon or PyData or ... well, anywhere. And really, doesn't it >>> strike you as incredibly presumptuous to call the *DEFAULT BEHAVIOR* of >>> Python 3 a "premature optimization"? Isn't the whole reason that the >>> default behavior switch was made is because creating lists willy nilly all >>> over the place really *ISN'T* cheap? This isn't the first time someone has >>> tried to run this line past me, but it's the first time I've been fed up >>> enough with the topic to call it complete BS on the spot. 
Please help me >>> stop the community at large from saying this, because it really isn't true >>> at all. >>> >>> -Mark >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >>> brett%40python.org >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Dec 16 20:42:06 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 16 Dec 2014 20:42:06 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: <20141216204206.093b5b66@fsol> On Tue, 16 Dec 2014 19:25:35 +0000 Brett Cannon wrote: > > As for the changing of the default in Python 3, that's because we decided > to make iterators the default everywhere. And that was mostly for > consistency, not performance reasons. It was also for flexibility as you > can go from an iterator to a list by just wrapping the iterator in list(), > but you can't go the other way around. And two other reasons: - the API becomes simpler to use as there's no need to choose between .items() and .iteritems(), etc. - the 3.x methods don't return iterators but views, which have set-like features in addition to basic iterating Regards Antoine. 
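Antoine's second bullet, that the 3.x methods return views rather than iterators, is easy to demonstrate. An illustrative Python 3 session (added for reference, not part of the original mail):

```python
a = {"x": 1, "y": 2, "z": 3}
b = {"y": 20, "z": 30, "w": 40}

ks = a.keys()          # a view, not a list: it tracks later mutations
a["q"] = 4
print("q" in ks)       # True

# Views support set algebra directly, which lists never did:
print(sorted(a.keys() & b.keys()))   # ['y', 'z']
print(sorted(a.keys() - b.keys()))   # ['q', 'x']

# And unlike a one-shot iterator, a view can be iterated repeatedly:
print([k for k in ks] == [k for k in ks])   # True
```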
From marko at pacujo.net Tue Dec 16 20:58:36 2014 From: marko at pacujo.net (Marko Rauhamaa) Date: Tue, 16 Dec 2014 21:58:36 +0200 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: (Mark Roberts's message of "Tue, 16 Dec 2014 11:05:18 -0800") References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: <87tx0ve58z.fsf@elektro.pacujo.net> Mark Roberts : > it's outright insulting to be told my complaints about writing 2/3 > compatible code are invalid on the basis of "premature optimization". IMO, you should consider forking your library code for Python2 and Python3. The multidialect code will look unidiomatic for each dialect. When the critical mass favors Python3 (possibly within a couple of years), the transition will be as total and quick as from VHS to DVDs. At that point, a multidialect library would seem quaint, while a separate Python2 fork can simply be left behind (bug fixes only). Marko From skip.montanaro at gmail.com Tue Dec 16 21:15:53 2014 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Tue, 16 Dec 2014 14:15:53 -0600 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <87tx0ve58z.fsf@elektro.pacujo.net> References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> Message-ID: On Tue, Dec 16, 2014 at 1:58 PM, Marko Rauhamaa wrote: > > IMO, you should consider forking your library code for Python2 and > Python3. > I don't get the idea that Brett Cannon agrees with you: http://nothingbutsnark.svbtle.com/commentary-on-getting-your-code-to-run-on-python-23 While he doesn't explicitly say so, I got the distinct impression reading his recent blog post that he supports one source, not forked sources. In the absence of evidence to the contrary, I think of Brett as the most expert developer in the porting space. Skip -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brian at python.org Tue Dec 16 21:31:03 2014 From: brian at python.org (Brian Curtin) Date: Tue, 16 Dec 2014 14:31:03 -0600 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> Message-ID: On Tue, Dec 16, 2014 at 2:15 PM, Skip Montanaro wrote: > > On Tue, Dec 16, 2014 at 1:58 PM, Marko Rauhamaa wrote: >> >> IMO, you should consider forking your library code for Python2 and >> Python3. > > > I don't get the idea that Brett Cannon agrees with you: > > http://nothingbutsnark.svbtle.com/commentary-on-getting-your-code-to-run-on-python-23 > > While he doesn't explicitly say so, I got the distinct impression reading > his recent blog post that he supports one source, not forked sources. > > In the absence of evidence to the contrary, I think of Brett as the most > expert developer in the porting space. I'm a few inches shorter than Brett, but having done several sizable ports, dual-source has never even been on the table. I would prefer the "run 2to3 at installation time" option before maintaining two versions (which I do not prefer at all in reality). From alexander.belopolsky at gmail.com Tue Dec 16 21:45:02 2014 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue, 16 Dec 2014 15:45:02 -0500 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <5490679C.9050308@gmto.org> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> Message-ID: On Tue, Dec 16, 2014 at 12:10 PM, matthieu bec wrote: > I wonder if the datetime module is really the right location, that > has constructor(year, month, day, ..., second, microsecond) - with 0 <= microsecond < 1000000, no millis. adding 0 <= nanosecond < 1000000000 seems ugly, in fact nothing looks quite right.
We can make nanosecond a keyword-only argument, so that time(1, 2, 3, nanosecond=123456789) -> 01:02:03.123456789 and time(1, 2, 3, 4, nanosecond=123456789) -> error Users will probably be encouraged to avoid positional form when specifying time to subsecond precision. I would say time(1, 2, 3, microsecond=4) is clearer than time(1, 2, 3, 4) anyways. Another option is to allow float for the "second" argument: time(1, 2, 3.123456789) -> 01:02:03.123456789 -------------- next part -------------- An HTML attachment was scrubbed... URL: From marko at pacujo.net Tue Dec 16 22:03:22 2014 From: marko at pacujo.net (Marko Rauhamaa) Date: Tue, 16 Dec 2014 23:03:22 +0200 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: (Brian Curtin's message of "Tue, 16 Dec 2014 14:31:03 -0600") References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> Message-ID: <87k31re291.fsf@elektro.pacujo.net> Brian Curtin : > I'm a few inches shorter than Brett, but having done several sizable > ports, dual-source has never even on the table. I would prefer the > "run 2to3 at installation time" option before maintaining two versions > (which I do not prefer at all in reality). How about "run 3to2 at installation time?" Marko From skip.montanaro at gmail.com Tue Dec 16 22:11:43 2014 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Tue, 16 Dec 2014 15:11:43 -0600 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <87k31re291.fsf@elektro.pacujo.net> References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> <87k31re291.fsf@elektro.pacujo.net> Message-ID: On Tue, Dec 16, 2014 at 3:03 PM, Marko Rauhamaa wrote: > > How about "run 3to2 at installation time?" In theory, yes, but that's not a fork either. Skip -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewb at perfumania.com Tue Dec 16 16:49:21 2014 From: matthewb at perfumania.com (Matthew Braun) Date: Tue, 16 Dec 2014 10:49:21 -0500 Subject: [Python-Dev] Python 3.4.2/ PyGame Registry Message-ID: Good Morning, I installed Python 3.4.2 on my work computer. I was looking at the book "Head First Programming" which references download PYGAME. I downloaded what I believe to be the correct version and it tells me that I don't see the installer. I look in the registry and there is no: *HKEY_CURRENT_USER\Software\Python\* Did I do something wrong? This is all new to me. Any help would be greatly appreciated. Thanks Matt [image: Inline image 1] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 30972 bytes Desc: not available URL: From ethan at stoneleaf.us Tue Dec 16 21:40:22 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 16 Dec 2014 12:40:22 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> Message-ID: <549098B6.8050209@stoneleaf.us> On 12/16/2014 11:25 AM, Brett Cannon wrote: > > What Antoine said is not patently false [...] What Antoine said was: > Unless you have a lot of network-facing code, writing 2/3 > compatible code should actually be quite pedestrian. Or, to paraphrase slightly, "if you aren't writing network code, and your 2/3 code is painful, you must be doing something wrong!" That may not be what he intended, but that is certainly how it felt. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From ethan at stoneleaf.us Tue Dec 16 22:52:17 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 16 Dec 2014 13:52:17 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> Message-ID: <5490A991.9070703@stoneleaf.us> On 12/16/2014 12:31 PM, Brian Curtin wrote: > On Tue, Dec 16, 2014 at 2:15 PM, Skip Montanaro wrote: >> On Tue, Dec 16, 2014 at 1:58 PM, Marko Rauhamaa wrote: >>> >>> IMO, you should consider forking your library code for Python2 and >>> Python3. >> >> I don't get the idea that Brett Cannon agrees with you: >> >> http://nothingbutsnark.svbtle.com/commentary-on-getting-your-code-to-run-on-python-23 >> >> While he doesn't explicitly say so, I got the distinct impression reading >> his recent blog post that he supports one source, not forked sources. >> >> In the absence to evidence to the contrary, I think of Brett as the most >> expert developer in the porting space. > > I'm a few inches shorter than Brett, but having done several sizable > ports, dual-source has never even on the table. I would prefer the > "run 2to3 at installation time" option before maintaining two versions > (which I do not prefer at all in reality). I have a handful of projects. The tiny ones are one-source, the biggest one (dbf) is not. If I had an entire application I would probably split the difference, and just have dual source on a single module to hold the classes/functions that absolutely-had-to-have-this-or-that-feature (exec (the statement) vs exec (the function) comes to mind). -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From guido at python.org Tue Dec 16 22:56:51 2014 From: guido at python.org (Guido van Rossum) Date: Tue, 16 Dec 2014 13:56:51 -0800 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <5490A991.9070703@stoneleaf.us> References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> <5490A991.9070703@stoneleaf.us> Message-ID: This thread hasn't been productive for a really long time now. On Tue, Dec 16, 2014 at 1:52 PM, Ethan Furman wrote: > > On 12/16/2014 12:31 PM, Brian Curtin wrote: > > On Tue, Dec 16, 2014 at 2:15 PM, Skip Montanaro wrote: > >> On Tue, Dec 16, 2014 at 1:58 PM, Marko Rauhamaa wrote: > >>> > >>> IMO, you should consider forking your library code for Python2 and > >>> Python3. > >> > >> I don't get the idea that Brett Cannon agrees with you: > >> > >> > http://nothingbutsnark.svbtle.com/commentary-on-getting-your-code-to-run-on-python-23 > >> > >> While he doesn't explicitly say so, I got the distinct impression > reading > >> his recent blog post that he supports one source, not forked sources. > >> > >> In the absence to evidence to the contrary, I think of Brett as the most > >> expert developer in the porting space. > > > > I'm a few inches shorter than Brett, but having done several sizable > > ports, dual-source has never even on the table. I would prefer the > > "run 2to3 at installation time" option before maintaining two versions > > (which I do not prefer at all in reality). > > I have a handful of projects. The tiny ones are one-source, the biggest > one (dbf) is not. > > If I had an entire application I would probably split the difference, and > just have dual source on a single module to > hold the classes/functions that > absolutely-had-to-have-this-or-that-feature (exec (the statement) vs exec > (the function) > comes to mind). 
> > -- > ~Ethan~ > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue Dec 16 09:09:27 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 16 Dec 2014 03:09:27 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> Message-ID: <20141216030927.65c1b051@marathon> On Dec 16, 2014, at 02:15 PM, Skip Montanaro wrote: >While he doesn't explicitly say so, I got the distinct impression reading >his recent blog post that he supports one source, not forked sources. I've ported a fair bit of code, both pure-Python and C extensions, both libraries and applications. For successful library ports to Python 3 that need to remain Python 2 compatible, I would almost always recommend a single source, common dialect, no-2to3 approach. There may be exceptions, but this strategy has proven effective over and over. I generally find I don't need `six` but it does provide some nice conveniences that can be helpful. With something like tox running your test suite, it doesn't even have to be painful to maintain. Cheers, -Barry From brett at python.org Wed Dec 17 00:03:15 2014 From: brett at python.org (Brett Cannon) Date: Tue, 16 Dec 2014 23:03:15 +0000 Subject: [Python-Dev] Python 3.4.2/ PyGame Registry References: Message-ID: This mailing list is for the development OF Python, not its use. You should be able to get help on the python-tutor or Python - list mailing lists. On Tue, Dec 16, 2014, 16:42 Matthew Braun wrote: > Good Morning, > I installed Python 3.4.2 on my work computer. 
I was looking at the book > "Head First Programming" which references download PYGAME. I downloaded > what I believe to be the correct version and it tells me that I don't see > the installer. I look in the registry and there is no: > > *HKEY_CURRENT_USER\Software\Python\* > > Did I do something wrong? This is all new to me. Any help would be greatly > appreciated. > Thanks Matt > > [image: Inline image 1] > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 30972 bytes Desc: not available URL: From mbec at gmto.org Wed Dec 17 00:28:32 2014 From: mbec at gmto.org (Matthieu Bec) Date: Tue, 16 Dec 2014 15:28:32 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> Message-ID: <5490C020.6030405@gmto.org> Maybe what I meant with `nothing looks quite right': seconds as float, microseconds as float, nanosecond as 0..999, nanoseconds as 0..999999999 with mandatory keyword that precludes microseconds - all can be made to work, none seems completely satisfying. In fact, I don't really have a use for it from python - but something would be needed in C for the implementation of datetime.from_timespec and time.from_timespec that calls the constructor PyObjectCall_CallFunction(clas,"...",...) - can this happen and remain hidden from the python layer? 
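The plumbing Matthieu asks about can be prototyped at the Python level first. A sketch of a from_timespec-style constructor; from_timespec and the truncation policy shown here are proposals under discussion, not existing datetime API (and the C function referenced above is spelled PyObject_CallFunction):

```python
from datetime import datetime, timedelta, timezone

def datetime_from_timespec(tv_sec, tv_nsec, tz=timezone.utc):
    """Sketch of the proposed datetime.from_timespec(struct timespec).

    datetime only stores microseconds, so the sub-microsecond part is
    returned separately rather than silently discarded.
    """
    if not 0 <= tv_nsec < 1000000000:
        raise ValueError("tv_nsec out of range")
    base = datetime.fromtimestamp(tv_sec, tz)
    return base + timedelta(microseconds=tv_nsec // 1000), tv_nsec % 1000

dt, leftover_ns = datetime_from_timespec(1418777491, 123456789)
print(dt.microsecond)   # 123456
print(leftover_ns)      # 789
```

A real implementation would instead carry the nanoseconds on the object, for example via the keyword-only nanosecond argument floated earlier in the thread.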
Regards, Matthieu On 12/16/14 12:45 PM, Alexander Belopolsky wrote: > > On Tue, Dec 16, 2014 at 12:10 PM, matthieu bec > wrote: > > I wonder if the datetime module is really the right location, that > has constructor(year, month, day, ..., second, microsecond) - with > 0 <= microsecond < 1000000, no millis. adding 0 <= nanosecond < 1000000000 seems ugly, in > fact nothing looks quite right. > > We can make nanosecond a keyword-only argument, so that > > time(1, 2, 3, nanosecond=123456789) -> 01:02:03.123456789 > > and > > time(1, 2, 3, 4, nanosecond=123456789) -> error > > Users will probably be encouraged to avoid positional form when > specifying time to subsecond precision. I would say time(1, 2, 3, > microsecond=4) is clearer than time(1, 2, 3, 4) anyways. > > Another option is to allow float for the "second" argument: > > time(1, 2, 3.123456789) -> 01:02:03.123456789 > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/mdcb808%40gmail.com > -- Matthieu Bec GMTO Corp cell : +1 626 425 7923 251 S Lake Ave, Suite 300 phone: +1 626 204 0527 Pasadena, CA 91101 From mbec at gmto.org Wed Dec 17 00:31:20 2014 From: mbec at gmto.org (Matthieu Bec) Date: Tue, 16 Dec 2014 15:31:20 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?)
> > In fact, I don't really have a use for it from python - but something > would be needed in C for the implementation of datetime.from_timespec > and time.from_timespec that calls the constructor that's the datetime.time.from_timespec btw. > PyObjectCall_CallFunction(clas,"...",...) - can this happen and remain > hidden from the python layer? > > Regards, > Matthieu > > > > On 12/16/14 12:45 PM, Alexander Belopolsky wrote: >> >> On Tue, Dec 16, 2014 at 12:10 PM, matthieu bec > > wrote: >> >> I wonder if the datetime module is really the right location, that >> has constructor(year, month, day, ..., second, microsecond) - with >> 0> fact nothing looks quite right. >> >> >> We can make nanosecond a keyword-only argument, so that >> >> time(1, 2, 3, nanosecond=123456789) -> 01:02:03.123456789 >> >> and >> >> time(1, 2, 3, 4, nanosecond=123456789) -> error >> >> Users will probably be encouraged to avoid positional form when >> specifying time to subsecond precision. I would say time(1, 2, 3, >> microsecond=4) is clearer than time(1, 2, 3, 4) anyways. >> >> Another option is to allow float for the "second" argument: >> >> time(1, 2, 3.123456789) -> 01:02:03.123456789 >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/mdcb808%40gmail.com >> > -- Matthieu Bec GMTO Corp cell : +1 626 425 7923 251 S Lake Ave, Suite 300 phone: +1 626 204 0527 Pasadena, CA 91101 From guido at python.org Wed Dec 17 01:17:20 2014 From: guido at python.org (Guido van Rossum) Date: Tue, 16 Dec 2014 16:17:20 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) 
In-Reply-To: <5490C020.6030405@gmto.org> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> <5490C020.6030405@gmto.org> Message-ID: "Nothing looks quite right" is a common phenomenon when you're constrained by backward compatibility. The perfect solution would throw away compatibility, but that has its own downsides. So just go for what looks the least wrong. On Tue, Dec 16, 2014 at 3:28 PM, Matthieu Bec wrote: > > > Maybe what I meant with `nothing looks quite right': > seconds as float, microseconds as float, nanosecond as 0..999, nanoseconds > as 0..999999999 with mandatory keyword that precludes microseconds - all > can be made to work, none seems completely satisfying. > > In fact, I don't really have a use for it from python - but something > would be needed in C for the implementation of datetime.from_timespec and > time.from_timespec that calls the constructor PyObjectCall_CallFunction(clas,"...",...) > - can this happen and remain hidden from the python layer? > > Regards, > Matthieu > > > > On 12/16/14 12:45 PM, Alexander Belopolsky wrote: > >> >> On Tue, Dec 16, 2014 at 12:10 PM, matthieu bec > > wrote: >> >> I wonder if the datetime module is really the right location, that >> has constructor(year, month, day, ..., second, microsecond) - with >> 0 <= microsecond < 1000000, no millis. adding 0 <= nanosecond < 1000000000 seems ugly, in >> fact nothing looks quite right. >> >> >> We can make nanosecond a keyword-only argument, so that >> >> time(1, 2, 3, nanosecond=123456789) -> 01:02:03.123456789 >> >> and >> >> time(1, 2, 3, 4, nanosecond=123456789) -> error >> >> Users will probably be encouraged to avoid positional form when >> specifying time to subsecond precision. I would say time(1, 2, 3, >> microsecond=4) is clearer than time(1, 2, 3, 4) anyways.
>> >> Another option is to allow float for the "second" argument: >> >> time(1, 2, 3.123456789) -> 01:02:03.123456789 >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> mdcb808%40gmail.com >> >> > -- > Matthieu Bec GMTO Corp > cell : +1 626 425 7923 251 S Lake Ave, Suite 300 > phone: +1 626 204 0527 Pasadena, CA 91101 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbec at gmto.org Wed Dec 17 01:36:05 2014 From: mbec at gmto.org (Matthieu Bec) Date: Tue, 16 Dec 2014 16:36:05 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> <5490C020.6030405@gmto.org> Message-ID: <5490CFF5.5060909@gmto.org> python-n (for next) - just poking fun at the other thread On 12/16/14 4:17 PM, Guido van Rossum wrote: > "Nothing looks quite right" is a common phenomenon when you're > constrained by backward compatibility. The perfect solution would throw > away compatibility, but that has its own downsides. So just go for what > looks the least wrong. 
> > On Tue, Dec 16, 2014 at 3:28 PM, Matthieu Bec > wrote: > > > Maybe what I meant with `nothing looks quite right': > seconds as float, microseconds as float, nanosecond as 0..999, > nanoseconds as 0..999999999 with mandatory keyword that precludes > microseconds - all can be made to work, none seems completely > satisfying. > > In fact, I don't really have a use for it from python - but > something would be needed in C for the implementation of > datetime.from_timespec and time.from_timespec that calls the > constructor PyObjectCall_CallFunction(__clas,"...",...) - can this > happen and remain hidden from the python layer? > > Regards, > Matthieu > > > > On 12/16/14 12:45 PM, Alexander Belopolsky wrote: > > > On Tue, Dec 16, 2014 at 12:10 PM, matthieu bec > >> wrote: > > I wonder if the datetime module is really the right > location, that > has constructor(year, month, day, ..., second, microsecond) > - with > 0 ugly, in > fact nothing looks quite right. > > > We can make nanosecond a keyword-only argument, so that > > time(1, 2, 3, nanosecond=123456789) -> 01:02:03.123456789 > > and > > time(1, 2, 3, 4, nanosecond=123456789) -> error > > Users will probably be encouraged to avoid positional form when > specifying time to subsecond precision. I would say time(1, 2, 3, > microsecond=4) is clearer than time(1, 2, 3, 4) anyways. 
> > Another option is to allow float for the "second" argument: > > time(1, 2, 3.123456789 ) -> 01:02:03.123456789 > > > > _________________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/__mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/__mailman/options/python-dev/__mdcb808%40gmail.com > > > > -- > Matthieu Bec GMTO Corp > cell : +1 626 425 7923 251 S Lake > Ave, Suite 300 > phone: +1 626 204 0527 Pasadena, > CA 91101 > _________________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/__mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/__mailman/options/python-dev/__guido%40python.org > > > > > -- > --Guido van Rossum (python.org/~guido ) > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/mdcb808%40gmail.com > -- Matthieu Bec GMTO Corp cell : +1 626 425 7923 251 S Lake Ave, Suite 300 phone: +1 626 204 0527 Pasadena, CA 91101 From mbec at gmto.org Wed Dec 17 01:41:41 2014 From: mbec at gmto.org (Matthieu Bec) Date: Tue, 16 Dec 2014 16:41:41 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) 
In-Reply-To: <5490C0C8.3080409@gmto.org> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> <5490C020.6030405@gmto.org> <5490C0C8.3080409@gmto.org> Message-ID: <5490D145.5050409@gmto.org> On 12/16/14 3:31 PM, Matthieu Bec wrote: > > > On 12/16/14 3:28 PM, Matthieu Bec wrote: >> >> Maybe what I meant with `nothing looks quite right': >> seconds as float, microseconds as float, nanosecond as 0..999, >> nanoseconds as 0..999999999 with mandatory keyword that precludes >> microseconds - all can be made to work, none seems completely satisfying. >> >> In fact, I don't really have a use for it from python - but something >> would be needed in C for the implementation of datetime.from_timespec >> and time.from_timespec that calls the constructor > > that's the datetime.time.from_timespec btw. datetime.time.from_timespec actually makes no sense. >> PyObject_CallFunction(cls,"...",...) - can this happen and remain >> hidden from the python layer? ... it occurred to me I might simply create a us datetime object and set its nanofield afterwards. I'll try to wrap up a recap proposal later. Regards, Matthieu From chrism at plope.com Wed Dec 17 01:45:07 2014 From: chrism at plope.com (Chris McDonough) Date: Tue, 16 Dec 2014 19:45:07 -0500 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <20141216030927.65c1b051@marathon> References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> <20141216030927.65c1b051@marathon> Message-ID: <5490D213.70101@plope.com> On 12/16/2014 03:09 AM, Barry Warsaw wrote: > On Dec 16, 2014, at 02:15 PM, Skip Montanaro wrote: > >> While he doesn't explicitly say so, I got the distinct impression reading >> his recent blog post that he supports one source, not forked sources.

> > I've ported a fair bit of code, both pure-Python and C extensions, both > libraries and applications. For successful library ports to Python 3 that > need to remain Python 2 compatible, I would almost always recommend a single > source, common dialect, no-2to3 approach. There may be exceptions, but this > strategy has proven effective over and over. I generally find I don't need > `six` but it does provide some nice conveniences that can be helpful. With > something like tox running your test suite, it doesn't even have to be painful > to maintain. I'll agree; with tox and some automated CI system like travis or jenkins or whatever, once you've done the port, it's only a minor nuisance to maintain a straddled 2/3 codebase. Programming in only the subset still isn't much fun, but maintenance is slightly easier than I expected it to be. "Drive by" contributions become slightly harder to accept because they often break 3 compatibility, and contributors are often unable or unwilling to install all the required versions that are tested by tox. - C From ncoghlan at gmail.com Wed Dec 17 07:52:39 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Dec 2014 16:52:39 +1000 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <5490D213.70101@plope.com> References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> <20141216030927.65c1b051@marathon> <5490D213.70101@plope.com> Message-ID: On 17 December 2014 at 10:45, Chris McDonough wrote: > On 12/16/2014 03:09 AM, Barry Warsaw wrote: >> >> On Dec 16, 2014, at 02:15 PM, Skip Montanaro wrote: >> >>> While he doesn't explicitly say so, I got the distinct impression reading >>> his recent blog post that he supports one source, not forked sources. >> >> >> I've ported a fair bit of code, both pure-Python and C extensions, both >> libraries and applications. 
For successful library ports to Python 3 that >> need to remain Python 2 compatible, I would almost always recommend a >> single >> source, common dialect, no-2to3 approach. There may be exceptions, but >> this >> strategy has proven effective over and over. I generally find I don't >> need >> `six` but it does provide some nice conveniences that can be helpful. >> With >> something like tox running your test suite, it doesn't even have to be >> painful >> to maintain. > > > I'll agree; with tox and some automated CI system like travis or jenkins or > whatever, once you've done the port, it's only a minor nuisance to maintain > a straddled 2/3 codebase. Programming in only the subset still isn't much > fun, but maintenance is slightly easier than I expected it to be. "Drive > by" contributions become slightly harder to accept because they often break > 3 compatibility, and contributors are often unable or unwilling to install > all the required versions that are tested by tox. It's worth noting that the last problem can potentially be mitigated to some degree by taking advantage of the new "pylint --py3k" feature making it easier to check that code is 2/3 source compatible without needing a local copy of Python 3 to test against, and without needing to adhere to pylint's other checks. As far as Marko's suggestion of maintaining two code bases go, that's what we do for the standard library, and we've *never* advised anyone else to do the same. Even before experience showed the source compatible approach was more practical, the original advice to third party developers was to use 2to3 to automatically derive the Python 3 version from the Python 2 version and address any compatibility issues by modifying the Python 2 sources. Cheers, Nick. 
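The tox-based workflow Barry and Chris describe above needs very little configuration; a hypothetical minimal `tox.ini` for a single-source 2/3 library might look like this (the environment names and the pytest runner are assumptions, not taken from the thread):

```ini
[tox]
envlist = py27, py34

[testenv]
deps = pytest
commands = pytest tests/
```

Running `tox` then executes the same test suite under every listed interpreter, which is what makes the "one source, common dialect" approach cheap to maintain.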
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From hrvoje.niksic at avl.com Wed Dec 17 10:33:52 2014 From: hrvoje.niksic at avl.com (Hrvoje Niksic) Date: Wed, 17 Dec 2014 10:33:52 +0100 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: <20141216191821.0D188250ED0@webabinitio.net> References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <20141216191821.0D188250ED0@webabinitio.net> Message-ID: <54914E00.8090205@avl.com> On 12/16/2014 08:18 PM, R. David Murray wrote: > On Tue, 16 Dec 2014 10:48:07 -0800, Mark Roberts wrote: >> > Besides, using iteritems() and friends is generally a premature >> > optimization, unless you know you'll have very large containers. >> > Creating a list is cheap. [...] > No. A premature optimization is one that is made before doing any > performance analysis, so language features are irrelevant to that > labeling. This doesn't mean you shouldn't use "better" idioms when they > are clear. This is a relevant point. I would make it even stronger: using iteritems() is not a premature optimization, it is a statement of intent. More importantly, using items() in iteration is a statement of expectation that the dict will change during iteration. If this is not in fact the case, then items() is the wrong idiom for reasons of readability, not (just) efficiency. From techtonik at gmail.com Wed Dec 17 06:53:10 2014 From: techtonik at gmail.com (anatoly techtonik) Date: Wed, 17 Dec 2014 08:53:10 +0300 Subject: [Python-Dev] Python 2.x and 3.x use survey, 2014 edition In-Reply-To: References: <20141213045525.GG20332@ando.pearwood.info> <20141216114527.1b6ff6a6@fsol> <87tx0ve58z.fsf@elektro.pacujo.net> <5490A991.9070703@stoneleaf.us> Message-ID: On Wed, Dec 17, 2014 at 12:56 AM, Guido van Rossum wrote: > This thread hasn't been productive for a really long time now. I agree. The constructive way would be to concentrate on looking for causes. 
I don't know if there is a discipline of "programming language usability" in computer science, but now is a good moment to apply it. From tschijnmotschau at gmail.com Wed Dec 17 08:37:09 2014 From: tschijnmotschau at gmail.com (Tschijnmo Tschau) Date: Wed, 17 Dec 2014 01:37:09 -0600 Subject: [Python-Dev] A metaclass for immutability Message-ID: Hi all, Recently, while writing a computer algebra system for a very special purpose, I found that being able to make objects of user-defined classes immutable can be very nice. It would greatly enhance the safety of the code. For example, in the code I was writing, objects hold many references to other objects of user-defined classes. If other parts of the code mutate the referenced objects, quite unexpected things can happen. As a result, an initial tentative implementation of a metaclass for making objects of user-defined classes immutable has been written and put into a GitHub repository: https://github.com/tschijnmo/immutableclass. Since I am not a Python expert yet, could you please help me with a few questions: 1. Is such a metaclass Pythonic? Is it considered good practice to use such a metaclass in code that needs frequent maintenance? 2. Is this metaclass of interest to other Python developers as well? I mean, is it worthwhile to try to put this, or something like it, into the standard Python library? 3. If the answers to the above questions are affirmative, is my current implementation Pythonic? In particular, would it be better implemented as a class decorator rather than a metaclass? Most of the code should be quite straightforward. It is modeled on namedtuple in the collections library. For the initialization, basically what I did is make a mutable proxy class for every immutable class.
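(A minimal sketch of the general idea, freezing instances once construction finishes; this is an editorial illustration, not the proxy-based implementation from the linked repository:)

```python
class ImmutableMeta(type):
    """Hypothetical: mark instances frozen after __init__ completes."""
    def __call__(cls, *args, **kwargs):
        obj = super().__call__(*args, **kwargs)
        # Bypass __setattr__ to plant the freeze flag itself.
        object.__setattr__(obj, "_frozen", True)
        return obj

class Immutable(metaclass=ImmutableMeta):
    def __setattr__(self, name, value):
        if getattr(self, "_frozen", False):
            raise AttributeError(f"{type(self).__name__} is immutable")
        object.__setattr__(self, name, value)

class Point(Immutable):
    def __init__(self, x, y):
        self.x = x   # allowed: instance not yet frozen
        self.y = y

p = Point(1, 2)
assert (p.x, p.y) == (1, 2)
```

After construction, any rebinding such as `p.x = 5` raises AttributeError.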
This proxy class attempts to carry as much of the behaviour of the immutable class as possible, except that it is mutable. The initializer defined by the user is in fact called with self being an instance of the proxy class, and then the actual immutable object is built out of it. This is my first time posting to this list. Any feedback is greatly appreciated. Thank you! Regards, Jinmo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Dec 17 21:57:27 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Dec 2014 06:57:27 +1000 Subject: [Python-Dev] Proposal: Update PEP 1 to allow an explicit "Provisional" status for PEPs Message-ID: Hi folks, The recent release of setuptools 8.0 brought with it the migration to the more explicit version handling semantics defined in PEP 440. Some of the feedback on that release showed us that we could really use the equivalent of PEP 411 for interoperability PEPs as well as for standard library modules: a way to say "this is well defined enough for us to publish a reference implementation in the default packaging tools, but needs additional user feedback before we consider it completely stable". The reasons for this are mostly pragmatic: the kinds of tweaks we're talking about are small (in this specific case, changing the normalised form when publishing release candidates from 'c' to 'rc', when installation tools are already required to accept either spelling as valid), but updating hyperlinks, other documentation references, etc means that spinning a full PEP revision just for that change would be excessively expensive in contributor time and energy. So over on distutils-sig, we're currently considering PEP 440 provisional until we're happy with the feedback we're receiving on setuptools 8.x and the upcoming pip 6.0 release.
However, I'd be happier if we could communicate that status more explicitly through the PEP process, especially as I think such a capability would be useful more generally as we move towards implementing metadata 2.0 and potentially other enhancements for pip 7+ next year. If folks are OK with this idea, I'll go ahead and make the appropriate changes to PEP 1 and the PEP index generator. I'm also happy to file a tracker issue, or write a short PEP, if folks feel making such a change requires a little more formality in its own right. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ethan at stoneleaf.us Wed Dec 17 22:13:57 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 17 Dec 2014 13:13:57 -0800 Subject: [Python-Dev] Proposal: Update PEP 1 to allow an explicit "Provisional" status for PEPs In-Reply-To: References: Message-ID: <5491F215.3040903@stoneleaf.us> On 12/17/2014 12:57 PM, Nick Coghlan wrote: > > If folks are OK with this idea, I'll go ahead and make the appropriate > changes to PEP 1 and the PEP index generator. I'm also happy to file a > tracker issue, or write a short PEP, if folks feel making such a > change requires a little more formality in its own right. We have provisional for modules, it would seem to also make sense for PEPs. A tracker issue would be good. -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From barry at python.org Wed Dec 17 23:10:27 2014 From: barry at python.org (Barry Warsaw) Date: Wed, 17 Dec 2014 17:10:27 -0500 Subject: [Python-Dev] Proposal: Update PEP 1 to allow an explicit "Provisional" status for PEPs In-Reply-To: References: Message-ID: <20141217171027.1b4fa4cb@marathon> On Dec 18, 2014, at 06:57 AM, Nick Coghlan wrote: >However, I'd be happier if we could communicate that status more >explicitly through the PEP process, especially as I think such a >capability would be useful more generally as we move towards >implementing metadata 2.0 and potentially other enhancements for pip >7+ next year. > >If folks are OK with this idea, I'll go ahead and make the appropriate >changes to PEP 1 and the PEP index generator. I'm also happy to file a >tracker issue, or write a short PEP, if folks feel making such a >change requires a little more formality in its own right. Hi Nick. What specific changes do you propose to PEP 1 and/or the PEP process? FWIW, if they are fairly simple, then I think a tracker issue with at least the PEP 1 authors nosied would be fine. Cheers, -Barry From ncoghlan at gmail.com Thu Dec 18 01:32:56 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Dec 2014 10:32:56 +1000 Subject: [Python-Dev] Proposal: Update PEP 1 to allow an explicit "Provisional" status for PEPs In-Reply-To: <20141217171027.1b4fa4cb@marathon> References: <20141217171027.1b4fa4cb@marathon> Message-ID: On 18 December 2014 at 08:10, Barry Warsaw wrote: > On Dec 18, 2014, at 06:57 AM, Nick Coghlan wrote: > >>However, I'd be happier if we could communicate that status more >>explicitly through the PEP process, especially as I think such a >>capability would be useful more generally as we move towards >>implementing metadata 2.0 and potentially other enhancements for pip >>7+ next year. 
>> >>If folks are OK with this idea, I'll go ahead and make the appropriate >>changes to PEP 1 and the PEP index generator. I'm also happy to file a >>tracker issue, or write a short PEP, if folks feel making such a >>change requires a little more formality in its own right. > > Hi Nick. What specific changes do you propose to PEP 1 and/or the PEP > process? FWIW, if they are fairly simple, then I think a tracker issue with > at least the PEP 1 authors nosied would be fine. Yeah, good point - I'll want a tracker issue regardless to host the Rietveld review. Filed at http://bugs.python.org/issue23077 My current thinking is that for future PEPs relying on PEP 411 to include a provisional API directly in the standard library, the Provisional state would effectively replace the Accepted state: Draft -> Provisional (with PEP 411 disclaimer on the implementation) -> Final (PEP 411 disclaimer removed) For interoperability standards track PEPs, I'd propose tweaking their flow to allow the use of the "Active" state, and stop using Accepted/Final entirely: Draft -> Provisional -> Active (-> Superseded) However, looking at that, I'm starting to wonder if the PEPs like WSGI, the database API, the crypto API, and the packaging PEPs should be pulled out into a new PEP category (e.g. "Standards Track (Interoperability)") to reflect the fact that they're defining a protocol, not just a particular standard library API. At the moment, we have an odd split where many of those are listed under "Other Informational PEPs" (together with things like the instructions for doing releases), while the packaging interoperability PEPs are Standards Track PEPs currently listed under "Accepted PEPs". I think the next step would be for me to come up with a draft patch, and then if we think it needs a PEP for broader review (which now seems likely to me), we can decide that on the tracker issue. Cheers, Nick. P.S.
You'd think I'd have learned my lesson by now when it comes to pulling on the thread that is PEP 1, but apparently not :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mdcb808 at gmail.com Thu Dec 18 03:52:10 2014 From: mdcb808 at gmail.com (Matthieu Bec) Date: Wed, 17 Dec 2014 18:52:10 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: <5490D145.5050409@gmto.org> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> <5490C020.6030405@gmto.org> <5490C0C8.3080409@gmto.org> <5490D145.5050409@gmto.org> Message-ID: <5492415A.5010103@gmail.com> Attached patch defines a new type struct_timespec for the time module. A new capsule exports the type along with to/from converters - opening a bridge for C, and for example the datetime module. Your comments welcomed. If people feel this is worth the effort and going the right direction, I should be able to finish doco, unit-tests, whatever else is missing with a bit of guidance and move on other datetime aspects. Regards, Matthieu -------------- next part -------------- A non-text attachment was scrubbed... Name: time.struct_timespec.patch Type: text/x-patch Size: 5642 bytes Desc: not available URL: From ericsnowcurrently at gmail.com Thu Dec 18 05:20:37 2014 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 17 Dec 2014 21:20:37 -0700 Subject: [Python-Dev] datetime nanosecond support (ctd?) 
In-Reply-To: <5492415A.5010103@gmail.com> References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> <5490C020.6030405@gmto.org> <5490C0C8.3080409@gmto.org> <5490D145.5050409@gmto.org> <5492415A.5010103@gmail.com> Message-ID: On Wed, Dec 17, 2014 at 7:52 PM, Matthieu Bec wrote: > > > Attached patch defines a new type struct_timespec for the time module. A new > capsule exports the type along with to/from converters - opening a bridge > for C, and for example the datetime module. I'd recommend opening a new issue in the bug tracker (bugs.python.org) and attach the patch there. Attaching it to an email is a good way for it to get lost and forgotten. :) -eric From raysanchez1979 at gmail.com Thu Dec 18 06:57:07 2014 From: raysanchez1979 at gmail.com (Raymond Sanchez) Date: Wed, 17 Dec 2014 23:57:07 -0600 Subject: [Python-Dev] fixing broken link in pep 3 Message-ID: Hello my name is Raymond and I would like to fix a broken link on pep 3. If you go to https://www.python.org/dev/peps/pep-0003/ and click on link http://www.python.org/dev/workflow/, it returns a 404. What is the correct url? Should we also update the description "It has been replaced by the Issue Workflow"? After I'll get the correct answers, I will submit a patch. Thanks for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Thu Dec 18 09:09:29 2014 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Dec 2014 09:09:29 +0100 Subject: [Python-Dev] fixing broken link in pep 3 In-Reply-To: References: Message-ID: Hi, Yes, the link is dead. 
It looks like the following link contains the same info: https://docs.python.org/devguide/triaging.html Dead page: https://web.archive.org/web/20090704040931/http://www.python.org/dev/workflow/ "Core Development > Issue Workflow" Victor 2014-12-18 6:57 GMT+01:00 Raymond Sanchez : > Hello my name is Raymond and I would like to fix a broken link on pep 3. If > you go to > https://www.python.org/dev/peps/pep-0003/ and click on link > http://www.python.org/dev/workflow/, it returns a 404. > > What is the correct url? Should we also update the description "It has been > replaced by the Issue Workflow"? > > After I'll get the correct answers, I will submit a patch. > > > Thanks for your help. > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com > From facundobatista at gmail.com Thu Dec 18 16:59:23 2014 From: facundobatista at gmail.com (Facundo Batista) Date: Thu, 18 Dec 2014 12:59:23 -0300 Subject: [Python-Dev] Redirection of ar.pycon.org Message-ID: Hi! Don't remember where to ask for changing the redirection of that domain name. Somebody knows? I need for the redirection to be to pycon.python.org.ar (and we'll take care of proper year-by-year redirection from there). Thanks! -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ Twitter: @facundobatista From fijall at gmail.com Thu Dec 18 20:13:21 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 18 Dec 2014 21:13:21 +0200 Subject: [Python-Dev] libffi embedded in CPython Message-ID: After reading this http://bugs.python.org/issue23085 and remembering struggling having our own patches into cpython's libffi (but not into libffi itself), I wonder, is there any reason any more for libffi being included in CPython? 
Cheers, fijal From Steve.Dower at microsoft.com Thu Dec 18 20:17:04 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Thu, 18 Dec 2014 19:17:04 +0000 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: References: Message-ID: Maciej Fijalkowski wrote: > After reading this http://bugs.python.org/issue23085 and remembering struggling > having our own patches into cpython's libffi (but not into libffi itself), I > wonder, is there any reason any more for libffi being included in CPython? We use it for ctypes, so there's certainly still a need. Are you asking whether we need a fork of it as opposed to treating it like an external (like OpenSSL)? > Cheers, > fijal From fijall at gmail.com Thu Dec 18 20:27:05 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 18 Dec 2014 21:27:05 +0200 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: References: Message-ID: On Thu, Dec 18, 2014 at 9:17 PM, Steve Dower wrote: > Maciej Fijalkowski wrote: >> After reading this http://bugs.python.org/issue23085 and remembering struggling >> having our own patches into cpython's libffi (but not into libffi itself), I >> wonder, is there any reason any more for libffi being included in CPython? > > We use it for ctypes, so there's certainly still a need. Are you asking whether we need a fork of it as opposed to treating it like an external (like OpenSSL)? yes (why is there a copy of libffi in the cpython source). 
And I'm asking not why it landed there, but why it is still there From benjamin at python.org Thu Dec 18 20:30:37 2014 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 18 Dec 2014 14:30:37 -0500 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: References: Message-ID: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> On Thu, Dec 18, 2014, at 14:13, Maciej Fijalkowski wrote: > After reading this http://bugs.python.org/issue23085 and remembering > struggling having our own patches into cpython's libffi (but not into > libffi itself), I wonder, is there any reason any more for libffi > being included in CPython? It has some sort of Windows related patches. No one seems to know whether they're still needed for newer libffi. Unfortunately, ctypes doesn't currently have a maintainer. From fijall at gmail.com Thu Dec 18 20:50:58 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 18 Dec 2014 21:50:58 +0200 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> References: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> Message-ID: well, the problem is essentially that libffi gets patched (e.g. for ARM) and it does not make it's way to CPython quickly. This is unlikely to be a security issue (for a variety of reasons, including ctypes), but it's still an issue I think. Segfaults related to e.g. stack alignment are hard to debug On Thu, Dec 18, 2014 at 9:30 PM, Benjamin Peterson wrote: > > > On Thu, Dec 18, 2014, at 14:13, Maciej Fijalkowski wrote: >> After reading this http://bugs.python.org/issue23085 and remembering >> struggling having our own patches into cpython's libffi (but not into >> libffi itself), I wonder, is there any reason any more for libffi >> being included in CPython? > > It has some sort of Windows related patches. No one seems to know > whether they're still needed for newer libffi. 
Unfortunately, ctypes > doesn't currently have a maintainer. From benjamin at python.org Thu Dec 18 21:06:42 2014 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 18 Dec 2014 15:06:42 -0500 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: References: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> Message-ID: <1418933202.1866064.204575237.0693A186@webmail.messagingengine.com> On Thu, Dec 18, 2014, at 14:50, Maciej Fijalkowski wrote: > well, the problem is essentially that libffi gets patched (e.g. for > ARM) and it does not make it's way to CPython quickly. This is > unlikely to be a security issue (for a variety of reasons, including > ctypes), but it's still an issue I think. Segfaults related to e.g. > stack alignment are hard to debug Certainly it's a suboptimal situation, but resolving it requires someone to figure out whether we still need/want whatever patches are in there.
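For context on what the bundled libffi enables: ctypes uses it to construct foreign function calls at runtime. A minimal sketch of the kind of call at stake in this thread (assumes a Unix-like system where the C library can be located):

```python
import ctypes
import ctypes.util

# Loading libc and calling strlen() goes through libffi's call
# construction machinery under the hood.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

assert libc.strlen(b"python-dev") == 10
```

Any stack-alignment or ABI bug in the bundled libffi surfaces exactly in calls like this, which is why the patch-lag Maciej describes matters.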
From bp at benjamin-peterson.org Thu Dec 18 21:05:28 2014 From: bp at benjamin-peterson.org (Benjamin Peterson) Date: Thu, 18 Dec 2014 15:05:28 -0500 Subject: [Python-Dev] Redirection of ar.pycon.org In-Reply-To: References: Message-ID: <1418933128.1865458.204572101.5A6432BA@webmail.messagingengine.com> On Thu, Dec 18, 2014, at 10:59, Facundo Batista wrote: > Hi! > > Don't remember where to ask for changing the redirection of that > domain name. Somebody knows? Seems DNS for that is controlled by eGenix, so ccing mal. (We should move pycon.org DNS to use the PSF's normal DNS infrastructure.) > > I need for the redirection to be to pycon.python.org.ar (and we'll > take care of proper year-by-year redirection from there). From jimjjewett at gmail.com Thu Dec 18 21:36:07 2014 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Thu, 18 Dec 2014 12:36:07 -0800 (PST) Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> Message-ID: <54933ab7.ca25e00a.053d.1130@mx.google.com> On Thu, Dec 18, 2014, at 14:13, Maciej Fijalkowski wrote: > ... http://bugs.python.org/issue23085 ... > is there any reason any more for libffi being included in CPython? [And why a fork, instead of just treating it as an external dependency] Benjamin Peterson responded: > It has some sort of Windows related patches. No one seems to know > whether they're still needed for newer libffi. Unfortunately, ctypes > doesn't currently have a maintainer. Are any of the following false? (1) Ideally, we would treat it as an external dependency. (2) At one point, it was intentionally forked to get in needed patches, including at least some for 64 bit windows with MSVC. (3) Upstream libffi maintenance has picked back up. (4) Alas, that means the switch merge would not be trivial. (5) In theory, we could now switch to the external version. 
[In particular, does libffi have a release policy such that we could assume the newest released version is "safe", so long as our integration doesn't break?] (6) By its very nature, libffi changes are risky and undertested. At the moment, that is also true of its primary user, ctypes. (7) So a switch is OK in theory, but someone has to do the non-trivial testing and merging, and agree to support both libffi and and ctypes in the future. Otherwise, stable wins. (8) The need for future support makes this a bad candidate for "patches wanted"/"bug bounty"/GSoC. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From mdcb808 at gmail.com Thu Dec 18 21:47:49 2014 From: mdcb808 at gmail.com (mdcb808) Date: Thu, 18 Dec 2014 12:47:49 -0800 Subject: [Python-Dev] datetime nanosecond support (ctd?) In-Reply-To: References: <5487E8DD.5010806@gmail.com> <548890EB.4070002@gmail.com> <87ppbqhivm.fsf@uwakimon.sk.tsukuba.ac.jp> <5489DB3E.7020005@gmail.com> <20141211202356.07462c01@fsol> <20141211204652.40a0c807@fsol> <5490679C.9050308@gmto.org> <5490C020.6030405@gmto.org> <5490C0C8.3080409@gmto.org> <5490D145.5050409@gmto.org> <5492415A.5010103@gmail.com> Message-ID: <54933D75.7020409@gmail.com> done - http://bugs.python.org/issue23084 On 12/17/14 8:20 PM, Eric Snow wrote: > On Wed, Dec 17, 2014 at 7:52 PM, Matthieu Bec wrote: >> >> >> Attached patch defines a new type struct_timespec for the time module. A new >> capsule exports the type along with to/from converters - opening a bridge >> for C, and for example the datetime module. > > I'd recommend opening a new issue in the bug tracker (bugs.python.org) > and attach the patch there. Attaching it to an email is a good way > for it to get lost and forgotten. 
:) > > -eric > From rosuav at gmail.com Thu Dec 18 22:19:24 2014 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 19 Dec 2014 08:19:24 +1100 Subject: [Python-Dev] [PEPs] Fwd: fixing broken link in pep 3 In-Reply-To: References: Message-ID: On Fri, Dec 19, 2014 at 5:39 AM, Guido van Rossum wrote: >---------- Forwarded message ---------- > From: Victor Stinner > > Hi, > > Yes, the link is dead. It looks like the following link contains the same > info: > https://docs.python.org/devguide/triaging.html > > Dead page: > https://web.archive.org/web/20090704040931/http://www.python.org/dev/workflow/ > "Core Development > Issue Workflow" > > Victor Edits made to PEP 3, link now updated. Noticed along the way that the next link down (for people _submitting_ bugs) is pointing to the /2/ section of the docs; should that be updated to send people to /3/, or are the two kept in sync? ChrisA From mal at python.org Fri Dec 19 00:10:13 2014 From: mal at python.org (M.-A. Lemburg) Date: Fri, 19 Dec 2014 00:10:13 +0100 Subject: [Python-Dev] Redirection of ar.pycon.org In-Reply-To: <1418933128.1865458.204572101.5A6432BA@webmail.messagingengine.com> References: <1418933128.1865458.204572101.5A6432BA@webmail.messagingengine.com> Message-ID: <54935ED5.9060808@python.org> Hi Facundo, you should either write to webmaster at pycon.org, the conference ML or me directly, since I'm managing the pycon.org subdomains. > On Thu, Dec 18, 2014, at 10:59, Facundo Batista wrote: >> Hi! >> >> Don't remember where to ask for changing the redirection of that >> domain name. Somebody knows? >> >> I need for the redirection to be to pycon.python.org.ar (and we'll >> take care of proper year-by-year redirection from there).
-- Marc-Andre Lemburg Director Python Software Foundation http://www.python.org/psf/ From tjreedy at udel.edu Fri Dec 19 02:24:36 2014 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 18 Dec 2014 20:24:36 -0500 Subject: [Python-Dev] [PEPs] Fwd: fixing broken link in pep 3 In-Reply-To: References: Message-ID: On 12/18/2014 4:19 PM, Chris Angelico wrote: > On Fri, Dec 19, 2014 at 5:39 AM, Guido van Rossum wrote: >> ---------- Forwarded message ---------- >> From: Victor Stinner >> >> Hi, >> >> Yes, the link is dead. It looks like the following link contains the same >> info: >> https://docs.python.org/devguide/triaging.html >> >> Dead page: >> https://web.archive.org/web/20090704040931/http://www.python.org/dev/workflow/ >> "Core Development > Issue Workflow" >> >> Victor > > Edits made to PEP 3, link now updated. PEP 3 is listed in PEP 0 under Abandoned, Withdrawn, and Rejected PEPs If this is proper, it does not make sense to update it. If this is not, the header should be updated. > Noticed along the way that the > next link down (for people _submitting_ bugs) is pointing to the /2/ > section of the docs; should that be updated to send people to /3/, or > are the two kept in sync? The actual link in the doc is http://docs.python.org/bugs.html. The site redirects that to http://docs.python.org/2/bugs.html. To me, the redirection should be to http://docs.python.org/3/bugs.html, regardless of the PEP 3 status. -- Terry Jan Reedy From rosuav at gmail.com Fri Dec 19 02:32:08 2014 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 19 Dec 2014 12:32:08 +1100 Subject: [Python-Dev] [PEPs] Fwd: fixing broken link in pep 3 In-Reply-To: References: Message-ID: On Fri, Dec 19, 2014 at 12:24 PM, Terry Reedy wrote: > PEP 3 is listed in PEP 0 under Abandoned, Withdrawn, and Rejected PEPs > If this is proper, it does not make sense to update it. > If this is not, the header should be updated. 
Guido passed the request on to the pep-editors list, which I took to mean that this should be updated. PEP 3 has been replaced with info in the dev guide, and the link in question is to the exact page of that dev guide which replaces it. ChrisA From benjamin at python.org Fri Dec 19 05:05:34 2014 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 18 Dec 2014 23:05:34 -0500 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: <54933ab7.ca25e00a.053d.1130@mx.google.com> References: <54933ab7.ca25e00a.053d.1130@mx.google.com> Message-ID: <1418961934.493008.204708129.5C26DF14@webmail.messagingengine.com> On Thu, Dec 18, 2014, at 15:36, Jim J. Jewett wrote: > > > On Thu, Dec 18, 2014, at 14:13, Maciej Fijalkowski wrote: > > ... http://bugs.python.org/issue23085 ... > > is there any reason any more for libffi being included in CPython? > > [And why a fork, instead of just treating it as an external dependency] > > Benjamin Peterson responded: > > > It has some sort of Windows related patches. No one seems to know > > whether they're still needed for newer libffi. Unfortunately, ctypes > > doesn't currently have a maintainer. > > Are any of the following false? > > (1) Ideally, we would treat it as an external dependency. > > (2) At one point, it was intentionally forked to get in needed > patches, including at least some for 64 bit windows with MSVC. > > (3) Upstream libffi maintenance has picked back up. > > (4) Alas, that means the switch merge would not be trivial. > > (5) In theory, we could now switch to the external version. > [In particular, does libffi have a release policy such that we > could assume the newest released version is "safe", so long as > our integration doesn't break?] > > (6) By its very nature, libffi changes are risky and undertested. > At the moment, that is also true of its primary user, ctypes. 
> > (7) So a switch is OK in theory, but someone has to do the > non-trivial testing and merging, and agree to support both libffi > and and ctypes in the future. Otherwise, stable wins. > > (8) The need for future support makes this a bad candidate for > "patches wanted"/"bug bounty"/GSoC. Sounds about right. From fijall at gmail.com Fri Dec 19 09:26:27 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 19 Dec 2014 10:26:27 +0200 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: <54933ab7.ca25e00a.053d.1130@mx.google.com> References: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> <54933ab7.ca25e00a.053d.1130@mx.google.com> Message-ID: On Thu, Dec 18, 2014 at 10:36 PM, Jim J. Jewett wrote: > > > On Thu, Dec 18, 2014, at 14:13, Maciej Fijalkowski wrote: >> ... http://bugs.python.org/issue23085 ... >> is there any reason any more for libffi being included in CPython? > > [And why a fork, instead of just treating it as an external dependency] > > Benjamin Peterson responded: > >> It has some sort of Windows related patches. No one seems to know >> whether they're still needed for newer libffi. Unfortunately, ctypes >> doesn't currently have a maintainer. > > Are any of the following false? > > (1) Ideally, we would treat it as an external dependency. > > (2) At one point, it was intentionally forked to get in needed > patches, including at least some for 64 bit windows with MSVC. > > (3) Upstream libffi maintenance has picked back up. > > (4) Alas, that means the switch merge would not be trivial. > > (5) In theory, we could now switch to the external version. > [In particular, does libffi have a release policy such that we > could assume the newest released version is "safe", so long as > our integration doesn't break?] > > (6) By its very nature, libffi changes are risky and undertested. > At the moment, that is also true of its primary user, ctypes. 
> > (7) So a switch is OK in theory, but someone has to do the > non-trivial testing and merging, and agree to support both libffi > and and ctypes in the future. Otherwise, stable wins. > > (8) The need for future support makes this a bad candidate for > "patches wanted"/"bug bounty"/GSoC. > > -jJ I would like to add that "not doing anything" is not a good strategy either, because you accumulate bugs that get fixed upstream (I'm pretty sure all the problems from cpython got fixed in upstream libffi, but not all libffi fixes made it to cpython). From p.f.moore at gmail.com Fri Dec 19 10:52:26 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 19 Dec 2014 09:52:26 +0000 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: References: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> <54933ab7.ca25e00a.053d.1130@mx.google.com> Message-ID: On 19 December 2014 at 08:26, Maciej Fijalkowski wrote: > I would like to add that "not doing anything" is not a good strategy > either, because you accumulate bugs that get fixed upstream (I'm > pretty sure all the problems from cpython got fixed in upstream > libffi, but not all libffi fixes made it to cpython). Probably the easiest way of moving this forward would be for someone to identify the CPython-specific patches in the current version, and check if they are addressed in the latest libffi version. They haven't been applied as they are, I gather, but maybe equivalent fixes have been made. I've no idea how easy that would be (presumably not trivial, or someone would already have done it). If the patches aren't needed any more, upgrading becomes a lot more plausible. 
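[Editorial note: the first step suggested above — identifying which files CPython's tracked libffi.diff patches, so each can be compared against the upstream release — can be sketched as follows. The diff text here is a tiny illustrative stand-in for the real Modules/_ctypes/libffi.diff, and the helper name is hypothetical.]

```python
# Hypothetical helper: list the files touched by a unified diff.
# DIFF_TEXT is a small stand-in for CPython's real Modules/_ctypes/libffi.diff.
DIFF_TEXT = """\
--- libffi/src/x86/ffi.c
+++ libffi/src/x86/ffi.c
@@ -1 +1 @@
-old line
+patched line
--- libffi/fficonfig.py.in
+++ libffi/fficonfig.py.in
@@ -1 +1 @@
-old line
+patched line
"""

def patched_files(diff_text):
    """Return the sorted list of paths named in '+++ ' headers."""
    return sorted({line[4:].split("\t")[0]
                   for line in diff_text.splitlines()
                   if line.startswith("+++ ")})

print(patched_files(DIFF_TEXT))
# -> ['libffi/fficonfig.py.in', 'libffi/src/x86/ffi.c']
```

Run over the real tracked diff, this would give the checklist of files whose patches need confirming against upstream libffi.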
Paul From christian at python.org Fri Dec 19 11:33:21 2014 From: christian at python.org (Christian Heimes) Date: Fri, 19 Dec 2014 11:33:21 +0100 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: References: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> <54933ab7.ca25e00a.053d.1130@mx.google.com> Message-ID: On 19.12.2014 10:52, Paul Moore wrote: > Probably the easiest way of moving this forward would be for someone > to identify the CPython-specific patches in the current version, and > check if they are addressed in the latest libffi version. They haven't > been applied as they are, I gather, but maybe equivalent fixes have > been made. I've no idea how easy that would be (presumably not > trivial, or someone would already have done it). If the patches aren't > needed any more, upgrading becomes a lot more plausible. That's easy. All patches are tracked in the diff file https://hg.python.org/cpython/file/3de678cd184d/Modules/_ctypes/libffi.diff . The file *should* be up to date. Christian From solipsis at pitrou.net Fri Dec 19 14:01:28 2014 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 19 Dec 2014 14:01:28 +0100 Subject: [Python-Dev] libffi embedded in CPython References: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> <54933ab7.ca25e00a.053d.1130@mx.google.com> Message-ID: <20141219140128.2f1a8e7f@fsol> On Fri, 19 Dec 2014 09:52:26 +0000 Paul Moore wrote: > On 19 December 2014 at 08:26, Maciej Fijalkowski wrote: > > I would like to add that "not doing anything" is not a good strategy > > either, because you accumulate bugs that get fixed upstream (I'm > > pretty sure all the problems from cpython got fixed in upstream > > libffi, but not all libffi fixes made it to cpython). > > Probably the easiest way of moving this forward would be for someone > to identify the CPython-specific patches in the current version, and > check if they are addressed in the latest libffi version. 
They haven't > been applied as they are, I gather, but maybe equivalent fixes have > been made. I've no idea how easy that would be (presumably not > trivial, or someone would already have done it). If the patches aren't > needed any more, upgrading becomes a lot more plausible. Another strategy is to dump our private fork, link with upstream instead, and see what breaks. Presumably, our test suite should be able to catch some (most?) of that breakage. Regards Antoine. From ncoghlan at gmail.com Fri Dec 19 15:04:03 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Dec 2014 00:04:03 +1000 Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: <20141219140128.2f1a8e7f@fsol> References: <1418931037.1858422.204561561.7796E60B@webmail.messagingengine.com> <54933ab7.ca25e00a.053d.1130@mx.google.com> <20141219140128.2f1a8e7f@fsol> Message-ID: On 19 December 2014 at 23:01, Antoine Pitrou wrote: > On Fri, 19 Dec 2014 09:52:26 +0000 > Paul Moore wrote: > > On 19 December 2014 at 08:26, Maciej Fijalkowski > wrote: > > > I would like to add that "not doing anything" is not a good strategy > > > either, because you accumulate bugs that get fixed upstream (I'm > > > pretty sure all the problems from cpython got fixed in upstream > > > libffi, but not all libffi fixes made it to cpython). > > > > Probably the easiest way of moving this forward would be for someone > > to identify the CPython-specific patches in the current version, and > > check if they are addressed in the latest libffi version. They haven't > > been applied as they are, I gather, but maybe equivalent fixes have > > been made. I've no idea how easy that would be (presumably not > > trivial, or someone would already have done it). If the patches aren't > > needed any more, upgrading becomes a lot more plausible. > > Another strategy is to dump our private fork, link with upstream > instead, and see what breaks. > Presumably, our test suite should be able to catch some (most?) 
of that > breakage. > And if we're going to do something like that for 3.5, now's the time, since we still have a lot of lead time on the 3.5 release. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Dec 19 18:08:13 2014 From: status at bugs.python.org (Python tracker) Date: Fri, 19 Dec 2014 18:08:13 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20141219170813.49E055620D@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2014-12-12 - 2014-12-19) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 4683 (+17) closed 30168 (+31) total 34851 (+48) Open issues with patches: 2192 Issues opened (34) ================== #23042: Python 2.7.9 ctypes module doesn't build on FreeBSD x86 http://bugs.python.org/issue23042 opened by lemburg #23043: doctest ignores "from __future__ import print_function" http://bugs.python.org/issue23043 opened by fva #23046: asyncio.BaseEventLoop is documented, but only exported via asy http://bugs.python.org/issue23046 opened by vadmium #23050: Add Japanese legacy encodings http://bugs.python.org/issue23050 opened by t2y #23051: multiprocessing.pool methods imap() and imap_unordered() cause http://bugs.python.org/issue23051 opened by advance512 #23054: ConnectionError: ('Connection aborted.', BadStatusLine(""''''" http://bugs.python.org/issue23054 opened by joecabrera #23055: PyUnicode_FromFormatV crasher http://bugs.python.org/issue23055 opened by gvanrossum #23056: tarfile raises an exception when reading an empty tar in strea http://bugs.python.org/issue23056 opened by gregory.p.smith #23057: asyncio loop on Windows should stop on keyboard interrupt http://bugs.python.org/issue23057 opened by asvetlov #23058: argparse silently ignores arguments 
http://bugs.python.org/issue23058 opened by remram #23059: cmd module should sort misc help topics http://bugs.python.org/issue23059 opened by samwyse #23060: Assert fails in multiprocessing.heap.Arena.__setstate__ on Win http://bugs.python.org/issue23060 opened by steve.dower #23061: Update pep8 to specify explicitly 'module level' imports at to http://bugs.python.org/issue23061 opened by IanLee1521 #23062: test_argparse --version test cases http://bugs.python.org/issue23062 opened by vadmium #23063: `python setup.py check --restructuredtext --strict --metadata` http://bugs.python.org/issue23063 opened by Marc.Abramowitz #23065: Pyhton27.dll at SysWOW64 not updated when updating Python 2.7. http://bugs.python.org/issue23065 opened by GamesGamble #23067: Export readline forced_update_display http://bugs.python.org/issue23067 opened by dexteradeus #23068: Add a way to determine if the current thread has the import lo http://bugs.python.org/issue23068 opened by gvanrossum #23069: IDLE's F5 Run Module doesn't transfer effects of future import http://bugs.python.org/issue23069 opened by rhettinger #23071: codecs.__all__ incomplete http://bugs.python.org/issue23071 opened by vadmium #23072: 2.7.9 multiprocessing compile conflict http://bugs.python.org/issue23072 opened by aab at purdue.edu #23075: Mock backport in 2.7 relies on implementation defined behavior http://bugs.python.org/issue23075 opened by alex #23076: list(pathlib.Path().glob("")) fails with IndexError http://bugs.python.org/issue23076 opened by Antony.Lee #23077: PEP 1: Allow Provisional status for PEPs http://bugs.python.org/issue23077 opened by ncoghlan #23078: unittest.mock patch autospec doesn't work on staticmethods http://bugs.python.org/issue23078 opened by kevinbenton #23079: os.path.normcase documentation confusing http://bugs.python.org/issue23079 opened by chris.jerdonek #23080: BoundArguments.arguments should be unordered http://bugs.python.org/issue23080 opened by Antony.Lee #23081: Document 
PySequence_List(o) as equivalent to list(o) http://bugs.python.org/issue23081 opened by larsmans #23082: pathlib relative_to() can give confusing error message http://bugs.python.org/issue23082 opened by chris.jerdonek #23085: update internal libffi copy to 3.2.1 http://bugs.python.org/issue23085 opened by gustavotemple #23086: Add start and stop parameters to the Sequence.index() ABC mixi http://bugs.python.org/issue23086 opened by rhettinger #23087: Exec variable not found error http://bugs.python.org/issue23087 opened by Keith.Chewning #23088: Document that PyUnicode_AsUTF8() returns a null-terminated str http://bugs.python.org/issue23088 opened by vadmium #23089: Update libffi config files http://bugs.python.org/issue23089 opened by gustavotemple Most recent 15 issues with no replies (15) ========================================== #23088: Document that PyUnicode_AsUTF8() returns a null-terminated str http://bugs.python.org/issue23088 #23087: Exec variable not found error http://bugs.python.org/issue23087 #23086: Add start and stop parameters to the Sequence.index() ABC mixi http://bugs.python.org/issue23086 #23081: Document PySequence_List(o) as equivalent to list(o) http://bugs.python.org/issue23081 #23078: unittest.mock patch autospec doesn't work on staticmethods http://bugs.python.org/issue23078 #23077: PEP 1: Allow Provisional status for PEPs http://bugs.python.org/issue23077 #23075: Mock backport in 2.7 relies on implementation defined behavior http://bugs.python.org/issue23075 #23069: IDLE's F5 Run Module doesn't transfer effects of future import http://bugs.python.org/issue23069 #23067: Export readline forced_update_display http://bugs.python.org/issue23067 #23061: Update pep8 to specify explicitly 'module level' imports at to http://bugs.python.org/issue23061 #23059: cmd module should sort misc help topics http://bugs.python.org/issue23059 #23043: doctest ignores "from __future__ import print_function" http://bugs.python.org/issue23043 #23029: 
test_warnings produces extra output in quiet mode http://bugs.python.org/issue23029 #23028: CEnvironmentVariableTests and PyEnvironmentVariableTests test http://bugs.python.org/issue23028 #23027: test_warnings fails with -Werror http://bugs.python.org/issue23027 Most recent 15 issues waiting for review (15) ============================================= #23089: Update libffi config files http://bugs.python.org/issue23089 #23088: Document that PyUnicode_AsUTF8() returns a null-terminated str http://bugs.python.org/issue23088 #23085: update internal libffi copy to 3.2.1 http://bugs.python.org/issue23085 #23081: Document PySequence_List(o) as equivalent to list(o) http://bugs.python.org/issue23081 #23080: BoundArguments.arguments should be unordered http://bugs.python.org/issue23080 #23075: Mock backport in 2.7 relies on implementation defined behavior http://bugs.python.org/issue23075 #23071: codecs.__all__ incomplete http://bugs.python.org/issue23071 #23067: Export readline forced_update_display http://bugs.python.org/issue23067 #23063: `python setup.py check --restructuredtext --strict --metadata` http://bugs.python.org/issue23063 #23062: test_argparse --version test cases http://bugs.python.org/issue23062 #23061: Update pep8 to specify explicitly 'module level' imports at to http://bugs.python.org/issue23061 #23056: tarfile raises an exception when reading an empty tar in strea http://bugs.python.org/issue23056 #23055: PyUnicode_FromFormatV crasher http://bugs.python.org/issue23055 #23051: multiprocessing.pool methods imap() and imap_unordered() cause http://bugs.python.org/issue23051 #23050: Add Japanese legacy encodings http://bugs.python.org/issue23050 Top 10 most discussed issues (10) ================================= #14134: xmlrpc.client.ServerProxy needs timeout parameter http://bugs.python.org/issue14134 15 msgs #22980: C extension naming doesn't take bitness into account http://bugs.python.org/issue22980 9 msgs #23004: mock_open() should allow reading 
binary data http://bugs.python.org/issue23004 9 msgs #23085: update internal libffi copy to 3.2.1 http://bugs.python.org/issue23085 9 msgs #23050: Add Japanese legacy encodings http://bugs.python.org/issue23050 8 msgs #21071: struct.Struct.format is bytes, but should be str http://bugs.python.org/issue21071 7 msgs #23014: Don't have importlib.abc.Loader.create_module() be optional http://bugs.python.org/issue23014 7 msgs #23041: csv needs more quoting rules http://bugs.python.org/issue23041 7 msgs #23068: Add a way to determine if the current thread has the import lo http://bugs.python.org/issue23068 7 msgs #23071: codecs.__all__ incomplete http://bugs.python.org/issue23071 7 msgs Issues closed (30) ================== #15506: configure should use PKG_PROG_PKG_CONFIG http://bugs.python.org/issue15506 closed by python-dev #15513: Correct __sizeof__ support for pickle http://bugs.python.org/issue15513 closed by serhiy.storchaka #19858: Make pickletools.optimize aware of the MEMOIZE opcode. 
http://bugs.python.org/issue19858 closed by serhiy.storchaka #20577: IDLE: Remove FormatParagraph's width setting from config dialo http://bugs.python.org/issue20577 closed by terry.reedy #21236: patch to use cabinet.lib instead of fci.lib (fixes build with http://bugs.python.org/issue21236 closed by steve.dower #22733: MSVC ffi_prep_args doesn't handle 64-bit arguments properly http://bugs.python.org/issue22733 closed by steve.dower #22777: Test pickling with all protocols http://bugs.python.org/issue22777 closed by serhiy.storchaka #22783: Pickle: use NEWOBJ instead of NEWOBJ_EX if possible http://bugs.python.org/issue22783 closed by serhiy.storchaka #22823: Use set literals instead of creating a set from a list http://bugs.python.org/issue22823 closed by serhiy.storchaka #22875: asyncio: call_soon() documentation unclear on timing http://bugs.python.org/issue22875 closed by haypo #22919: Update PCBuild for VS 2015 http://bugs.python.org/issue22919 closed by steve.dower #22945: Ctypes inconsistent between Linux and OS X http://bugs.python.org/issue22945 closed by Daniel.Standage #23011: Duplicate Paragraph in documentation for json module http://bugs.python.org/issue23011 closed by terry.reedy #23015: Improve test_uuid http://bugs.python.org/issue23015 closed by serhiy.storchaka #23030: lru_cache manual get/put http://bugs.python.org/issue23030 closed by rhettinger #23031: pdb crashes when jumping over "with" statement http://bugs.python.org/issue23031 closed by DSP #23044: incorrect addition of floating point numbers http://bugs.python.org/issue23044 closed by benjamin.peterson #23045: json data iteration through loop in python http://bugs.python.org/issue23045 closed by steven.daprano #23047: typo in pyporting.rst http://bugs.python.org/issue23047 closed by berker.peksag #23048: abort when jumping out of a loop http://bugs.python.org/issue23048 closed by python-dev #23049: Fix functools.reduce code equivalent. 
http://bugs.python.org/issue23049 closed by rhettinger #23052: python2.7.9 [SSL: CERTIFICATE_VERIFY_FAILED] certificate verif http://bugs.python.org/issue23052 closed by berker.peksag #23053: test_urllib2_localnet fails without ssl http://bugs.python.org/issue23053 closed by python-dev #23064: pep8 asyncore.py http://bugs.python.org/issue23064 closed by r.david.murray #23066: re.match hang http://bugs.python.org/issue23066 closed by gvanrossum #23070: Error in Tutorial comment http://bugs.python.org/issue23070 closed by berker.peksag #23073: Broken turtle example in Cmd documentation http://bugs.python.org/issue23073 closed by ethan.furman #23074: asyncio: get_event_loop() must always raise an exception, even http://bugs.python.org/issue23074 closed by haypo #23083: sys.exit with bool parameter http://bugs.python.org/issue23083 closed by rhettinger #23084: Expose C struct timespec (nanosecond resolution) in time modul http://bugs.python.org/issue23084 closed by belopolsky From jimjjewett at gmail.com Mon Dec 22 22:49:19 2014 From: jimjjewett at gmail.com (Jim J. Jewett) Date: Mon, 22 Dec 2014 13:49:19 -0800 (PST) Subject: [Python-Dev] libffi embedded in CPython In-Reply-To: Message-ID: <549891df.c332e00a.54c7.34f8@mx.google.com> On Thu, Dec 18, 2014, at 14:13, Maciej Fijalkowski wrote: > ... http://bugs.python.org/issue23085 ... > is there any reason any more for libffi being included in CPython? Paul Moore wrote: > Probably the easiest way of moving this forward would be for someone > to identify the CPython-specific patches in the current version ... Christian Heimes wrote: > That's easy. All patches are tracked in the diff file > https://hg.python.org/cpython/file/3de678cd184d/Modules/_ctypes/libffi.diff That (200+ lines) doesn't seem to have all the C changes, such as the win64 sizeof changes from issue 11835. 
Besides http://bugs.python.org/issue23085, there are at least http://bugs.python.org/issue22733 http://bugs.python.org/issue20160 http://bugs.python.org/issue11835 which sort of drives home the point that making sure we have a good merge isn't trivial, and this isn't an area where we should just assume that tests will catch everything. I don't think it is just a quicky waiting on permission. I've no doubt that upstream libffi is better in many ways, but those are ways people have already learned to live with. That said, I haven't seen any objections in principle, except perhaps from Steve Dower in the issues. (I *think* he was just saying "not worth the time to me", but it was ambiguous.) I do believe that Christian or Maciej *could* sort things out well enough; I have no insight into whether they have (or someone else has) the time to actually do so. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ From benjamin at python.org Wed Dec 24 23:07:37 2014 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 24 Dec 2014 17:07:37 -0500 Subject: [Python-Dev] [Python-checkins] cpython (3.4): improve incorrect French (#23109) In-Reply-To: <549B2919.4040206@udel.edu> References: <20141224195859.71899.4611@psf.io> <549B2919.4040206@udel.edu> Message-ID: <1419458857.3760907.206564489.7C3DEC6D@webmail.messagingengine.com> On Wed, Dec 24, 2014, at 15:59, Terry Reedy wrote: > On 12/24/2014 2:59 PM, benjamin.peterson wrote: > > https://hg.python.org/cpython/rev/2c87dd2d821e > > changeset: 93958:2c87dd2d821e > > branch: 3.4 > > parent: 93955:08972a47f710 > > user: Benjamin Peterson > > date: Wed Dec 24 13:58:05 2014 -0600 > > summary: > > improve incorrect French (#23109) > > > > Following suggestions from Clément.
> > > > files: > > Doc/howto/unicode.rst | 4 ++-- > > 1 files changed, 2 insertions(+), 2 deletions(-) > > > > > > diff --git a/Doc/howto/unicode.rst b/Doc/howto/unicode.rst > > --- a/Doc/howto/unicode.rst > > +++ b/Doc/howto/unicode.rst > > @@ -32,8 +32,8 @@ > > In the mid-1980s an Apple II BASIC program written by a French speaker > > might have lines like these:: > > > > - PRINT "FICHIER EST COMPLETE." > > - PRINT "CARACTERE NON ACCEPTE." > > + PRINT "MISE A JOUR TERMINEE" > > + PRINT "PARAMETRES ENREGISTRES" > > > > Those messages should contain accents (complété, caractère, accepté), > It seems that this list should have been changed also, to the words that > need accents in the replacement. Good point. Thank you. From sky.kok at speaklikeaking.com Thu Dec 25 04:56:11 2014 From: sky.kok at speaklikeaking.com (Sky Kok) Date: Thu, 25 Dec 2014 10:56:11 +0700 Subject: [Python-Dev] Email from Rietveld Code Review Tool is classified as spam Message-ID: Dear comrades, Merry Christmas for you who celebrates Christmas! Happy holidays for you who don't. Anyway, sometimes when people review my patches for CPython, they send me a notice through Rietveld Code Review Tool which later will send an email to me. However, my GMail spam filter is aggressive so the email will always be classified as spam because it fails spf checking. So if Taylor Swift clicks 'send email' in Rietveld after reviewing my patch, Rietveld will send email to me but the email pretends as if it is sent from taylor at swift.com. Hence, failing spf checking. Take an example where R. David Murray commented on my patch, I wouldn't know about it if I did not click Spam folder out of the blue. I remember in the past I had ignored Serhiy Storchaka's advice for months because his message was buried in spam folder. Maybe we shouldn't pretend as someone else when sending email through Rietveld?
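[Editorial note: the distinction at issue here is that the visible "From:" header and the SMTP envelope sender ("MAIL FROM:" at protocol level) are independent, and SPF validates only the latter. A minimal sketch, with all addresses illustrative rather than Rietveld's actual configuration:]

```python
from email.message import EmailMessage

# The "From:" header shown to the reader is separate from the SMTP
# envelope sender, which is what SPF checks.  Addresses are illustrative.
msg = EmailMessage()
msg["From"] = "Taylor Swift <taylor@swift.com>"   # displayed header
msg["To"] = "patch-author@example.org"
msg["Subject"] = "Rietveld review comments"
msg.set_content("Please see my comments on patch set 2.")

# A bounce-capable address in a domain the sending host is authorized
# for would pass SPF; with smtplib it is passed separately from the
# message headers (hypothetical host and address):
envelope_sender = "rietveld-bounces@example.net"
# smtplib.SMTP("mail.example.net").send_message(msg, from_addr=envelope_sender)

print(msg["From"], "|", envelope_sender)
```

With this separation the review email still displays the reviewer's address, but bounces and SPF checks go against an address the sending server legitimately owns.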
Cheers, Vajrasky Kok From rosuav at gmail.com Thu Dec 25 05:26:23 2014 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 25 Dec 2014 15:26:23 +1100 Subject: [Python-Dev] Email from Rietveld Code Review Tool is classified as spam In-Reply-To: References: Message-ID: On Thu, Dec 25, 2014 at 2:56 PM, Sky Kok wrote: > Anyway, sometimes when people review my patches for CPython, they send > me a notice through Rietveld Code Review Tool which later will send an > email to me. However, my GMail spam filter is aggressive so the email > will always be classified as spam because it fails spf checking. So if > Taylor Swift clicks 'send email' in Rietveld after reviewing my patch, > Rietveld will send email to me but the email pretends as if it is sent > from taylor at swift.com. Hence, failing spf checking. > > Maybe we shouldn't pretend as someone else when sending email through Rietveld? That's not the fault of Gmail, except perhaps in that no rejection will have gone to the originating server. The solution is exactly as you say. The "From" can still say taylor at swift.com, but the envelope-from (the "MAIL FROM:" at protocol level) should be an address that can cope with bounces. ChrisA From storchaka at gmail.com Thu Dec 25 07:53:30 2014 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 25 Dec 2014 08:53:30 +0200 Subject: [Python-Dev] Email from Rietveld Code Review Tool is classified as spam In-Reply-To: References: Message-ID: On 25.12.14 05:56, Sky Kok wrote: > Anyway, sometimes when people review my patches for CPython, they send > me a notice through Rietveld Code Review Tool which later will send an > email to me. However, my GMail spam filter is aggressive so the email > will always be classified as spam because it fails spf checking. So if > Taylor Swift clicks 'send email' in Rietveld after reviewing my patch, > Rietveld will send email to me but the email pretends as if it is sent > from taylor at swift.com. Hence, failing spf checking. 
> > Take an example where R. David Murray commented on my patch, I > wouldn't know about it if I did not click Spam folder out of the blue. > I remember in the past I had ignored Serhiy Storchaka's advice for > months because his message was buried in spam folder. > > Maybe we shouldn't pretend as someone else when sending email through Rietveld? http://psf.upfronthosting.co.za/roundup/meta/issue554 From status at bugs.python.org Fri Dec 26 18:08:11 2014 From: status at bugs.python.org (Python tracker) Date: Fri, 26 Dec 2014 18:08:11 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20141226170811.B20145613F@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2014-12-19 - 2014-12-26) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 4691 ( +8) closed 30186 (+18) total 34877 (+26) Open issues with patches: 2198 Issues opened (17) ================== #23094: Unpickler failing with PicklingError at frame end on readline http://bugs.python.org/issue23094 opened by CensoredUsername #23095: asyncio: race condition in IocpProactor.wait_for_handle() http://bugs.python.org/issue23095 opened by haypo #23096: Implementation-depended pickling floats with protocol 0 http://bugs.python.org/issue23096 opened by serhiy.storchaka #23097: unittest can unnecessarily modify sys.path (and with the wrong http://bugs.python.org/issue23097 opened by chris.jerdonek #23098: mknod devices can be >32 bits http://bugs.python.org/issue23098 opened by jcea #23099: BytesIO and StringIO values unavailable when closed http://bugs.python.org/issue23099 opened by vadmium #23100: multiprocessing doc organization impedes understanding http://bugs.python.org/issue23100 opened by davin #23102: distutils: tip-toe around quirks owing to setuptools monkey-pa http://bugs.python.org/issue23102 opened by gmt #23103: ipaddress should be Flyweight 
http://bugs.python.org/issue23103 opened by sbromberger

#23104: [Windows x86-64] ctypes: Incorrect function call
http://bugs.python.org/issue23104 opened by ????????????.??????????????????

#23105: os.O_SHLOCK and os.O_EXLOCK are not available on Linux
http://bugs.python.org/issue23105 opened by Sworddragon

#23106: Remove smalltable from set objects
http://bugs.python.org/issue23106 opened by rhettinger

#23107: Tighten-up search loops in sets
http://bugs.python.org/issue23107 opened by rhettinger

#23109: French quotes in the documentation are often ungrammatical
http://bugs.python.org/issue23109 opened by cpitcla

#23111: ftplib.FTP_TLS's default constructor does not work with TLSv1.
http://bugs.python.org/issue23111 opened by varde

#23114: "dist must be a Distribution instance" check fails with setupt
http://bugs.python.org/issue23114 opened by scoder

#23115: Backport #22585 -- getentropy for urandom to Python 2.7
http://bugs.python.org/issue23115 opened by alex

Most recent 15 issues with no replies (15)
==========================================

#23115: Backport #22585 -- getentropy for urandom to Python 2.7
http://bugs.python.org/issue23115

#23114: "dist must be a Distribution instance" check fails with setupt
http://bugs.python.org/issue23114

#23111: ftplib.FTP_TLS's default constructor does not work with TLSv1.
http://bugs.python.org/issue23111

#23107: Tighten-up search loops in sets
http://bugs.python.org/issue23107

#23106: Remove smalltable from set objects
http://bugs.python.org/issue23106

#23102: distutils: tip-toe around quirks owing to setuptools monkey-pa
http://bugs.python.org/issue23102

#23097: unittest can unnecessarily modify sys.path (and with the wrong
http://bugs.python.org/issue23097

#23095: asyncio: race condition in IocpProactor.wait_for_handle()
http://bugs.python.org/issue23095

#23086: Add start and stop parameters to the Sequence.index() ABC mixi
http://bugs.python.org/issue23086

#23081: Document PySequence_List(o) as equivalent to list(o)
http://bugs.python.org/issue23081

#23078: unittest.mock patch autospec doesn't work on staticmethods
http://bugs.python.org/issue23078

#23077: PEP 1: Allow Provisional status for PEPs
http://bugs.python.org/issue23077

#23075: Mock backport in 2.7 relies on implementation defined behavior
http://bugs.python.org/issue23075

#23067: Export readline forced_update_display
http://bugs.python.org/issue23067

#23029: test_warnings produces extra output in quiet mode
http://bugs.python.org/issue23029

Most recent 15 issues waiting for review (15)
=============================================

#23115: Backport #22585 -- getentropy for urandom to Python 2.7
http://bugs.python.org/issue23115

#23107: Tighten-up search loops in sets
http://bugs.python.org/issue23107

#23106: Remove smalltable from set objects
http://bugs.python.org/issue23106

#23103: ipaddress should be Flyweight
http://bugs.python.org/issue23103

#23102: distutils: tip-toe around quirks owing to setuptools monkey-pa
http://bugs.python.org/issue23102

#23099: BytesIO and StringIO values unavailable when closed
http://bugs.python.org/issue23099

#23098: mknod devices can be >32 bits
http://bugs.python.org/issue23098

#23094: Unpickler failing with PicklingError at frame end on readline
http://bugs.python.org/issue23094

#23089: Update libffi config files
http://bugs.python.org/issue23089

#23088: Document that PyUnicode_AsUTF8() returns a null-terminated str
http://bugs.python.org/issue23088

#23085: update internal libffi copy to 3.2.1
http://bugs.python.org/issue23085

#23081: Document PySequence_List(o) as equivalent to list(o)
http://bugs.python.org/issue23081

#23080: BoundArguments.arguments should be unordered
http://bugs.python.org/issue23080

#23075: Mock backport in 2.7 relies on implementation defined behavior
http://bugs.python.org/issue23075

#23067: Export readline forced_update_display
http://bugs.python.org/issue23067

Top 10 most discussed issues (10)
=================================

#23103: ipaddress should be Flyweight
http://bugs.python.org/issue23103 17 msgs

#21279: str.translate documentation incomplete
http://bugs.python.org/issue21279 8 msgs

#23098: mknod devices can be >32 bits
http://bugs.python.org/issue23098 5 msgs

#22896: Don't use PyObject_As*Buffer() functions
http://bugs.python.org/issue22896 4 msgs

#23061: Update pep8 to specify explicitly 'module level' imports at to
http://bugs.python.org/issue23061 4 msgs

#19548: 'codecs' module docs improvements
http://bugs.python.org/issue19548 3 msgs

#22836: Broken "Exception ignored in:" message on exceptions in __repr
http://bugs.python.org/issue22836 3 msgs

#22926: asyncio: raise an exception when called from the wrong thread
http://bugs.python.org/issue22926 3 msgs

#23043: doctest ignores "from __future__ import print_function"
http://bugs.python.org/issue23043 3 msgs

#23094: Unpickler failing with PicklingError at frame end on readline
http://bugs.python.org/issue23094 3 msgs

Issues closed (17)
==================

#19104: pprint produces invalid output for long strings
http://bugs.python.org/issue19104 closed by serhiy.storchaka

#19539: The 'raw_unicode_escape' codec buggy + not appropriate for Pyt
http://bugs.python.org/issue19539 closed by zuo

#20069: Add unit test for os.chown
http://bugs.python.org/issue20069 closed by r.david.murray

#21793: httplib client/server status refactor
http://bugs.python.org/issue21793 closed by serhiy.storchaka

#22585: os.urandom() should use getentropy() of OpenBSD 5.6
http://bugs.python.org/issue22585 closed by haypo

#23040: Better documentation for the urlencode safe parameter
http://bugs.python.org/issue23040 closed by r.david.murray

#23071: codecs.__all__ incomplete
http://bugs.python.org/issue23071 closed by serhiy.storchaka

#23087: Exec variable not found error
http://bugs.python.org/issue23087 closed by terry.reedy

#23090: fix test_doctest relying on refcounting to close files
http://bugs.python.org/issue23090 closed by python-dev

#23091: unpacked keyword arguments are not unicode normalized
http://bugs.python.org/issue23091 closed by benjamin.peterson

#23092: Python 2.7.9 test_readline regression on CentOS 6
http://bugs.python.org/issue23092 closed by berker.peksag

#23093: repr() on detached stream objects fails
http://bugs.python.org/issue23093 closed by python-dev

#23101: bleh, sorry, my cat reported this non-bug :)
http://bugs.python.org/issue23101 closed by gmt

#23108: pysha3 fails with obscure internal error
http://bugs.python.org/issue23108 closed by benjamin.peterson

#23110: Document if argument to Py_SetPath requires static storage.
http://bugs.python.org/issue23110 closed by python-dev

#23112: SimpleHTTPServer/http.server adds trailing slash after query s
http://bugs.python.org/issue23112 closed by python-dev

#23113: Compiler doesn't recognize qualified exec('', {})
http://bugs.python.org/issue23113 closed by johnf

From storchaka at gmail.com Wed Dec 31 14:12:58 2014
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 31 Dec 2014 15:12:58 +0200
Subject: [Python-Dev] More compact dictionaries with faster iteration
In-Reply-To: <9BD2AD6A-125D-4A34-B6BF-A99B167554B6@gmail.com>
References: <9BD2AD6A-125D-4A34-B6BF-A99B167554B6@gmail.com>
Message-ID: <54A3F65A.1060406@gmail.com>

On 10.12.12 03:44, Raymond Hettinger wrote:
> The current memory layout for dictionaries is
> unnecessarily inefficient. It has a sparse table of
> 24-byte entries containing the hash value, key pointer,
> and value pointer.
>
> Instead, the 24-byte entries should be stored in a
> dense table referenced by a sparse table of indices.

FYI PHP 7 will use this technique [1]. In conjunction with other
optimizations this will decrease memory consumption of PHP hashtables
up to 4 times.

[1] http://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html

From techtonik at gmail.com Fri Dec 19 12:44:02 2014
From: techtonik at gmail.com (anatoly techtonik)
Date: Fri, 19 Dec 2014 11:44:02 -0000
Subject: [Python-Dev] python 2.7.9 regression in argparse?
Message-ID:

https://github.com/nickstenning/honcho/pull/121
--
anatoly t.
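[Editor's note: the dense-table layout from the "More compact dictionaries with faster iteration" thread above (a sparse table of small indices pointing into a dense array of hash/key/value entries) can be sketched in pure Python. This is an illustrative model only, not CPython's or PHP's actual implementation: it omits resizing, deletion, and the perturb-based probe sequence CPython uses.]

```python
# Sketch of the "compact dict" layout: a sparse index table plus a
# dense entries list. Illustrative only -- no resizing, no deletion,
# simple linear probing instead of CPython's perturbed probing.

class CompactDict:
    def __init__(self, size=8):
        self.indices = [None] * size   # sparse: slot -> position in entries
        self.entries = []              # dense: (hash, key, value) triples

    def _slots(self, key):
        # Linear probing over the sparse index table.
        mask = len(self.indices) - 1
        i = hash(key) & mask
        while True:
            yield i
            i = (i + 1) & mask

    def __setitem__(self, key, value):
        h = hash(key)
        for i in self._slots(key):
            pos = self.indices[i]
            if pos is None:                    # empty slot: append a dense entry
                self.indices[i] = len(self.entries)
                self.entries.append((h, key, value))
                return
            if self.entries[pos][1] == key:    # existing key: overwrite in place
                self.entries[pos] = (h, key, value)
                return

    def __getitem__(self, key):
        for i in self._slots(key):
            pos = self.indices[i]
            if pos is None:
                raise KeyError(key)
            h, k, v = self.entries[pos]
            if k == key:
                return v

    def __iter__(self):
        # Iteration scans only the dense table, so it is cache-friendly
        # and naturally preserves insertion order.
        return (k for _, k, _ in self.entries)
```

Only small integers are duplicated per hash slot; the 24-byte (hash, key, value) triples are stored once, densely, which is the source of the memory savings the thread describes.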