From ncoghlan at gmail.com Sun Feb 1 00:59:57 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 1 Feb 2015 09:59:57 +1000 Subject: [Distutils] binary wheels, linux and ucs2/4 In-Reply-To: References: <54CA1FBF.4010902@simplistix.co.uk> Message-ID: On 30 January 2015 at 04:27, Daniel Holth wrote: > On Thu, Jan 29, 2015 at 8:37 AM, Nick Coghlan wrote: >> If we're on CPython 2.x and sysconfig.get_config_var('SOABI') returns >> None, then we can calculate a synthetic SOABI tag as: >> >> * the start of the SOABI tag should be "cpython-" >> * the next two digits will be the major/minor of the release (i.e. 26 or 27) >> * the next character will be 'd' if sys.pydebug is set (I'm fairly >> sure, but double check this) >> * we can assume 'm' (for using pymalloc) safely enough >> * the final character will be 'u' if sys.maxunicode == 0x10ffff >> >> We're not going to add a new kind of ABI variation to Python 2 at this >> stage of its lifecycle, so that calculation should remain valid for as >> long as Python 2 remains in use. > > I would be very happy to receive a patch for: > > https://bitbucket.org/pypa/wheel/src/bdf053a70200c5857c250c2044a2d91da23db4a9/wheel/bdist_wheel.py?at=default#cl-150 > > abi_tag = sysconfig.get_config_vars().get('SOABI', 'none') > if abi_tag.startswith('cpython-'): > abi_tag = 'cp' + abi_tag.rsplit('-', 1)[-1] > > It would have to have a Python 2 implementation as Nick described. > > and this file which controls what can be installed (duplicated in pip) > but also only works on py3: > > https://bitbucket.org/pypa/wheel/src/bdf053a70200c5857c250c2044a2d91da23db4a9/wheel/pep425tags.py?at=default#cl-65 > > abi3s = set() > import imp > for suffix in imp.get_suffixes(): > if suffix[0].startswith('.abi'): > abi3s.add(suffix[0].split('.', 2)[1]) Thanks, I've transferred the relevant details from this thread to https://bitbucket.org/pypa/wheel/issue/63/backport-soabi-to-python-2 Any takers with the roundtuits to put together a patch? (Ideally with tests, but given the nature of the code being added, I'm not sure how feasible that will be) Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Feb 1 01:11:49 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 1 Feb 2015 10:11:49 +1000 Subject: [Distutils] =?utf-8?q?Python_module_for_use_in_=E2=80=98setup=2Ep?= =?utf-8?q?y=E2=80=99_but_not_to_install?= In-Reply-To: <54CD38DB.9030405@stoneleaf.us> References: <85a91gud80.fsf@benfinney.id.au> <54CB0012.2020203@stoneleaf.us> <85mw50j241.fsf@benfinney.id.au> <54CD38DB.9030405@stoneleaf.us> Message-ID: On 1 February 2015 at 06:19, Ethan Furman wrote: > On 01/29/2015 08:58 PM, Ben Finney wrote: >> Ethan Furman writes: >> >>> However, I feel that requiring a dependency simply for the >>> installation of the main package (the main package doesn't actually >>> use this install-dependency, correct?) is heavy-handed and should be >>> avoided. >> >> It is ideally a build-time dependency and not an install-time >> dependency, but I'm having a difficult time figuring out how to >> distinguish those so Setuptools will pay attention. > > Ah, so it's needed with the sdist command and not the install command? Yeah, you definitely have my sympathies with > that one. Too bad there's no easy way to specify behavior based on the command used... (hopefully someone will now tell > how I am wrong about that ;) . 
You're right, which is why PEP 426 has a more fine-grained dependency specification model (separating runtime, build, test and development dependencies). Other things are higher on the todo list right now than pushing that forward, but we'll get there eventually. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ethan at stoneleaf.us Sun Feb 1 01:23:00 2015 From: ethan at stoneleaf.us (Ethan Furman) Date: Sat, 31 Jan 2015 16:23:00 -0800 Subject: [Distutils] =?utf-8?q?Python_module_for_use_in_=E2=80=98setup=2Ep?= =?utf-8?q?y=E2=80=99_but_not_to_install?= In-Reply-To: References: <85a91gud80.fsf@benfinney.id.au> <54CB0012.2020203@stoneleaf.us> <85mw50j241.fsf@benfinney.id.au> <54CD38DB.9030405@stoneleaf.us> Message-ID: <54CD71E4.40800@stoneleaf.us> On 01/31/2015 04:11 PM, Nick Coghlan wrote: > On 1 February 2015 at 06:19, Ethan Furman wrote: >> >> Ah, so it's needed with the sdist command and not the install command? Yeah, you definitely have my sympathies with >> that one. Too bad there's no easy way to specify behavior based on the command used... (hopefully someone will now tell >> how I am wrong about that ;) . > > You're right, which is why PEP 426 has a more fine-grained dependency > specification model (separating runtime, build, test and development > dependencies). > > Other things are higher on the todo list right now than pushing that > forward, but we'll get there eventually. I guess in the mean time we can do things like: import sys if 'sdist' in sys.argv: import sdist_dependency -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Sun Feb 1 01:43:07 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 1 Feb 2015 10:43:07 +1000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images Message-ID: One of the recurring problems folks mention here is how to deal with the complexities of handling Linux ABI compatibility issues. That's a genuinely hard problem, and not one that *anyone* has solved well - it's one of the reasons being an independent software vendor for Linux in general (rather than just certifying with the major commercial distros) is a pain. When folks do it, they tend to take the "bundle everything you need and drop it somewhere in /opt" approach which (quite rightly) makes professional system administrators very unhappy. On the distro side, this is one of the big factors driving the popularity of the "bundle all the things" container image model: it does the bundling in such a way that it's amenable to programmatic introspection, and it still reduces the runtime ABI compatibility question to just the kernel ABI. This tends to work really when in the case of dynamic languages like Python, as the language runtime is likely to deal with most kernel compatibility issues for you. (ABI incompatibilities can still bite you if you're using system libraries inside the container and your base image doesn't match your runtime kernel, but the bug surface is still much smaller than when you use the end user's system libraries directly) It seems to me that, at least for web services published via PyPI (like Kallithea), "use our recommended container", is likely to be the easiest way to get folks on Linux up and running quickly with the service. Folks may still want to take the image apart later and roll their own (e.g. 
to switch to running on a different web server or a different base image), but they wouldn't have to do their own integration work just to get started. The other advantage of nudging folks in the direction of Linux containers to address their ABI compatibility woes is that this is tech that already (mostly) works, and has a broader management ecosystem growing around it (including both the major open source platform-as-a-service offerings in OpenShift and Cloud Foundry). Inventing our own way of abstracting away the Linux ABI compatibility problem would be an awful lot of work, and likely leave us with an end result that isn't pre-integrated with anything else. Regards, Nick. P.S. Full disclosure: for Fedora's developer experience work for web service developers, we're heading heavily in the direction of containers+Vagrant for local dev workstations, to allow common dev workflows across Linux, Mac OS X and Windows, and then pushing the containers through Linux based CI and independent QE workflows, into container based production Linux environments, including the Google & Red Hat backed Kubernetes container orchestration framework and OpenStack's Project Solum. In my day job, this is also the direction we're taking Red Hat's internal infrastructure since it systematically solves a variety of problems for us (like how to most effectively allow folks to develop on Fedora while deploying on RHEL). -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sun Feb 1 02:01:01 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 31 Jan 2015 20:01:01 -0500 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: Message-ID: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> > On Jan 31, 2015, at 7:43 PM, Nick Coghlan wrote: > > One of the recurring problems folks mention here is how to deal with > the complexities of handling Linux ABI compatibility issues. > > That's a genuinely hard problem, and not one that *anyone* has solved > well - it's one of the reasons being an independent software vendor > for Linux in general (rather than just certifying with the major > commercial distros) is a pain. When folks do it, they tend to take the > "bundle everything you need and drop it somewhere in /opt" approach > which (quite rightly) makes professional system administrators very > unhappy. > > On the distro side, this is one of the big factors driving the > popularity of the "bundle all the things" container image model: it > does the bundling in such a way that it's amenable to programmatic > introspection, and it still reduces the runtime ABI compatibility > question to just the kernel ABI. This tends to work really when in the > case of dynamic languages like Python, as the language runtime is > likely to deal with most kernel compatibility issues for you. (ABI > incompatibilities can still bite you if you're using system libraries > inside the container and your base image doesn't match your runtime > kernel, but the bug surface is still much smaller than when you use > the end user's system libraries directly) > > It seems to me that, at least for web services published via PyPI > (like Kallithea), "use our recommended container", is likely to be the > easiest way to get folks on Linux up and running quickly with the > service. Folks may still want to take the image apart later and roll > their own (e.g. 
to switch to running on a different web server or a > different base image), but they wouldn't have to do their own > integration work just to get started. > > The other advantage of nudging folks in the direction of Linux > containers to address their ABI compatibility woes is that this is > tech that already (mostly) works, and has a broader management > ecosystem growing around it (including both the major open source > platform-as-a-service offerings in OpenShift and Cloud Foundry). > Inventing our own way of abstracting away the Linux ABI compatibility > problem would be an awful lot of work, and likely leave us with an end > result that isn't pre-integrated with anything else. > > Regards, > Nick. > > P.S. Full disclosure: for Fedora's developer experience work for web > service developers, we're heading heavily in the direction of > containers+Vagrant for local dev workstations, to allow common dev > workflows across Linux, Mac OS X and Windows, and then pushing the > containers through Linux based CI and independent QE workflows, into > container based production Linux environments, including the Google & > Red Hat backed Kubernetes container orchestration framework and > OpenStack's Project Solum. In my day job, this is also the direction > we're taking Red Hat's internal infrastructure since it systematically > solves a variety of problems for us (like how to most effectively > allow folks to develop on Fedora while deploying on RHEL). Do you expect some automated tool to take advantage of this link? In other words, what?s the benefit over just having a link to the docker container in the long_description or in the metadata 2.0 project urls? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Sun Feb 1 04:54:53 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 1 Feb 2015 13:54:53 +1000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: On 1 February 2015 at 11:01, Donald Stufft wrote: > Do you expect some automated tool to take advantage of this link? > > In other words, what?s the benefit over just having a link to the docker > container in the long_description or in the metadata 2.0 project urls? I agree that from an implementation perspective, this could just be a new recommended URL in the project URLs metadata (e.g. "Reference Container Images"). If folks don't think the idea sounds horrible, I'll make that update to the PEP 459 draft. However, the bigger picture I'm mostly interested in is consistency of presentation in the PyPI web UI (probably at some point after the migration to Warehouse - I originally started writing this idea up as a Warehouse RFE), and in making providing reference Docker images something we explicitly recommend doing in the Python Package User Guide for PyPI published web service projects that support deploying on Linux (even Microsoft are aiming to make it easy to deploy Docker based containers on their Azure public cloud by way of Linux VMs [1]). (Longer term, xdg-app looks promising for rich client Linux applications, but that's a significantly less mature piece of technology, which says a lot given the relative immaturity of the container image based approach to Linux service deployment). 
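(For concreteness, a rough sketch of what the "Reference Container Images"
suggestion above might look like in the metadata. The "project_urls" mapping
is taken from the current PEP 459 draft's python.project extension, but the
label and both URLs below are purely illustrative placeholders rather than
anything the PEP defines today - shown as a Python dict instead of the
on-disk JSON for readability:)

    pydist_fragment = {
        "extensions": {
            "python.project": {
                "project_urls": {
                    "Home": "https://example.com/myproject",
                    "Reference Container Images":
                        "https://hub.docker.com/u/exampleorg/myproject/",
                }
            }
        }
    }

Having it as a named URL rather than a free-form mention in the
long_description is what would let the PyPI web UI (and other tools) present
it consistently.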
Folks are certainly already free to point their users at prebuilt container images if they want to, so this idea would specifically be about shifting to explicitly recommending container images as a good approach to dealing with the challenges created by the lack of a good cross-distro way to described Linux ABI compatibility requirements. When even Red Hat, SUSE and Canonical are saying "Yeah, you know what? Just use containers and we'll take care of figuring out a way to deal with the resulting security, auditing and integrated system qualification consequences", that's a pretty strong hint that pursuing the "declarative platform requirements" approach may not actually be viable in the case of Linux. Such a feature wouldn't need to be specifically about linking to DockerHub (we could refer to something more generic like "reference Linux container images"), DockerHub is just currently the easiest way to publish them (Docker's a fully open source company, so external hosting is already possible for folks that are particularly keen to do so, but being on DockerHub integrates most easily with the standard docker client tools - similar to the situation with pip and other language ecosystem specific tools). This was a connection my brain made this morning as a result of the recent thread on Linux ABI compatibility declarations - if even the commercial Linux distros are essentially giving up on the "programmatically declare your platform ABI compatibility requirements" approach at the application layer in favour of "bundle-your-dependencies-while-still-supporting-a-decent-auditing-mechanism" (i.e. container images on backend servers or the xdg-app approach [2] for rich client applications), we might be well advised to follow their lead. I also think this is one of the key lessons being learned on the commercial Linux vendor side from the rise of Android as the dominant (by far) Linux client OS: the combination of "independently updated siloed applications" with "an integrated platform for running and updating siloed applications and allowing them to interoperate in a controlled fashion" is a model that really works well for a wide range of users, a much wider range than the "do-your-own-integration" adventure that has been the traditional Linux experience outside the world of formal commercial compatibility certification programs. Cheers, Nick. [1] http://azure.microsoft.com/blog/2015/01/08/introducing-docker-in-microsoft-azure-marketplace/ [2] https://wiki.gnome.org/Projects/SandboxedApps -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Sun Feb 1 18:04:59 2015 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 1 Feb 2015 09:04:59 -0800 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: > I agree that from an implementation perspective, this could just be a > new recommended URL in the project URLs metadata (e.g. "Reference > Container Images"). If folks don't think the idea sounds horrible, > I'll make that update to the PEP 459 draft. > wouldn't this just be a use case for a custom "Metadata Extension", and not something new to put into PEP459? I'm just imagining you won't cover everything that might be needed for automation down the road, and it will end up being an unused field in a few years. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Mon Feb 2 08:30:14 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 2 Feb 2015 17:30:14 +1000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: On 2 February 2015 at 03:04, Marcus Smith wrote: > >> I agree that from an implementation perspective, this could just be a >> new recommended URL in the project URLs metadata (e.g. "Reference >> Container Images"). If folks don't think the idea sounds horrible, >> I'll make that update to the PEP 459 draft. > > > wouldn't this just be a use case for a custom "Metadata Extension", and not > something new to put into PEP459? > I'm just imagining you won't cover everything that might be needed for > automation down the road, and it will end up being an unused field in a few > years. Yeah, that actually occurred to me this morning - nothing to change on the metadata front for now, experiment with metadata extensions once we get 426 out the door (I think the Warehouse migration and the PEP 458 package signing proposal are higher priority near term, with finalising 426 next on the todo list after that). As a possible interim approach to improving the situation, what do you think of my writing up a "Binary distribution for Linux" advanced topic? That could cover not only containers, but also the technique of "bundle a /opt virtualenv in a platform binary package" as well as actually creating native system packages (with varying degrees of distro policy compliance). Scientific Python users & publishers could also be nudged in the direction of the conda/binstar.org ecosystem. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guettliml at thomas-guettler.de Mon Feb 2 11:37:57 2015 From: guettliml at thomas-guettler.de (=?UTF-8?B?VGhvbWFzIEfDvHR0bGVy?=) Date: Mon, 02 Feb 2015 11:37:57 +0100 Subject: [Distutils] Docs: allowed characters in setuptools.setup(name='.....') Message-ID: <54CF5385.4070207@thomas-guettler.de> Hi, I could not find documentation about the allowed characters here: setuptools.setup(name='.....') Someone told me it here: https://github.com/pypa/pip/issues/2383#issuecomment-72034990 {{{ .... The precise rules on what's a valid egg_name aren't documented anywhere particularly obvious, unfortunately. }}} Since docs are important, I want this to change. Where should be the docs about the allowed characters in this place? Please don't provide details about which characters are allowed or not. This issue is about where the docs should be. Details later :-) Thomas G?ttler From donald at stufft.io Mon Feb 2 13:36:33 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 2 Feb 2015 07:36:33 -0500 Subject: [Distutils] Allowed characters in setuptools.setup(name='????') In-Reply-To: <54CB3972.6040800@thomas-guettler.de> References: <54CB3972.6040800@thomas-guettler.de> Message-ID: > On Jan 30, 2015, at 2:57 AM, Thomas G?ttler wrote: > > Hi, > > where is the reference of the allowed characters in the name argument of > setuptool.setup()? > > I could not find it. > It depends on what you mean by allowed. Previously the "name" was defined as allowing anything however setuptools "normalizes" non ascii numeric characters to "-". There is an unaccepted PEP[1] which says names are constrained to: * ASCII letters ( [a-zA-Z] ) * ASCII digits ( [0-9] ) * underscores ( _ ) * hyphens ( - ) * periods ( . 
) Now in PyPI previously names were allowed to be anything that didn't contain "/", however we've since changed that to match the constraints in the PEP. [1] https://www.python.org/dev/peps/pep-0426/#name --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Mon Feb 2 14:11:59 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 2 Feb 2015 13:11:59 +0000 Subject: [Distutils] Docs: allowed characters in setuptools.setup(name='.....') In-Reply-To: <54CF5385.4070207@thomas-guettler.de> References: <54CF5385.4070207@thomas-guettler.de> Message-ID: On 2 February 2015 at 10:37, Thomas G?ttler wrote: > I could not find documentation about the allowed characters here: > > setuptools.setup(name='.....') > > Someone told me it here: > https://github.com/pypa/pip/issues/2383#issuecomment-72034990 > > {{{ > .... > The precise rules on what's a valid egg_name aren't documented anywhere > particularly obvious, unfortunately. > }}} > > Since docs are important, I want this to change. > > Where should be the docs about the allowed characters in this place? > > Please don't provide details about which characters are allowed or not. > This issue is about where the docs should be. Details later :-) Wait - the documentation on what's a valid name *does* exist, it's in the metadata PEPs. I think you misunderstood what I said on that issue. The "name" argument for setup() is a project name, which is defined in PEP 426 and in https://packaging.python.org/en/latest/distributing.html#name The "#egg=" fragment in a URL is not very clearly documented, but I pointed you to the setuptools docs that cover it, which imply that using underscore in a #egg fragment won't work. Egg fragments are somewhat of a legacy feature (although there's no replacement at the moment AFAIK) so there's no PEP for them. As long as you use something in #egg that is equivalent to (in the PEP 426 sense) your project name from setup() you should be fine. You may want to pick a canonical form that you will use consistently wherever you use the project name, but nothing requires you to do that. There's also no clearly defined "lowest common denominator" canonical form that you should use everywhere. The best I've ever found is various of the setuptools safe_name and similar functions. It might be nice to have a documented "recommended canonical form" that said things like "always use underscores rather than dashes" or "use all lowercase". But there isn't one, and I'm not sure there's even a consensus. If there was, the section in the packaging user guide that I pointed to above would be a good place for it, I guess. The various historical uses of project names, and the different but related terminologies and usages are confusing, certainly. Also, some areas are to a greater or lesser extent "legacy" and so don't follow the Metadata 2.0 ideas particularly closely (and never will), which makes things even worse. Apologies if I made any of this more confusing rather than less. Paul From p.f.moore at gmail.com Mon Feb 2 14:15:27 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 2 Feb 2015 13:15:27 +0000 Subject: [Distutils] Allowed characters in setuptools.setup(name='????') In-Reply-To: <54CB3972.6040800@thomas-guettler.de> References: <54CB3972.6040800@thomas-guettler.de> Message-ID: On 30 January 2015 at 07:57, Thomas G?ttler wrote: > where is the reference of the allowed characters in the name argument of > setuptool.setup()? > > I could not find it. 
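(As a concrete illustration of the constraints Donald listed, the current
PEP 426 draft expresses them as a single case-insensitive regular expression.
A minimal sketch - the pattern is copied from the draft PEP, so treat it as
subject to change until the PEP is accepted:)

    import re

    # ASCII letters, digits, '.', '_' and '-' only, and the name must
    # start and end with a letter or digit (PEP 426 draft rules).
    _NAME_RE = re.compile(r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$",
                          re.IGNORECASE)

    def is_valid_project_name(name):
        return _NAME_RE.match(name) is not None

    is_valid_project_name("my-package.ext")   # True
    is_valid_project_name("my package!")      # False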
https://packaging.python.org/en/latest/distributing.html#name which refers to PEP 426 (which Donald quoted). Paul From guettliml at thomas-guettler.de Mon Feb 2 16:22:12 2015 From: guettliml at thomas-guettler.de (=?UTF-8?B?VGhvbWFzIEfDvHR0bGVy?=) Date: Mon, 02 Feb 2015 16:22:12 +0100 Subject: [Distutils] Allowed characters in setuptools.setup(name='????') In-Reply-To: References: <54CB3972.6040800@thomas-guettler.de> Message-ID: <54CF9624.5070307@thomas-guettler.de> Am 02.02.2015 um 14:15 schrieb Paul Moore: > On 30 January 2015 at 07:57, Thomas G?ttler > wrote: >> where is the reference of the allowed characters in the name argument of >> setuptool.setup()? >> >> I could not find it. > > https://packaging.python.org/en/latest/distributing.html#name which > refers to PEP 426 (which Donald quoted). > Paul > Thank you very much for this link. I searched for the docs here, since it is **setuptools**.setup() https://pythonhosted.org/setuptools/setuptools.html Is your link the official documentation of the method setuptools.setup()? Thomas From p.f.moore at gmail.com Mon Feb 2 16:37:54 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 2 Feb 2015 15:37:54 +0000 Subject: [Distutils] Allowed characters in setuptools.setup(name='????') In-Reply-To: <54CF9624.5070307@thomas-guettler.de> References: <54CB3972.6040800@thomas-guettler.de> <54CF9624.5070307@thomas-guettler.de> Message-ID: On 2 February 2015 at 15:22, Thomas G?ttler wrote: > Thank you very much for this link. I searched for the docs here, since it is > **setuptools**.setup() > > https://pythonhosted.org/setuptools/setuptools.html > > Is your link the official documentation of the method setuptools.setup()? The Packaging User Guide is intended as the starting point for any packaging questions, and should be definitive in what it says. But it's not intended to be complete, and for much of the detail you need to refer to individual projects' documentation. That's what the comment For more reference material, see Building and Distributing Packages in the setuptools docs, but note that some advisory content there may be outdated. In the event of conflicts, prefer the advice in the Python Packaging User Guide. in https://packaging.python.org/en/latest/distributing.html is intended to convey. Paul From erik.m.bray at gmail.com Mon Feb 2 23:10:28 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 2 Feb 2015 17:10:28 -0500 Subject: [Distutils] Closing the Delete File + Re-upload File Loophole. In-Reply-To: <7201094C-8CC0-4BF7-8E4E-DCC0E5AD7C34@gmail.com> References: <255CD6EE-CC48-4C5B-A417-4191D9DFEDF4@stufft.io> <7201094C-8CC0-4BF7-8E4E-DCC0E5AD7C34@gmail.com> Message-ID: On Sat, Jan 24, 2015 at 1:53 PM, Marc Abramowitz wrote: >> You can re-run register as many times as you want which is all you need to adjust the README. > > Maybe true but it would be pretty awesome to solve https://bitbucket.org/pypa/pypi/issue/161/rest-formatting-fails-and-there-is-no-way because repeatedly registering and doing trial and error is also not a great experience and it wastes PyPI resources. I usually register on testpypi.python.org to assuage this fear. Granted it's a little annoying to do, because last I checked the only way to do this (may this is has been fixed--someone can correct me) is to change your .pypirc with credentials to testpypi. Has .pypirc been done away with yet? 
Erik From erik.m.bray at gmail.com Mon Feb 2 23:19:10 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 2 Feb 2015 17:19:10 -0500 Subject: [Distutils] =?utf-8?q?Python_module_for_use_in_=E2=80=98setup=2Ep?= =?utf-8?q?y=E2=80=99_but_not_to_install?= In-Reply-To: <54CD38DB.9030405@stoneleaf.us> References: <85a91gud80.fsf@benfinney.id.au> <54CB0012.2020203@stoneleaf.us> <85mw50j241.fsf@benfinney.id.au> <54CD38DB.9030405@stoneleaf.us> Message-ID: On Sat, Jan 31, 2015 at 3:19 PM, Ethan Furman wrote: > On 01/29/2015 08:58 PM, Ben Finney wrote: >> Ethan Furman writes: >> >>> However, I feel that requiring a dependency simply for the >>> installation of the main package (the main package doesn't actually >>> use this install-dependency, correct?) is heavy-handed and should be >>> avoided. >> >> It is ideally a build-time dependency and not an install-time >> dependency, but I'm having a difficult time figuring out how to >> distinguish those so Setuptools will pay attention. > > Ah, so it's needed with the sdist command and not the install command? Yeah, you definitely have my sympathies with > that one. Too bad there's no easy way to specify behavior based on the command used... (hopefully someone will now tell > how I am wrong about that ;) . We just subclass the command in question to do the relevant tasks (like generate some source files when running sdist). The subclassed command can just try to import the dependencies and raise a (useful) exception message if they are not available. For example Astropy recently grew Jinja2 is a dependency to build some source files. The built files are included in the sdist so casual users don't have to worry about it. Developers on the other hand will be nagged that they need Jinja2, and it isn't automatically installed, which sucks. But for the most part the only people who will need to worry about that can figure out how to install build dependencies. I'm less worried about it for them than I am for the casual user trying to run `python setup.py install` (or pip install for that matter). Erik From xav.fernandez at gmail.com Mon Feb 2 23:35:58 2015 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Mon, 2 Feb 2015 23:35:58 +0100 Subject: [Distutils] Closing the Delete File + Re-upload File Loophole. In-Reply-To: References: <255CD6EE-CC48-4C5B-A417-4191D9DFEDF4@stufft.io> <7201094C-8CC0-4BF7-8E4E-DCC0E5AD7C34@gmail.com> Message-ID: Not sure if it fits your bill (or if it works, since I did not know how testpypi) but you can put something like that in your .pypirc: [distutils] index-servers = pypi testpypi [pypi] username:public_pypi_login_if_needed password:public_pypi_password_if_needed [testpypi] repository:https://testpypi.python.org username:testpypi_login password:testpypi_password And specify which repository to use when registering/uploading via the --repository (-r) option. On Mon, Feb 2, 2015 at 11:10 PM, Erik Bray wrote: > On Sat, Jan 24, 2015 at 1:53 PM, Marc Abramowitz > wrote: > >> You can re-run register as many times as you want which is all you need > to adjust the README. > > > > Maybe true but it would be pretty awesome to solve > https://bitbucket.org/pypa/pypi/issue/161/rest-formatting-fails-and-there-is-no-way > because repeatedly registering and doing trial and error is also not a > great experience and it wastes PyPI resources. > > I usually register on testpypi.python.org to assuage this fear. 
> Granted it's a little annoying to do, because last I checked the only > way to do this (may this is has been fixed--someone can correct me) is > to change your .pypirc with credentials to testpypi. > > Has .pypirc been done away with yet? > > Erik > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Tue Feb 3 07:01:36 2015 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 2 Feb 2015 22:01:36 -0800 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: > As a possible interim approach to improving the situation, what do you > think of my writing up a "Binary distribution for Linux" advanced > topic? That could cover not only containers, but also the technique of > "bundle a /opt virtualenv in a platform binary package" as well as > actually creating native system packages (with varying degrees of > distro policy compliance). sounds great to me fwiw, I have a project *idea* going to address this problem as well, although it's still vaporware right now. http://pyospkg.readthedocs.org/en/latest/overview.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Feb 3 09:19:05 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 3 Feb 2015 18:19:05 +1000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: On 3 February 2015 at 16:01, Marcus Smith wrote: > >> As a possible interim approach to improving the situation, what do you >> think of my writing up a "Binary distribution for Linux" advanced >> topic? That could cover not only containers, but also the technique of >> "bundle a /opt virtualenv in a platform binary package" as well as >> actually creating native system packages (with varying degrees of >> distro policy compliance). > > > sounds great to me > fwiw, I have a project *idea* going to address this problem as well, > although it's still vaporware right now. > > http://pyospkg.readthedocs.org/en/latest/overview.html Oh, very nice. I'm going to ping a few folks about taking a look at that :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Feb 3 09:50:24 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 3 Feb 2015 18:50:24 +1000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: Message-ID: On 3 Feb 2015 18:36, "Ionel Cristian M?rie?" wrote: > > Wouldn't this be a good time to discuss other types of URLs? > > Eg: > > * Documentation > * CI status (in the many different flavors) > * Issue tracker > * Forum/mailing list > * Support place > > After all, there's nothing special about docker images - every project is going to have a different special thing at some URL. That's already part of the proposed project metadata extension in PEP 459. Cheers, Nick. > > > Thanks, > -- Ionel Cristian M?rie?, blog.ionelmc.ro > > On Sun, Feb 1, 2015 at 2:43 AM, Nick Coghlan wrote: >> >> One of the recurring problems folks mention here is how to deal with >> the complexities of handling Linux ABI compatibility issues. 
>> >> That's a genuinely hard problem, and not one that *anyone* has solved >> well - it's one of the reasons being an independent software vendor >> for Linux in general (rather than just certifying with the major >> commercial distros) is a pain. When folks do it, they tend to take the >> "bundle everything you need and drop it somewhere in /opt" approach >> which (quite rightly) makes professional system administrators very >> unhappy. >> >> On the distro side, this is one of the big factors driving the >> popularity of the "bundle all the things" container image model: it >> does the bundling in such a way that it's amenable to programmatic >> introspection, and it still reduces the runtime ABI compatibility >> question to just the kernel ABI. This tends to work really when in the >> case of dynamic languages like Python, as the language runtime is >> likely to deal with most kernel compatibility issues for you. (ABI >> incompatibilities can still bite you if you're using system libraries >> inside the container and your base image doesn't match your runtime >> kernel, but the bug surface is still much smaller than when you use >> the end user's system libraries directly) >> >> It seems to me that, at least for web services published via PyPI >> (like Kallithea), "use our recommended container", is likely to be the >> easiest way to get folks on Linux up and running quickly with the >> service. Folks may still want to take the image apart later and roll >> their own (e.g. to switch to running on a different web server or a >> different base image), but they wouldn't have to do their own >> integration work just to get started. >> >> The other advantage of nudging folks in the direction of Linux >> containers to address their ABI compatibility woes is that this is >> tech that already (mostly) works, and has a broader management >> ecosystem growing around it (including both the major open source >> platform-as-a-service offerings in OpenShift and Cloud Foundry). >> Inventing our own way of abstracting away the Linux ABI compatibility >> problem would be an awful lot of work, and likely leave us with an end >> result that isn't pre-integrated with anything else. >> >> Regards, >> Nick. >> >> P.S. Full disclosure: for Fedora's developer experience work for web >> service developers, we're heading heavily in the direction of >> containers+Vagrant for local dev workstations, to allow common dev >> workflows across Linux, Mac OS X and Windows, and then pushing the >> containers through Linux based CI and independent QE workflows, into >> container based production Linux environments, including the Google & >> Red Hat backed Kubernetes container orchestration framework and >> OpenStack's Project Solum. In my day job, this is also the direction >> we're taking Red Hat's internal infrastructure since it systematically >> solves a variety of problems for us (like how to most effectively >> allow folks to develop on Fedora while deploying on RHEL). >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Tue Feb 3 10:10:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 3 Feb 2015 09:10:14 +0000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: On 1 February 2015 at 03:54, Nick Coghlan wrote: > I agree that from an implementation perspective, this could just be a > new recommended URL in the project URLs metadata (e.g. "Reference > Container Images"). If folks don't think the idea sounds horrible, > I'll make that update to the PEP 459 draft. Unless I'm missing something, there aren't any particular recommended URLs in PEP 459 (unless you mean the ones in the example given in the PEP). But yeah, I guess this is a reasonable idea (meaningless to me as a Windows user, but I've no objections :-)) Paul From ncoghlan at gmail.com Tue Feb 3 12:08:46 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 3 Feb 2015 21:08:46 +1000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: On 3 Feb 2015 19:10, "Paul Moore" wrote: > > On 1 February 2015 at 03:54, Nick Coghlan wrote: > > I agree that from an implementation perspective, this could just be a > > new recommended URL in the project URLs metadata (e.g. "Reference > > Container Images"). If folks don't think the idea sounds horrible, > > I'll make that update to the PEP 459 draft. > > Unless I'm missing something, there aren't any particular recommended > URLs in PEP 459 (unless you mean the ones in the example given in the > PEP). But yeah, I guess this is a reasonable idea (meaningless to me > as a Windows user, but I've no objections :-)) Microsoft actually support Linux containers on Azure and are working with Docker to support Windows containers on Windows. Everybody gets containers! :) Cheers, Nick. > > Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Feb 3 12:28:31 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 3 Feb 2015 11:28:31 +0000 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: <9929673A-172D-43B9-BBD3-69FDDA109474@stufft.io> Message-ID: On 3 February 2015 at 11:08, Nick Coghlan wrote: > Microsoft actually support Linux containers on Azure and are working with > Docker to support Windows containers on Windows. Everybody gets containers! > :) Cool, can't wait :-) At the moment, the Docker install for Windows includes VirtualBox and runs stuff in that, which seems a bit pointless to me (why not just install a Linux VM and run docker natively on that?) So I haven't looked much at Docker. But if it runs natively on Windows, it might be worth a look. Paul From ben+python at benfinney.id.au Wed Feb 4 10:41:08 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 04 Feb 2015 20:41:08 +1100 Subject: [Distutils] Upload signature (and signing key) after package upload Message-ID: <85a90uggjv.fsf@benfinney.id.au> Howdy all, How can I upload an OpenPGP signature (and the signing key) for a version, after the upload of the distribution is complete? I have recently been informed of the ?--sign? and ?--identity? options to the ?upload? command. As described here: Signing a package is easy and it is done as part of the upload process to PyPI. [?] 
Can it be done, not ?as part of the upload process?, but subsequent to the upload of the distribution? How? -- \ ?Try adding ?as long as you don't breach the terms of service ? | `\ according to our sole judgement? to the end of any cloud | _o__) computing pitch.? ?Simon Phipps, 2010-12-11 | Ben Finney From contact at ionelmc.ro Tue Feb 3 09:36:03 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 3 Feb 2015 10:36:03 +0200 Subject: [Distutils] Idea: allow PyPI projects to link to DockerHub container images In-Reply-To: References: Message-ID: Wouldn't this be a good time to discuss other types of URLs? Eg: * Documentation * CI status (in the many different flavors) * Issue tracker * Forum/mailing list * Support place After all, there's nothing special about docker images - every project is going to have a different special thing at some URL. Thanks, -- Ionel Cristian M?rie?, blog.ionelmc.ro On Sun, Feb 1, 2015 at 2:43 AM, Nick Coghlan wrote: > One of the recurring problems folks mention here is how to deal with > the complexities of handling Linux ABI compatibility issues. > > That's a genuinely hard problem, and not one that *anyone* has solved > well - it's one of the reasons being an independent software vendor > for Linux in general (rather than just certifying with the major > commercial distros) is a pain. When folks do it, they tend to take the > "bundle everything you need and drop it somewhere in /opt" approach > which (quite rightly) makes professional system administrators very > unhappy. > > On the distro side, this is one of the big factors driving the > popularity of the "bundle all the things" container image model: it > does the bundling in such a way that it's amenable to programmatic > introspection, and it still reduces the runtime ABI compatibility > question to just the kernel ABI. This tends to work really when in the > case of dynamic languages like Python, as the language runtime is > likely to deal with most kernel compatibility issues for you. (ABI > incompatibilities can still bite you if you're using system libraries > inside the container and your base image doesn't match your runtime > kernel, but the bug surface is still much smaller than when you use > the end user's system libraries directly) > > It seems to me that, at least for web services published via PyPI > (like Kallithea), "use our recommended container", is likely to be the > easiest way to get folks on Linux up and running quickly with the > service. Folks may still want to take the image apart later and roll > their own (e.g. to switch to running on a different web server or a > different base image), but they wouldn't have to do their own > integration work just to get started. > > The other advantage of nudging folks in the direction of Linux > containers to address their ABI compatibility woes is that this is > tech that already (mostly) works, and has a broader management > ecosystem growing around it (including both the major open source > platform-as-a-service offerings in OpenShift and Cloud Foundry). > Inventing our own way of abstracting away the Linux ABI compatibility > problem would be an awful lot of work, and likely leave us with an end > result that isn't pre-integrated with anything else. > > Regards, > Nick. > > P.S. 
Full disclosure: for Fedora's developer experience work for web > service developers, we're heading heavily in the direction of > containers+Vagrant for local dev workstations, to allow common dev > workflows across Linux, Mac OS X and Windows, and then pushing the > containers through Linux based CI and independent QE workflows, into > container based production Linux environments, including the Google & > Red Hat backed Kubernetes container orchestration framework and > OpenStack's Project Solum. In my day job, this is also the direction > we're taking Red Hat's internal infrastructure since it systematically > solves a variety of problems for us (like how to most effectively > allow folks to develop on Fedora while deploying on RHEL). > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmiscml at gmail.com Sat Feb 7 15:15:23 2015 From: pmiscml at gmail.com (Paul Sokolovsky) Date: Sat, 7 Feb 2015 16:15:23 +0200 Subject: [Distutils] Distribution latest version permalink? Message-ID: <20150207161523.7cba6832@x230> Hello, Knowing a distribution name, I'd like to download its latest release file - as a single step, for easy shell scripting. Is that possible? For example: https://pypi.python.org/pypi/dry/ gives a link to https://pypi.python.org/packages/source/d/dry/dry-0.0.4.tar.gz . Any chance to have a URL like https://pypi.python.org/packages/source/d/dry/dry.tar.gz which would download whatever is set as current release? Thanks, Paul mailto:pmiscml at gmail.com From techtonik at gmail.com Thu Feb 5 08:28:43 2015 From: techtonik at gmail.com (anatoly techtonik) Date: Thu, 5 Feb 2015 10:28:43 +0300 Subject: [Distutils] Google Auth is broken for PyPI Message-ID: Hi, Attempt to authorize with Google gives "Something wen wrong" page shown at https://pypi.python.org/pypi?:action=login&provider=Google Please, CC. -- anatoly t. From ben+python at benfinney.id.au Sat Feb 7 21:38:27 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 08 Feb 2015 07:38:27 +1100 Subject: [Distutils] Distribution latest version permalink? References: <20150207161523.7cba6832@x230> Message-ID: <85y4o9e9to.fsf@benfinney.id.au> Paul Sokolovsky writes: > Knowing a distribution name, I'd like to download its latest release > file - as a single step, for easy shell scripting. Is that possible? There's not really such a thing as ?the latest release file?. Each distribution can comprise many files; various wheels, eggs, the source in various formats, etc. Which files are available is different between distributions. So a program can't know, just based on a distribution's name, which files will be available for download. You need to query PyPI and then choose a URL from what's available. There is a page for each distribution with links to each file for download. (This is known as the ?simple? PyPI API.) For example, given the distribution name ?pip?, you can get the page and parse that for links. You'll need to decide what the ?latest? is according to the same rules as PyPI itself uses. Python's Setuptools library has functions to compare version strings to determine which is later. Alternatively, you can parse the user-facing page at which shows the latest version of the ?pip? distribution. 
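(Coming back to the "simple" index route above, a rough sketch of the whole
dance: fetch the /simple/<name>/ page, pull out the sdist links, and let
Setuptools decide which version is newest via pkg_resources.parse_version().
The link-scraping regex is deliberately naive - it assumes
"<name>-<version>.tar.gz" filenames, which plenty of projects won't match -
so treat it as an illustration of the approach rather than a robust tool:)

    import re
    import urllib2                # Python 2; use urllib.request on Python 3
    from urlparse import urljoin  # Python 2; urllib.parse.urljoin on Python 3

    from pkg_resources import parse_version

    def latest_sdist_url(name):
        index = 'https://pypi.python.org/simple/%s/' % name
        page = urllib2.urlopen(index).read()
        candidates = []
        for link in re.findall(r'href="([^"]+)"', page):
            link = link.split('#')[0]  # drop any "#md5=..." fragment
            match = re.search(r'%s-(.+?)\.tar\.gz$' % re.escape(name), link)
            if match:
                candidates.append((parse_version(match.group(1)), link))
        if not candidates:
            return None
        # parse_version() knows, for example, that "0.0.4" > "0.0.4.dev1"
        return urljoin(index, max(candidates)[1])

    # e.g. latest_sdist_url('dry') should point at .../d/dry/dry-0.0.4.tar.gz

The same version-sorting step applies if you scrape the user-facing page
instead.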
The table listing the files for download doesn't have semantic markup, and I'm fairly sure no promises are made to support web scraping. But it's feasible without much effort. I would recommend using one of the PyPI API pages (the ?simple? API is what I described, I don't know how to use any of the others) since at least some guarantee is made of their stability. -- \ ?I have a map of the United States; it's actual size. It says | `\ ?1 mile equals 1 mile?. Last summer, I folded it.? ?Steven | _o__) Wright | Ben Finney From p.f.moore at gmail.com Sat Feb 7 22:17:58 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 Feb 2015 21:17:58 +0000 Subject: [Distutils] Distribution latest version permalink? In-Reply-To: <85y4o9e9to.fsf@benfinney.id.au> References: <20150207161523.7cba6832@x230> <85y4o9e9to.fsf@benfinney.id.au> Message-ID: On 7 February 2015 at 20:38, Ben Finney wrote: > I would recommend using one of the PyPI API pages (the ?simple? API is > what I described, I don't know how to use any of the others) since at > least some guarantee is made of their stability. There's also the JSON page at https://pypi.python.org/pypi//json which includes details of all available versions and files. Paul From richard at python.org Mon Feb 9 00:37:01 2015 From: richard at python.org (Richard Jones) Date: Sun, 08 Feb 2015 23:37:01 +0000 Subject: [Distutils] Google Auth is broken for PyPI References: Message-ID: Google has discontinued support for OpenID, so we're not going to be putting any effort into debugging this issue. Richard On Sun Feb 08 2015 at 6:09:22 AM anatoly techtonik wrote: > Hi, > > Attempt to authorize with Google gives "Something wen wrong" page > shown at https://pypi.python.org/pypi?:action=login&provider=Google > > Please, CC. > -- > anatoly t. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From randy at thesyrings.us Mon Feb 9 14:55:03 2015 From: randy at thesyrings.us (Randy Syring) Date: Mon, 09 Feb 2015 08:55:03 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: Message-ID: <54D8BC37.2040808@thesyrings.us> How about removing the Google icon from the UI so as to avoid confusion? *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ On 02/08/2015 06:37 PM, Richard Jones wrote: > Google has discontinued support for OpenID, so we're not going to be > putting any effort into debugging this issue. > > > Richard > > On Sun Feb 08 2015 at 6:09:22 AM anatoly techtonik > > wrote: > > Hi, > > Attempt to authorize with Google gives "Something wen wrong" page > shown at https://pypi.python.org/pypi?:action=login&provider=Google > > Please, CC. > -- > anatoly t. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From contact at ionelmc.ro Mon Feb 9 07:02:35 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Mon, 9 Feb 2015 08:02:35 +0200 Subject: [Distutils] Distribution latest version permalink? In-Reply-To: References: <20150207161523.7cba6832@x230> <85y4o9e9to.fsf@benfinney.id.au> Message-ID: I suppose it wouldn't be too hard to make a redirection service that reads up the json and redirects to the file you want. Eg, you could use appengine for free (I suppose you'll be under the quota for few http requests). Thanks, -- Ionel Cristian M?rie?, blog.ionelmc.ro On Sat, Feb 7, 2015 at 11:17 PM, Paul Moore wrote: > On 7 February 2015 at 20:38, Ben Finney > wrote: > > I would recommend using one of the PyPI API pages (the ?simple? API is > > what I described, I don't know how to use any of the others) since at > > least some guarantee is made of their stability. > > There's also the JSON page at > https://pypi.python.org/pypi//json which includes details of > all available versions and files. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokoproject at gmail.com Mon Feb 9 05:48:36 2015 From: gokoproject at gmail.com (John Wong) Date: Sun, 8 Feb 2015 23:48:36 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: Message-ID: What if we just remove that as an option? > *Important:* Google has deprecated OpenID 2.0 and will shut it down after a migration period. If your app uses OpenID 2.0, you must migrate your app by the shutdown date April 20, 2015 But what happen to existing users who registered with OpenID? Is PyPI planning to migrate to OAuth 2.0 (and what about warehouse?) Thanks. John On Sun, Feb 8, 2015 at 6:37 PM, Richard Jones wrote: > Google has discontinued support for OpenID, so we're not going to be > putting any effort into debugging this issue. > > > Richard > > On Sun Feb 08 2015 at 6:09:22 AM anatoly techtonik > wrote: > >> Hi, >> >> Attempt to authorize with Google gives "Something wen wrong" page >> shown at https://pypi.python.org/pypi?:action=login&provider=Google >> >> Please, CC. >> -- >> anatoly t. >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Feb 9 20:42:08 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 9 Feb 2015 14:42:08 -0500 Subject: [Distutils] Google Auth is broken for PyPI References: Message-ID: <20150209144208.760f23b6@anarchist.wooz.org> On Feb 08, 2015, at 11:37 PM, Richard Jones wrote: >Google has discontinued support for OpenID, so we're not going to be >putting any effort into debugging this issue. I hope you'll continue to support other OpenID providers, e.g. Launchpad. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From brett at python.org Tue Feb 10 15:27:47 2015 From: brett at python.org (Brett Cannon) Date: Tue, 10 Feb 2015 14:27:47 +0000 Subject: [Distutils] Google Auth is broken for PyPI References: <20150209144208.760f23b6@anarchist.wooz.org> Message-ID: On Mon Feb 09 2015 at 2:42:46 PM Barry Warsaw wrote: > On Feb 08, 2015, at 11:37 PM, Richard Jones wrote: > > >Google has discontinued support for OpenID, so we're not going to be > >putting any effort into debugging this issue. > > I hope you'll continue to support other OpenID providers, e.g. Launchpad. > Does Launchpad support OpenID Connect? If so then migrating to that would solve this. Otherwise it may be time to have a serious look at our federated login support and consider adding GitHub and Bitbucket for wider reach as OpenID 1.0/2.0 is extremely niche at this point. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Feb 10 15:36:13 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 Feb 2015 09:36:13 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> Message-ID: <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> > On Feb 10, 2015, at 9:27 AM, Brett Cannon wrote: > > > > On Mon Feb 09 2015 at 2:42:46 PM Barry Warsaw > wrote: > On Feb 08, 2015, at 11:37 PM, Richard Jones wrote: > > >Google has discontinued support for OpenID, so we're not going to be > >putting any effort into debugging this issue. > > I hope you'll continue to support other OpenID providers, e.g. Launchpad. > > Does Launchpad support OpenID Connect? If so then migrating to that would solve this. > > Otherwise it may be time to have a serious look at our federated login support and consider adding GitHub and Bitbucket for wider reach as OpenID 1.0/2.0 is extremely niche at this point. Honestly, I?d rather have less federated login not more. I wish the current OpenID support had never been added. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue Feb 10 16:46:45 2015 From: barry at python.org (Barry Warsaw) Date: Tue, 10 Feb 2015 10:46:45 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> Message-ID: <20150210104645.59c2d774@limelight.wooz.org> On Feb 10, 2015, at 02:27 PM, Brett Cannon wrote: >Does Launchpad support OpenID Connect? If so then migrating to that would >solve this. No, I don't believe so. I've just filed this bug: https://bugs.launchpad.net/launchpad/+bug/1420348 >Otherwise it may be time to have a serious look at our federated login >support and consider adding GitHub and Bitbucket for wider reach as OpenID >1.0/2.0 is extremely niche at this point. I'd prefer not to *remove* OpenID support if it's still working, which it is. And while I think GH and BB support might be useful, it's important to remember that those are both closed systems, so I would put them on par with Google, Facebook, or Yahoo! SSO. Launchpad at least has the advantage of being free software, so anyone with sufficient motivation and resources could contribution the needed support. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From martin at v.loewis.de Tue Feb 10 17:23:21 2015 From: martin at v.loewis.de (=?windows-1252?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 10 Feb 2015 17:23:21 +0100 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> Message-ID: <54DA3079.6030707@v.loewis.de> Am 10.02.15 um 15:36 schrieb Donald Stufft: > Honestly, I?d rather have less federated login not more. I wish the current OpenID support had never been added. > Can you please elaborate on that position? Why is it useful to have separate accounts on separate systems? Regards, Martin From donald at stufft.io Tue Feb 10 18:33:29 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 Feb 2015 12:33:29 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: <54DA3079.6030707@v.loewis.de> References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> Message-ID: > On Feb 10, 2015, at 11:23 AM, Martin v. L?wis wrote: > > Am 10.02.15 um 15:36 schrieb Donald Stufft: >> Honestly, I?d rather have less federated login not more. I wish the current OpenID support had never been added. >> > > Can you please elaborate on that position? Why is it useful to have > separate accounts on separate systems? Sure. So the basic premise behind federated auth is that you can get a single set of credentials on all (or most) of your sites and eliminate the need to have a password for each site you visit. My opinion is basically influenced by a number of factors: 1. I feel like the goal of federated auth has failed in general and is unlikely to ever succeed. As a user of websites I have over 400 different entries in my password manager, even if 50% of them implement federated auth (which I feel like is a high number but that's not backed by math, just gut feeling) that's still over 200 entries I need to maintain in my password manager. In this case federated auth has not meaningfully reduced the burden of maintaining password for me since maintaining 200 isn't any easier than 400 and instead it just complicates my login flow 2. As a site operator I feel like authentication is a core part of the experience of using my site and by allowing federated auth on my site I'm giving up control over that user flow. A relevant example from PyPI is that a number of users signed up using MyOpenID which is no longer being maintained. This means that either PyPI has to tell those people "tough shit" or PyPI needs to figure out a mitigation tactic against that. Another example is that launchpad randomly starts failing for people, and it'll fail consistently for the same person until it just stops failing for them. I'm unable to actually reproduce this error so it's extremely hard for me to do anything else but shrug and tell them not to use it. 3. I feel like unless you solely rely on federated auth, then federated auth is always going to be a second class citizen for any particular website. For instance Travis CI uses federated auth via Github only, but that's the only thing they support for authentication so everything works well with that. 
On the other hand a number of sites support federated auth ontop of local accounts and federated auth is almost always worse in some ways, sometimes as simple as the username you get is kinda crappy (dstufft_) sometimes some features don't work (or don't work very well) at all like on PyPI where we need to authenticate people outside of a web context so if we don't have usernames/passwords then we end up needing to require the user to register a secondary "api password" or API key. 4. I feel like none of the current solutions to federated auth are very good. OpenID relies on using an URL as your "personal identifier" which I feel like is a strange and foreign concept to most users. The way around this is often to just hardcode a list of sites, but then as a site operator you're implicitly recommending that users go sign up for one of those sites and use them on your site to login. This is creating an explicit relationship between your site and the other site, a relationship in which you often have no power (for instance, Google <-> PyPI, we're powerless to do anything about them deprecating OpenID other than just sucking it up and dealing with it). Persona did offer a way around this, but persona had other failings like relying on the domain that you happened to be using for your email to implement a persona IdP or otherwise falling back to an implicit relationship with the fallback provider, again one where you're more or less powerless to the operators of that service. Overall I think that the use of federated auth, as a site operator, is really only worth it over the loss of control in two scenarios: A. When your site is already entwined with another site and relying on them for authentication is simply increasing that. An example of this from above is Travis CI where they only work with things hosted on GitHub so also relying on GitHub for authentication isn't that big of a deal and actually makes things better since they can then integrate with GitHub's permissions to check if you have commit on a particular repository. B. When creating an account is likely to be enough of a burden to make people decide not to interact with your site. This category is basically completely comprised of sites that do not have long standing relationships with their users. The only real example I can think of this of the top of my head is sites with comments enabled like blogs, news sites, etc. The commentors are unlikely to have or want a long standing relationship with your site, they just want to make a quick one off comment and then possibly never come back. Sites like PyPI otherhand the cost of creating an account is small compared to the life time of majority of our user base's interaction with us. A key thing to me, as a site operator, is keeping as much control over the experience of my users as I can. Obviously I have to outsource some things because It's not reasonable for me to make my own hardware, write my own drivers, my own kernel, my own OS, my own webserver etc. A good example of a major outsourcing that I was involved in was moving things behind Fastly. However a key difference between that outsourcing and this outsourcing is that if things go sour with Fastly or we need to migrate away from them for one reason or another we can do that without end users needing to change much or anything. However if something like Google dropping OpenID supports happens then the users who relied on that are out of luck and our ability to shield them from the fallout of that is limited. 
At this point we already have it enabled, so unless someone comes up with a really good migration strategy I doubt we'll be able to get rid of it. However, for the reasons above, I'm pretty much against adding *additional* federated auth things, and I think that we should treat it more as a legacy thing and downplay the fact we have support for it. Bitbucket has downplayed support for random OpenID as well; when you go to their login pages it shows a login form that looks like http://d.stufft.io/image/1O2l2g073h0h, which still lets you login with OpenID but it's muted and downplayed.

In a slightly hypocritical view point, I actually think that at some point we should get something like id.python.org which is an IdP and switch all of the *.python.org sites to authenticate against that instead of keeping local user accounts. This would reduce the number of passwords that Python inflicts on people but it still keeps authentication within our (PSF/Python/whatever)'s control. This is more along the lines of implementing SSO using a federated auth technology than actual federated auth though. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From ethan at stoneleaf.us Tue Feb 10 19:00:45 2015 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 10 Feb 2015 10:00:45 -0800 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> Message-ID: <54DA474D.7000601@stoneleaf.us> On 02/10/2015 09:33 AM, Donald Stufft wrote: > > In a slightly hypocritical view point, I actually think that at some point we > should get something like id.python.org which is an IdP and switch all of the > *.python.org sites to authenticate against that instead of keeping local > user accounts. I don't think this is hypocritical at all, because -- > [...] it still keeps authentication within our (PSF/Python/whatever)'s > control. And implementing our own global login system is vastly different from relying on XYZ's authentication services. And, yes, it would be nice to have (and, no, I'm not volunteering ;) . -- ~Ethan~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL:

From martin at v.loewis.de Tue Feb 10 19:06:44 2015 From: martin at v.loewis.de (=?windows-1252?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 10 Feb 2015 19:06:44 +0100 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> Message-ID: <54DA48B4.3090607@v.loewis.de> On 10.02.15 at 18:33, Donald Stufft wrote: >> Can you please elaborate on that position? Why is it useful to have >> separate accounts on separate systems? > > Sure. Thanks! Just one comment - without the desire to get into a long-winded discussion. > 1. I feel like the goal of federated auth has failed in general and is unlikely > to ever succeed. As a user of websites I have over 400 different entries in > my password manager, even if 50% of them implement federated auth (which I > feel like is a high number but that's not backed by math, just gut feeling) > that's still over 200 entries I need to maintain in my password manager.
In > this case federated auth has not meaningfully reduced the burden of > maintaining password for me since maintaining 200 isn't any easier than 400 > and instead it just complicates my login flow

I think this is your personal usage primarily. A lot of users just avoid having to use a password manager, and use the same password on many systems. (Of course, many people also *do* use different passwords, and some also use password managers) Regards, Martin

From donald at stufft.io Tue Feb 10 19:54:00 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 Feb 2015 13:54:00 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: <54DA48B4.3090607@v.loewis.de> References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> <54DA48B4.3090607@v.loewis.de> Message-ID: > On Feb 10, 2015, at 1:06 PM, Martin v. Löwis wrote: > > On 10.02.15 at 18:33, Donald Stufft wrote: >>> Can you please elaborate on that position? Why is it useful to have >>> separate accounts on separate systems? >> >> Sure. > > Thanks! Just one comment - without the desire to get into a long-winded > discussion. > >> 1. I feel like the goal of federated auth has failed in general and is unlikely >> to ever succeed. As a user of websites I have over 400 different entries in >> my password manager, even if 50% of them implement federated auth (which I >> feel like is a high number but that's not backed by math, just gut feeling) >> that's still over 200 entries I need to maintain in my password manager. In >> this case federated auth has not meaningfully reduced the burden of >> maintaining password for me since maintaining 200 isn't any easier than 400 >> and instead it just complicates my login flow > > I think this is your personal usage primarily. A lot of users just avoid > having to use a password manager, and use the same password on many > systems. (Of course, many people also *do* use different passwords, and > some also use password managers)

Sure! Lots of people do absolutely just re-use passwords. Though I don't think many of those same users are likely to be (knowingly at least) using OpenID. They're more likely to use the "Sign in With X" buttons where X is something like Google, Facebook, Twitter, etc. Which I dislike (except in cases where you need to optimize for low impact user accounts like blog comments) because they are an explicit relationship with another entity without any power to influence what they do with the trust you grant them by letting them control log ins to your site. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From fungi at yuggoth.org Tue Feb 10 19:28:17 2015 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 10 Feb 2015 18:28:17 +0000 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> Message-ID: <20150210182816.GN2497@yuggoth.org> On 2015-02-10 12:33:29 -0500 (-0500), Donald Stufft wrote: [...] > In a slightly hypocritical view point, I actually think that at > some point we should get something like id.python.org which is an > IdP and switch all of the *.python.org sites to authenticate > against that instead of keeping local > user accounts. This would > reduce the number of passwords that Python inflicts on people but > it still keeps authentication within our (PSF/Python/whatever)'s > control.
This is more along the lines of implementing SSO using a > federated auth technology than actual federated auth though. The OpenStack community is in the process of doing this already (for exactly all of the same reasons you stated), so I'm happy to discuss details or point you to relevant ML/IRC conversations and software if it helps in any way. -- Jeremy Stanley From barry at python.org Tue Feb 10 20:56:42 2015 From: barry at python.org (Barry Warsaw) Date: Tue, 10 Feb 2015 14:56:42 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: <54DA48B4.3090607@v.loewis.de> References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> <54DA48B4.3090607@v.loewis.de> Message-ID: <20150210145642.4daab880@limelight.wooz.org> On Feb 10, 2015, at 07:06 PM, Martin v. L?wis wrote: >I think this is your personal usage primarily. A lot of user just avoid >having to use a password manager, and use the same password on many >systems. (Of course, many people also *do* use different passwords, and >some also use passwords managers) Actually, I wouldn't be surprised if most users just use the password manager built into their browser. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From graffatcolmingov at gmail.com Tue Feb 10 21:30:34 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 10 Feb 2015 14:30:34 -0600 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: <20150210145642.4daab880@limelight.wooz.org> References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> <54DA48B4.3090607@v.loewis.de> <20150210145642.4daab880@limelight.wooz.org> Message-ID: On Tue, Feb 10, 2015 at 1:56 PM, Barry Warsaw wrote: > On Feb 10, 2015, at 07:06 PM, Martin v. L?wis wrote: > >>I think this is your personal usage primarily. A lot of user just avoid >>having to use a password manager, and use the same password on many >>systems. (Of course, many people also *do* use different passwords, and >>some also use passwords managers) > > Actually, I wouldn't be surprised if most users just use the password manager > built into their browser. > > Cheers, > -Barry > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > Which password manager is that? lynx doesn't have a password manager! I want this new fancy feature too! From ncoghlan at gmail.com Wed Feb 11 06:24:35 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 11 Feb 2015 15:24:35 +1000 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> Message-ID: On 11 February 2015 at 03:33, Donald Stufft wrote: > > In a slightly hypocritical view point, I actually think that at some point we > should get something like id.python.org which is an IdP and switch all of the > *.python.org sites to authenticate against that instead of keeping local > user accounts. This would reduce the number of passwords that Python inflicts > on people but it still keeps authentication within our (PSF/Python/whatever)'s > control. 
This is more along the lines of implementing SSO using a federated > auth technology than actual federated auth though. This is the approach that Fedora uses [1], and it offers a lot of benefits in making it possible to implement infrastructure services as independently updated components, while using role-based access control for authorisation management. One of the key practical benefits is providing a single place for people to register their public SSH keys, while one of the key community building benefits is that reliably federating identity across the various Fedora infrastructure services enables the badge system that allows people to more easily be appropriately credited for their contributions to the project. If you start doing 2FA, then the identity management server becomes the place where you do your token management. If we keep the focus specifically on PyPI, then the benefits of breaking out a separate identity service mostly amount to making it a bit easier to introduce a build service in the future (since PyPI and the build service would be peers using a common identity provider, making it easier to accommodate things like alternate experimental PyPI front ends or separating the upload service from the publishing service). Things start to get a bit more interesting once we consider a world where mail.python.org has been upgraded to Mailman 3, and we actually have the concept of a "user profile" for the mailing list infrastructure (as part of the HyperKitty archiver/web gateway). In that case, then having HyperKitty/Mailman 3/PyPI sharing an identity provider would make it feasible to let user's opt in to listing their PyPI packages on their mail.python.org profile, which may provide useful context in some discussions. And then once we expand our sphere of consideration even further into CPython core development, then I think core developers would gain the most in the near term, as we touch almost all the major identity silos (hg.python.org, pypi.python.org, buildbot.python.org, bugs.python.org, wiki.python.org) regularly, with the release managers also needing to deal with www.python.org. It's also worth noting that both of the current workflow improvement proposals suggest adding yet another identity silo in the form of a forge.python.org service. Longer term, the shared identity model does offer a benefit in terms of reducing barriers to entry: it means that anyone that has created an account on PyPI to distribute software, or on Mailman 3 to subscribe to a mailing list, will already have an account that lets them file bugs on bugs.python.org or submit pull requests on the future forge.python.org. Another aspect to consider is that once you decide to have a single authoritative identity provider *within* the ecosystem, it's also possible to adopt models like the one Stack Overflow uses, where you can *link* other profiles to your account and use them to login once you do, but you're still required to be able to authenticate directly with a password (If I recall correctly, DISQUS uses a similar setup, although I think the "native" password controlled account is optional there). So this approach isn't necessarily about "no social auth allowed" - it's about managing the risk of what happens if an external identity provider goes away at some point in the future. Cheers, Nick. 
[1] https://admin.fedoraproject.org/accounts/ -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Feb 11 06:26:07 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 11 Feb 2015 15:26:07 +1000 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: <20150210182816.GN2497@yuggoth.org> References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> <20150210182816.GN2497@yuggoth.org> Message-ID: On 11 February 2015 at 04:28, Jeremy Stanley wrote: > On 2015-02-10 12:33:29 -0500 (-0500), Donald Stufft wrote: > [...] >> In a slightly hypocritical view point, I actually think that at >> some point we should get something like id.python.org which is an >> IdP and switch all of the *.python.org sites to authenticate >> against that instead of keeping local user accounts. This would >> reduce the number of passwords that Python inflicts on people but >> it still keeps authentication within our (PSF/Python/whatever)'s >> control. This is more along the lines of implementing SSO using a >> federated auth technology than actual federated auth though. > > The OpenStack community is in the process of doing this already (for > exactly all of the same reasons you stated), so I'm happy to discuss > details or point you to relevant ML/IRC conversations and software > if it helps in any way. Likewise, although in my case, Fedora had the Fedora Account System in place long before I got involved in the project, and in my experience it works well :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guettliml at thomas-guettler.de Fri Feb 13 22:27:58 2015 From: guettliml at thomas-guettler.de (=?UTF-8?B?VGhvbWFzIEfDvHR0bGVy?=) Date: Fri, 13 Feb 2015 22:27:58 +0100 Subject: [Distutils] Parsing requirements, pip has no API ... Message-ID: <54DE6C5E.9020908@thomas-guettler.de> I was told: {{{ Pip does not have a public API and because of that there is no backwards compatibility contract. It's impossible to fully parse every type of requirements.txt without a session so either parse_requirements needs to create one if it doesn't (which means if we forget to pass in a session somewhere it'll use the wrong one) or it needs one passed in. }}} >From https://github.com/pypa/pip/issues/2422#issuecomment-74271718 Up to now we used parse_requirements() of pip, but in new versions you need to pass in a session. If I see changes like this: setup.py - install_requires=[str(req.req) for req in parse_requirements("requirements.txt")], + install_requires=[str(req.req) for req in parse_requirements("requirements.txt", session=uuid.uuid1())], ... I think something is wrong. I am not an expert in python packaging details. I just want it to work. What is wrong here? - You should not use parse_requirements() in setup.py - pip should not change its API. - you should not use pip at all, you should use ...? Regards, Thomas -- http://www.thomas-guettler.de/ From graffatcolmingov at gmail.com Fri Feb 13 22:44:12 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 13 Feb 2015 15:44:12 -0600 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: <54DE6C5E.9020908@thomas-guettler.de> References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: pip is a command-line tool. The fact that you can import it doesn't make it a library. If it documents no public API then it has none and you shouldn't be relying on things that you can import from pip. 
You can import requests from pip but you shouldn't do that either. There seems to be a need for a separate library for this use case but there isn't at the moment. In general, requirements.txt seems to be an anti-pattern. You either have to use likely to break tooling or you'll have to reinvent that from scratch. You're better off putting it directly in setup.py and using setup.py to install dependencies in a virtualenv instead of requirements.txt On Fri, Feb 13, 2015 at 3:27 PM, Thomas G?ttler wrote: > I was told: > > {{{ > Pip does not have a public API and because of that there is no backwards compatibility contract. It's impossible to fully parse every type of requirements.txt without a session so either parse_requirements needs to create one if it doesn't (which means if we forget to pass in a session somewhere it'll use the wrong one) or it needs one passed in. > }}} > From https://github.com/pypa/pip/issues/2422#issuecomment-74271718 > > > Up to now we used parse_requirements() of pip, but in new versions you need to pass in a > session. > > If I see changes like this: > > setup.py > - install_requires=[str(req.req) for req in parse_requirements("requirements.txt")], > + install_requires=[str(req.req) for req in parse_requirements("requirements.txt", session=uuid.uuid1())], > > ... I think something is wrong. > > > I am not an expert in python packaging details. I just want it to work. > > What is wrong here? > > - You should not use parse_requirements() in setup.py > - pip should not change its API. > - you should not use pip at all, you should use ...? > > Regards, > Thomas > > > -- > http://www.thomas-guettler.de/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Fri Feb 13 23:04:34 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 13 Feb 2015 22:04:34 +0000 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: <54DE6C5E.9020908@thomas-guettler.de> References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: On 13 February 2015 at 21:27, Thomas G?ttler wrote: > - You should not use parse_requirements() in setup.py This one, basically. Ian provided the details. Paul From graffatcolmingov at gmail.com Fri Feb 13 23:05:42 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 13 Feb 2015 16:05:42 -0600 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: On Fri, Feb 13, 2015 at 3:59 PM, Marcus Smith wrote: > >> In general, requirements.txt seems to be an >> anti-pattern. You either have to use likely to break tooling or you'll >> have to reinvent that from scratch. You're better off putting it >> directly in setup.py and using setup.py to install dependencies in a >> virtualenv instead of requirements.txt > > > I don't know your context for calling it an anti-pattern, but there are > valid use cases for requirements.txt vs install_requires. > here's what the "Python Packaging User Guide" has on the distinction > > https://packaging.python.org/en/latest/requirements.html > > skipping to the distinctions, it lists four: > > Whereas install_requires defines the dependencies for a single project, > Requirements Files are often used to define the requirements for a complete > python environment. 
> > Whereas install_requires requirements are "Abstract", requirements files > often contain pip options like --index-url or --find-links to make > requirements "Concrete". [1] > > Whereas install_requires metadata is automatically analyzed by pip during an > install, requirements files are not, and only are used when a user > specifically installs them using pip install -r.

In 90% of the cases I see, requirements.txt are used to define the requirements for the project to function which typically are the exact same requirements necessary when installing the project. People also will then write a test-requirements.txt (or dev-requirements.txt) file to have a complete development environment. In this case, Thomas seems to be using requirements.txt to define the requirements necessary when installing the software they're developing. If requirements.txt were used solely for a development environment, they would look more like

| .
| dev-requirement-1>=0.1
| # etc.

Instead they seem to be used more to define the same requirements someone would define in install_requires. This is the anti-pattern I'm talking about.

From qwcode at gmail.com Fri Feb 13 22:59:59 2015 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 13 Feb 2015 13:59:59 -0800 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: > In general, requirements.txt seems to be an > anti-pattern. You either have to use likely to break tooling or you'll > have to reinvent that from scratch. You're better off putting it > directly in setup.py and using setup.py to install dependencies in a > virtualenv instead of requirements.txt > I don't know your context for calling it an anti-pattern, but there are valid use cases for requirements.txt vs install_requires. here's what the "Python Packaging User Guide" has on the distinction https://packaging.python.org/en/latest/requirements.html skipping to the distinctions, it lists four:

Whereas install_requires defines the dependencies for a single project, *Requirements Files* are often used to define the requirements for a complete python environment.

Whereas install_requires requirements are minimal, requirements files often contain an exhaustive listing of pinned versions for the purpose of achieving *repeatable installations* of a complete environment.

Whereas install_requires requirements are "Abstract", requirements files often contain pip options like --index-url or --find-links to make requirements "Concrete". [1]

Whereas install_requires metadata is automatically analyzed by pip during an install, requirements files are not, and only are used when a user specifically installs them using pip install -r. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From qwcode at gmail.com Fri Feb 13 23:10:07 2015 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 13 Feb 2015 14:10:07 -0800 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: In 90% of the cases I see, requirements.txt are used to define the > requirements for the project to function which typically are the exact > same requirements necessary when installing the project. People also
People also > will then write a test-requirements.txt (or dev-requirements.txt) file > to have a complete development environment. In this case, Thomas seems > to be using requirements.txt to define the requirements necessary when > installing the software they're developing. > > If requirements.txt were used solely for a development environment, > they would look more like > > | . > | dev-requirement-1>=0.1 > | # etc. > > Instead they seem to be used more to define the same requirements > someone would define in install_requires. This is the anti-pattern I'm > talking about. > understood, I hear you, but for a breakdown of some valid use cases, the pip docs covers 4 good use cases https://pip.pypa.io/en/latest/user_guide.html#requirements-files -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Feb 14 00:11:17 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 13 Feb 2015 18:11:17 -0500 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: > On Feb 13, 2015, at 4:44 PM, Ian Cordasco wrote: > > pip is a command-line tool. The fact that you can import it doesn't > make it a library. If it documents no public API then it has none and > you shouldn't be relying on things that you can import from pip. You > can import requests from pip but you shouldn't do that either. > > There seems to be a need for a separate library for this use case but > there isn't at the moment. In general, requirements.txt seems to be an > anti-pattern. You either have to use likely to break tooling or you'll > have to reinvent that from scratch. You're better off putting it > directly in setup.py and using setup.py to install dependencies in a > virtualenv instead of requirements.txt > > On Fri, Feb 13, 2015 at 3:27 PM, Thomas G?ttler > wrote: >> I was told: >> >> {{{ >> Pip does not have a public API and because of that there is no backwards compatibility contract. It's impossible to fully parse every type of requirements.txt without a session so either parse_requirements needs to create one if it doesn't (which means if we forget to pass in a session somewhere it'll use the wrong one) or it needs one passed in. >> }}} >> From https://github.com/pypa/pip/issues/2422#issuecomment-74271718 >> >> >> Up to now we used parse_requirements() of pip, but in new versions you need to pass in a >> session. >> >> If I see changes like this: >> >> setup.py >> - install_requires=[str(req.req) for req in parse_requirements("requirements.txt")], >> + install_requires=[str(req.req) for req in parse_requirements("requirements.txt", session=uuid.uuid1())], >> >> ... I think something is wrong. >> >> >> I am not an expert in python packaging details. I just want it to work. >> >> What is wrong here? >> >> - You should not use parse_requirements() in setup.py >> - pip should not change its API. >> - you should not use pip at all, you should use ...? >> >> Regards, >> Thomas >> >> >> -- >> http://www.thomas-guettler.de/ >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig requirements.txt and setup.py serve different purposes, requirements.txt is for an environment, setup.py is for a package. 
It doesn't make sense for a setup.py to read from a requirements.txt just like it wouldn't make sense for a deb package to read from a Chef cookbook.

Often the reason people do this is they want to support people installing their thing with ``pip install -r requirements.txt`` from within a checkout without needing to list their dependencies twice. That's a reasonable thing to want, which is why the requirements file format has a construct that enables it: simply make a requirements.txt file that contains "." or "-e ." and pip will automatically install the project and all of its dependencies.

For more information, see https://caremad.io/2013/07/setup-vs-requirement/. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From guettliml at thomas-guettler.de Sat Feb 14 20:53:34 2015 From: guettliml at thomas-guettler.de (=?UTF-8?B?VGhvbWFzIEfDvHR0bGVy?=) Date: Sat, 14 Feb 2015 20:53:34 +0100 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: <54DFA7BE.1070405@thomas-guettler.de> On 14.02.2015 at 00:11, Donald Stufft wrote: ... > requirements.txt and setup.py serve different purposes, requirements.txt is for > an environment, setup.py is for a package. It doesn't make sense for a setup.py > to read from a requirements.txt just like it wouldn't make sense for a deb > package to read from a Chef cookbook. > > Often the reason people do this is they want to support people installing their > thing with ``pip install -r requirements.txt`` from within a checkout without > needing to list their dependencies twice. That's a reasonable thing to want, > which is why the requirements file format has a construct that enables it: > simply make a requirements.txt file that contains "." or "-e ." and pip will > automatically install the project and all of its dependencies. > > For more information, see https://caremad.io/2013/07/setup-vs-requirement/. Thank you very much for your answer. I understand the topic better. The comparison to a Chef cookbook was good. -- http://www.thomas-guettler.de/

From ncoghlan at gmail.com Sun Feb 15 00:04:33 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 15 Feb 2015 09:04:33 +1000 Subject: [Distutils] Platter: a virtualenv based multiwheel file format In-Reply-To: References: Message-ID: Armin Ronacher just published a new utility for creating prebuilt virtualenv tarballs called "platter": http://platter.pocoo.org/dev/ As he notes, it means you don't even need pip on your production systems. From the looks of it, it would also be well suited as a foundation for the "venv in a native package" model (where you use apt or RPM to do the deployment), for deployment to a user-space-tools-only PaaS environment, and for container image builds. I haven't used it myself yet, but I'm going to start looking into it for use with Kallithea. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robertc at robertcollins.net Sun Feb 15 23:25:22 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 16 Feb 2015 11:25:22 +1300 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> Message-ID: I probably shouldn't, but I feel compelled to reply :). On 11 February 2015 at 06:33, Donald Stufft wrote: > >> On Feb 10, 2015, at 11:23 AM, Martin v. L?wis wrote: >> >> Am 10.02.15 um 15:36 schrieb Donald Stufft: >>> Honestly, I?d rather have less federated login not more. I wish the current OpenID support had never been added. >>> >> >> Can you please elaborate on that position? Why is it useful to have >> separate accounts on separate systems? > > Sure. > > So the basic premise behind federated auth is that you can get a single set > of credentials on all (or most) of your sites and eliminate the need to have a > password for each site you visit. > > My opinion is basically influenced by a number of factors: > > 1. I feel like the goal of federated auth has failed in general and is unlikely > to ever succeed. As a user of websites I have over 400 different entries in > my password manager, even if 50% of them implement federated auth (which I > feel like is a high number but that's not backed by math, just gut feeling) > that's still over 200 entries I need to maintain in my password manager. In > this case federated auth has not meaningfully reduced the burden of > maintaining password for me since maintaining 200 isn't any easier than 400 > and instead it just complicates my login flow So, what is success here? I'd call 200 less passwords to maintain and rotate on a regular basis a GOOD THING. I very much doubt that you would have 2FA set up on the other 200 things, so that would mean a change from 400 sites w only a couple having 2FA to 200 with regular rotations and 2FA, and 200 liabilities. > 2. As a site operator I feel like authentication is a core part of the > experience of using my site and by allowing federated auth on my site I'm > giving up control over that user flow. A relevant example from PyPI is that > a number of users signed up using MyOpenID which is no longer being > maintained. This means that either PyPI has to tell those people > "tough shit" or PyPI needs to figure out a mitigation tactic against that. > Another example is that launchpad randomly starts failing for people, and > it'll fail consistently for the same person until it just stops failing for > them. I'm unable to actually reproduce this error so it's extremely hard > for me to do anything else but shrug and tell them not to use it. I'm genuinely curious here. Why do you feel that authentication is a core part of the experience? Its a necessary part, sure. But I find it hard to imagine that many people say 'that bug tracking site, its got *awesome authentication*'! I see authentication as something that is very very hard to get right, and incredibly easy to get wrong. I don't trust folk that are experts in e.g. bugtracking. Or code hosting. Or todo list management to necessarily understand all the intricacies of password handling (e.g. *how many sites don't use PBKDF2*!) Or worse truncate the input password you give to 8 characters (yes, seriously). 
It's not that the site operators aren't trustworthy in general, it's that password handling is nasty:
- it's hard to get right
- you won't know if you got it wrong until you or your users are compromised
- even sites with dedicated teams doing just the IdP aspect get it wrong

I consider it irresponsible for less well resourced sites to get into credentials management unless they truly have no choice: they're tackling something they're almost certain to get wrong.

> 3. I feel like unless you solely rely on federated auth, then federated auth is > always going to be a second class citizen for any particular website. For > instance Travis CI uses federated auth via Github only, but that's the only > thing they support for authentication so everything works well with that. On > the other hand a number of sites support federated auth ontop of local > accounts and federated auth is almost always worse in some ways, sometimes > as simple as the username you get is kinda crappy (dstufft_) > sometimes some features don't work (or don't work very well) at all like > on PyPI where we need to authenticate people outside of a web context so > if we don't have usernames/passwords then we end up needing to require the > user to register a secondary "api password" or API key.

Relying solely on federated auth is fine by me :). You don't need to tie yourself to one provider. Yes, most users will use just one of fb/github/google/lp/twitter in our community, but you can (and should) do unification on email addresses to allow dealing with failed providers [but only for trustworthy providers or by doing an email verification step before unifying] and manage ACLs and privileged operations locally.

The fact that some sites do it crappily is in no way an indictment of the basic tech - in fact some sites do it really well. It's gotten so good that these days the only time I will sign into a site that *doesn't* use federated auth is if there is something I really really really want from it. E.g. I made an account with Elite:Dangerous.

> 4. I feel like none of the current solutions to federated auth are very good. > OpenID relies on using an URL as your "personal identifier" which I feel > like is a strange and foreign concept to most users. The way around this is > often to just hardcode a list of sites, but then as a site operator you're > implicitly recommending that users go sign up for one of those sites and > use them on your site to login. This is creating an explicit relationship > between your site and the other site, a relationship in which you often have > no power (for instance, Google <-> PyPI, we're powerless to do anything > about them deprecating OpenID other than just sucking it up and dealing with > it). Persona did offer a way around this, but persona had other failings > like relying on the domain that you happened to be using for your email to > implement a persona IdP or otherwise falling back to an implicit relationship > with the fallback provider, again one where you're more or less powerless to > the operators of that service.

I agree that they're not brilliant. OpenID is basically dead, long live OpenID Connect :/. So the thing there AIUI is that OAuth worked out a lot better (more flexible, consistent with both CLI / app workflows and server side web interactions). And as such everyone is just consolidating on the one toolchain to avoid lots of needless redundancy. But as a user, it's fine. I don't judge a site as subordinate to Google if they allow Google logins, for instance.
Yes, if you use federated auth you need to keep up. But hell, we need to keep up if we do our own auth management. When was the last time the hash count on PyPI's password database was increased to account for hash rate growths? Managing credentials is an ongoing effort - at Canonical we split that out into its own team, and they were busy just keeping on top of it and changes in the fundamentals for years. See above about hard to get right. > Overall I think that the use of federated auth, as a site operator, is really > only worth it over the loss of control in two scenarios: > > A. When your site is already entwined with another site and relying on them for > authentication is simply increasing that. An example of this from above is > Travis CI where they only work with things hosted on GitHub so also relying > on GitHub for authentication isn't that big of a deal and actually makes > things better since they can then integrate with GitHub's permissions to > check if you have commit on a particular repository. > > B. When creating an account is likely to be enough of a burden to make people > decide not to interact with your site. This category is basically completely > comprised of sites that do not have long standing relationships with their > users. The only real example I can think of this of the top of my head is > sites with comments enabled like blogs, news sites, etc. The commentors are > unlikely to have or want a long standing relationship with your site, they > just want to make a quick one off comment and then possibly never come > back. Sites like PyPI otherhand the cost of creating an account is small > compared to the life time of majority of our user base's interaction with > us. I think you're underestimating the impact this has on users. It definitely creates a high barrier to entry for me, and I don't think I'm alone. For bugs.python.org I leapt on Federated auth, but for PyPI I can't use it because it doesn't allow consolidating the accounts (AFAICT). Is it a matter of toits? E.g. do you need someone to provide patches to both permit the new OpenID Connect, OAuth for console use, and connecting OpenID Connect identities to local usercodes? > A key thing to me, as a site operator, is keeping as much control over the > experience of my users as I can. Obviously I have to outsource some things > because It's not reasonable for me to make my own hardware, write my own > drivers, my own kernel, my own OS, my own webserver etc. A good example of a > major outsourcing that I was involved in was moving things behind Fastly. > However a key difference between that outsourcing and this outsourcing is that > if things go sour with Fastly or we need to migrate away from them for one > reason or another we can do that without end users needing to change much or > anything. However if something like Google dropping OpenID supports happens > then the users who relied on that are out of luck and our ability to shield > them from the fallout of that is limited. Thats true, OTOH I think I've made a reasonable case above that our ability to shield users from our own mistakes is limited, and dealing with passwords really isn't as simple as all that... and updating to OpenID Connect should be pretty straight forward, there are good libraries for it all around. > At this point we already have it enabled, so unless someone comes up with a > really good migration strategy I doubt we'll be able to get rid of it. 
However > for the reasons above I'm pretty much against adding *additional* federated > auth things and I think that we should treat it more of a legacy thing and > downplay the fact we have support for it. Bitbucket has downplayed support for > random OpenID as well, when you go to their login pages it shows a login form > that looks like http://d.stufft.io/image/1O2l2g073h0h, which still lets you > login with OpenID but it's muted and downplayed. > > In a slightly hypocritical view point, I actually think that at some point we > should get something like id.python.org which is an IdP and switch all of the > *.python.org sites to authenticate against that instead of keeping local > user accounts. This would reduce the number of passwords that Python inflicts > on people but it still keeps authentication within our (PSF/Python/whatever)'s > control. This is more along the lines of implementing SSO using a federated > auth technology than actual federated auth though. Counterpoint: why not get rid of local auth altogether (for web service, not system administration). What do we, a non-profit, do that requires direct control over auth? At least - bugs.python.org, pypi, both of which support OpenID today, we've clearly considered that there its ok. If we didn't have local auth at all that would free up cycles to do whatever (moderate) chasing of evolving federation standards is needed. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Mon Feb 16 00:53:15 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 15 Feb 2015 18:53:15 -0500 Subject: [Distutils] Google Auth is broken for PyPI In-Reply-To: References: <20150209144208.760f23b6@anarchist.wooz.org> <48B3ED2B-7156-4C2A-9304-38B06F663733@stufft.io> <54DA3079.6030707@v.loewis.de> Message-ID: > On Feb 15, 2015, at 5:25 PM, Robert Collins wrote: > > I probably shouldn't, but I feel compelled to reply :). > > On 11 February 2015 at 06:33, Donald Stufft wrote: >> >>> On Feb 10, 2015, at 11:23 AM, Martin v. L?wis wrote: >>> >>> Am 10.02.15 um 15:36 schrieb Donald Stufft: >>>> Honestly, I?d rather have less federated login not more. I wish the current OpenID support had never been added. >>>> >>> >>> Can you please elaborate on that position? Why is it useful to have >>> separate accounts on separate systems? >> >> Sure. >> >> So the basic premise behind federated auth is that you can get a single set >> of credentials on all (or most) of your sites and eliminate the need to have a >> password for each site you visit. >> >> My opinion is basically influenced by a number of factors: >> >> 1. I feel like the goal of federated auth has failed in general and is unlikely >> to ever succeed. As a user of websites I have over 400 different entries in >> my password manager, even if 50% of them implement federated auth (which I >> feel like is a high number but that's not backed by math, just gut feeling) >> that's still over 200 entries I need to maintain in my password manager. In >> this case federated auth has not meaningfully reduced the burden of >> maintaining password for me since maintaining 200 isn't any easier than 400 >> and instead it just complicates my login flow > > So, what is success here? I'd call 200 less passwords to maintain and > rotate on a regular basis a GOOD THING. I very much doubt that you > would have 2FA set up on the other 200 things, so that would mean a > change from 400 sites w only a couple having 2FA to 200 with regular > rotations and 2FA, and 200 liabilities. 
Success (for me) is when federated auth enables me to no longer need to worry about passwords in my day to day use of the web. Currently it's not even close and it doesn't appear to be getting any closer. The places where it is even possible it's generally only possible to sign in with Github/Twitter/Facebook and I'm unwilling to place the ability to authenticate as me to a wide number of services with them. The only time I'm willing to do so is "throw away" sites where my account on those sites don't really matter to me. > >> 2. As a site operator I feel like authentication is a core part of the >> experience of using my site and by allowing federated auth on my site I'm >> giving up control over that user flow. A relevant example from PyPI is that >> a number of users signed up using MyOpenID which is no longer being >> maintained. This means that either PyPI has to tell those people >> "tough shit" or PyPI needs to figure out a mitigation tactic against that. >> Another example is that launchpad randomly starts failing for people, and >> it'll fail consistently for the same person until it just stops failing for >> them. I'm unable to actually reproduce this error so it's extremely hard >> for me to do anything else but shrug and tell them not to use it. > > I'm genuinely curious here. Why do you feel that authentication is a > core part of the experience? Its a necessary part, sure. But I find it > hard to imagine that many people say 'that bug tracking site, its got > *awesome authentication*'! I see authentication as something that is > very very hard to get right, and incredibly easy to get wrong. I don't > trust folk that are experts in e.g. bugtracking. Or code hosting. Or > todo list management to necessarily understand all the intricacies of > password handling (e.g. *how many sites don't use PBKDF2*!) Or worse > truncate the input password you give to 8 characters (yes, seriously). > Its not that the site operators aren't trustworthy in general, its > that password handling is nasty: > - its hard to get right > - you won't know if you got it wrong until you or your users are compromised > - even sites with dedicated teams doing just the IdP aspect get it wrong > > I consider it irresponsible for less well resources sites to get into > credentials management unless they truely have no choice: they're > tackling something they're almost certain to get wrong. Authentication is like a lot of pieces of maintaining a service, where if you get it done *really well* people won?t notice it exists and if you do it wrong people will notice it?s bad immediately. Sites go out of their way to take control of the authentication flow to ensure that it gives the best possible experience for their users. Delegating that out to someone else is giving up control of it, which means that if the place it's been delegated to isn't able to keep up then you're SOL because you've exposed what should be implementation details of a particular app to the end user. You say that you don't trust them to get authentication correct, which seems silly to me given that you apparently trust them to handle ACLs or any number of other parts of a secure web app correctly. 
However, even if a particular app doesn't want to handle its own authentication, there are better ways to handle delegation, such as something like https://stormpath.com/ which allows you to delegate the actual storage and handling of passwords and properly handling authentication to a third party that specializes in that, but without exposing the details of that to your end users, so that if you need to migrate at some point you can.

> >> 3. I feel like unless you solely rely on federated auth, then federated auth is >> always going to be a second class citizen for any particular website. For >> instance Travis CI uses federated auth via Github only, but that's the only >> thing they support for authentication so everything works well with that. On >> the other hand a number of sites support federated auth ontop of local >> accounts and federated auth is almost always worse in some ways, sometimes >> as simple as the username you get is kinda crappy (dstufft_) >> sometimes some features don't work (or don't work very well) at all like >> on PyPI where we need to authenticate people outside of a web context so >> if we don't have usernames/passwords then we end up needing to require the >> user to register a secondary "api password" or API key. > > Relying solely on federated auth is fine by me :). You don't need to > tie yourself to one provider. Yes, most users will use just one of > fb/github/google/lp/twitter in our community, but you can (and should) > do unification on email addresses to allow dealing with failed > providers [but only for trustworthy providers or by doing an email > verification step before unifying] and manage ACLs and privileged > operations locally. > > The fact that some sites do it crappily is in no way an indictment of > the basic tech - in fact some sites do it really well. It's gotten so > good that these days the only time I will sign into a site that > *doesn't* use federated auth is if there is something I really really > really want from it. E.g. I made an account with Elite:Dangerous.

Relying only on federated auth is a fairly poor user experience. You have to essentially tell someone that before they can use your site, they have to go pick one of these other sites to become a user of. Since I know it's relevant to you as well as me, I hate having to log into any OpenStack service via Launchpad because it's an extra step that I don't have to do on any service that doesn't delegate auth, since I have to first log in to Launchpad and then I have to tell it to allow the login to an OpenStack service. Never mind the fact that there is huge phishing potential, and that the more central auth gets, the more likely you're going to see people who want to attempt to phish your login information.

I do think that in some cases federated auth can make sense, especially for small sites where even a slight inconvenience or delay in going from an unregistered user to a logged in user can cause you to lose traffic altogether. These sites also tend to have a very low amount of inertia tied to a specific user account; if you lose access to your account, spinning up a new one is low impact. Compare that to PyPI, where the security of an account is of paramount importance and a lost account is incredibly disruptive.

> >> 4. I feel like none of the current solutions to federated auth are very good. >> OpenID relies on using an URL as your "personal identifier" which I feel >> like is a strange and foreign concept to most users.
The way around this is >> often to just hardcode a list of sites, but then as a site operator you're >> implicitly recommending that users go sign up for one of those sites and >> use them on your site to log in. This is creating an explicit relationship >> between your site and the other site, a relationship in which you often have >> no power (for instance, Google <-> PyPI, we're powerless to do anything >> about them deprecating OpenID other than just sucking it up and dealing with >> it). Persona did offer a way around this, but Persona had other failings >> like relying on the domain that you happened to be using for your email to >> implement a Persona IdP, or otherwise falling back to an implicit relationship >> with the fallback provider, again one where you're more or less powerless to >> the operators of that service. > > I agree that they're not brilliant. OpenID is basically dead, long > live OpenID Connect :/. So the thing there AIUI is that OAuth worked > out a lot better (more flexible, consistent with both CLI / app > workflows and server side web interactions). And as such everyone is > just consolidating on the one toolchain to avoid lots of needless > redundancy. But as a user, it's fine. I don't judge a site as subordinate > to Google if they allow Google logins, for instance. > > Yes, if you use federated auth you need to keep up. But hell, we need > to keep up if we do our own auth management. When was the last time > the hash count on PyPI's password database was increased to account > for hash rate growth? Managing credentials is an ongoing effort - at > Canonical we split that out into its own team, and they were busy just > keeping on top of it and changes in the fundamentals for years. See > above about hard to get right. PyPI uses bcrypt with a work factor of 12, and we're using the excellent passlib library, which means we can easily set things up to automatically migrate between different algorithms and different work factors/rounds within the same algorithm. With the current settings on PyPI, a password hash takes roughly 0.3 seconds (on my iMac, in a completely unscientific benchmark with a single iteration, since any large number of iterations takes forever), and bumping bcrypt to 13 makes it take roughly 0.6. I do check periodically that our work factor is appropriate, and at some point I plan on migrating the algorithm to scrypt once a reasonable implementation of it is available for Python. Honestly, speaking as someone who has implemented authentication libraries, OpenID servers, and OpenID clients, I feel like storing a password safely isn't really that hard, especially with good libraries like passlib to help with it, and in the grand scheme of things it's only a tiny part of what makes a secure application. Even within authentication itself there are a number of things that federated auth simply won't help with. Things like ensuring that you rotate your session identifiers when you cross authentication boundaries are things that even federated authentication doesn't handle for you, and they are far more likely to get forgotten when creating a site than what password storage and algorithms someone uses. PyPI in particular is a web service that needs to be designed to last decades. It's not the kind of web service where you can just tell people that, hey, guess what, tomorrow we all need to switch away to some other authentication service because MyOpenID (or whatever) decided to shut down.
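The sort of passlib setup described above can be sketched roughly as follows; the schemes and round counts shown are illustrative assumptions, not PyPI's actual configuration:

    from passlib.context import CryptContext

    # Illustrative settings only -- not PyPI's actual configuration.
    # One place to declare the accepted hash schemes and their work factors;
    # hashes made with a deprecated scheme, or with rounds below the minimum,
    # are flagged so they can be transparently re-hashed at the next login.
    pwd_context = CryptContext(
        schemes=["bcrypt"],
        deprecated="auto",
        bcrypt__default_rounds=12,
        bcrypt__min_rounds=12,
    )

    # Hashing a new password:
    stored_hash = pwd_context.hash("correct horse battery staple")

    def check_password(candidate, stored_hash):
        valid, new_hash = pwd_context.verify_and_update(candidate, stored_hash)
        if valid and new_hash is not None:
            # The work factor (or algorithm) has been raised since this hash
            # was created; persist new_hash for the user here.
            pass
        return valid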
It's less about needing to "keep up" and more about who has to do the keeping up. For ensuring we're still safely storing passwords, it's simple enough for the PyPI admins to ensure that we're still "safe enough"; however, when using authentication where the fact that it's being delegated has been "leaked" to the end user, it's up to each and every individual user on PyPI to ensure that they are using an auth provider that is keeping up, and the end users are the ones least likely to do that on any kind of meaningful scale. > >> Overall I think that the use of federated auth, as a site operator, is really >> only worth it over the loss of control in two scenarios: >> >> A. When your site is already entwined with another site and relying on them for >> authentication is simply increasing that. An example of this from above is >> Travis CI where they only work with things hosted on GitHub so also relying >> on GitHub for authentication isn't that big of a deal and actually makes >> things better since they can then integrate with GitHub's permissions to >> check if you have commit on a particular repository. >> >> B. When creating an account is likely to be enough of a burden to make people >> decide not to interact with your site. This category is basically completely >> comprised of sites that do not have long standing relationships with their >> users. The only real example I can think of this off the top of my head is >> sites with comments enabled like blogs, news sites, etc. The commenters are >> unlikely to have or want a long standing relationship with your site, they >> just want to make a quick one off comment and then possibly never come >> back. For sites like PyPI, on the other hand, the cost of creating an account is small >> compared to the lifetime of the majority of our user base's interaction with >> us. > > I think you're underestimating the impact this has on users. It > definitely creates a high barrier to entry for me, and I don't think > I'm alone. For bugs.python.org I leapt on federated auth, but for PyPI > I can't use it because it doesn't allow consolidating the accounts > (AFAICT). Is it a matter of toits? E.g. do you need someone to provide > patches to both permit the new OpenID Connect, OAuth for console use, > and connecting OpenID Connect identities to local usercodes? Honestly, I feel like 90% of the problem people have with authentication on the python.org web properties can be solved by implementing SSO. The problem here is that you have one logical collection of sites that all have different authentication silos. I don't think that allowing someone to log in with Github, or Google, or whatever the flavor of the week is will have a much bigger impact than SSO would. > >> A key thing to me, as a site operator, is keeping as much control over the >> experience of my users as I can. Obviously I have to outsource some things >> because it's not reasonable for me to make my own hardware, write my own >> drivers, my own kernel, my own OS, my own webserver etc. A good example of a >> major outsourcing that I was involved in was moving things behind Fastly. >> However a key difference between that outsourcing and this outsourcing is that >> if things go sour with Fastly, or we need to migrate away from them for one >> reason or another, we can do that without end users needing to change much or >> anything. However if something like Google dropping OpenID support happens >> then the users who relied on that are out of luck and our ability to shield >> them from the fallout of that is limited.
> > That's true, OTOH I think I've made a reasonable case above that our > ability to shield users from our own mistakes is limited, and dealing > with passwords really isn't as simple as all that... and updating to > OpenID Connect should be pretty straightforward, there are good > libraries for it all around. We'll likely update the Google authentication to use OpenID Connect, simply because we already made the mistake of implementing it and enough people are relying on it that we can't just drop it if there's a reasonable way for us to continue supporting it. If you want to submit a pull request to do that, it would be most appreciated. Going forward however I'm likely going to be -1 on adding any additional forms of federated/delegated authentication, except possibly to a global Python Auth service. That includes adding generic support for OpenID Connect or Persona or whatever other protocol is the flavor of the month. I'm also likely going to end up trying to de-emphasize the ability to use anything but a local account and try to guide users towards using local accounts where possible. Finally, if I ever do come up with a reasonable way to migrate away from what federated authentication support we have today in a low impact way, you'll likely see a proposal on distutils-sig for that; however, I'm not very hopeful, except to just wait for the services to die out and then remove support for them. > >> At this point we already have it enabled, so unless someone comes up with a >> really good migration strategy I doubt we'll be able to get rid of it. However >> for the reasons above I'm pretty much against adding *additional* federated >> auth things and I think that we should treat it more as a legacy thing and >> downplay the fact we have support for it. Bitbucket has downplayed support for >> random OpenID as well; when you go to their login pages it shows a login form >> that looks like http://d.stufft.io/image/1O2l2g073h0h, which still lets you >> log in with OpenID but it's muted and downplayed. >> >> In a slightly hypocritical viewpoint, I actually think that at some point we >> should get something like id.python.org which is an IdP and switch all of the >> *.python.org sites to authenticate against that instead of keeping local >> user accounts. This would reduce the number of passwords that Python inflicts >> on people but it still keeps authentication within our (PSF/Python/whatever) >> control. This is more along the lines of implementing SSO using a federated >> auth technology than actual federated auth though. > > Counterpoint: why not get rid of local auth altogether (for web > services, not system administration). What do we, a non-profit, do that > requires direct control over auth? At least - bugs.python.org, pypi, > both of which support OpenID today, we've clearly considered that > there it's OK. > > If we didn't have local auth at all that would free up cycles to do > whatever (moderate) chasing of evolving federation standards is > needed. I don't trust other services to handle authentication for something as important as PyPI, and it's unlikely that a service will come along that I both trust enough to be willing to give them the ability to essentially authenticate as a wide number of users whenever they want, and where I'm willing to tie the long term operability of PyPI as a service to them in a way where we can't easily pull them out of our stack and replace them with minimal impact on end users.
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From msabramo at gmail.com Mon Feb 16 06:28:53 2015 From: msabramo at gmail.com (Marc Abramowitz) Date: Sun, 15 Feb 2015 21:28:53 -0800 Subject: [Distutils] Parsing requirements, pip has no API ... In-Reply-To: References: <54DE6C5E.9020908@thomas-guettler.de> Message-ID: <3660A587-21EA-48B3-9F05-EB55B26FF72D@gmail.com> Agree 100% with dstufft and have long been pointing people to his blog post at caremad.io. Incidentally his post maps closely to (and references) a blog post from the Ruby world about Bundler and Gemfile vs. gemspecs, because this is a general concept that applies to all platforms (not really unique to Python) and this is backed up by the .deb vs. Chef example. In short, dependencies for building reusable blocks (libraries/packages) are different from dependencies for building non-reusable apps/systems/environments. The former emphasizes reusability and flexibility and minimal pinning. The latter instead emphasizes repeatability and thus constrains tightly. I don't think requirements.txt is an anti-pattern. I do feel like things that suck the contents of a requirements.txt into the setup.py (like pbr) are an anti-pattern. From p at 2015.forums.dobrogost.net Mon Feb 16 14:24:15 2015 From: p at 2015.forums.dobrogost.net (Piotr Dobrogost) Date: Mon, 16 Feb 2015 14:24:15 +0100 Subject: [Distutils] Platter: a virtualenv based multiwheel file format In-Reply-To: References: Message-ID: On Sun, Feb 15, 2015 at 12:04 AM, Nick Coghlan wrote: > Armin Ronacher just published a new utility for creating prebuilt virtualenv > tarballs called "platter": http://platter.pocoo.org/dev/ From https://github.com/mitsuhiko/platter/ : "Platter is a utility for Python that simplifies deployments on Unix servers. It's a thin wrapper around pip, virtualenv and wheel and aids in creating packages that can install without compiling or downloading on servers." Am I right in thinking this is a modern counterpart of buildout? Regards, Piotr Dobrogost From reinout at vanrees.org Wed Feb 18 10:49:48 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 18 Feb 2015 10:49:48 +0100 Subject: [Distutils] Platter: a virtualenv based multiwheel file format In-Reply-To: References: Message-ID: Piotr Dobrogost wrote on 16-02-15 at 14:24: > From https://github.com/mitsuhiko/platter/ : > > "Platter is a utility for Python that simplifies deployments on Unix servers. > It's a thin wrapper around pip, virtualenv and wheel and aids in creating > packages that can install without compiling or downloading on servers." > > Am I right in thinking this is a modern counterpart of buildout? No, it isn't buildout-like. Buildout is extensible with recipes. This way it can write config files, install node modules, etc. Platter "only" has the possibility of a pre/post install script where you can/must do all these extra tasks. It *might* be a replacement of buildout *for you*, but that depends on how you use buildout.
Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From ncoghlan at gmail.com Wed Feb 18 13:40:39 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 18 Feb 2015 22:40:39 +1000 Subject: [Distutils] Platter: a virtualenv based multiwheel file format In-Reply-To: References: Message-ID: On 18 Feb 2015 19:50, "Reinout van Rees" wrote: > > Piotr Dobrogost schreef op 16-02-15 om 14:24: > > From https://github.com/mitsuhiko/platter/ : > > > > "Platter is a utility for Python that simplifies deployments on Unix servers. > > It's a thin wrapper around pip, virtualenv and wheel and aids in creating > > packages that can install without compiling or downloading on servers." > > > > Am I right in thinking this is modern counterpart of buildout? > No, it isn't buildout-like. > > Buildout is extensible with recipes. This way it can write config files, > install node modules, etc. > > Platter "only" has the possibility of a pre/post install script where > you can/must doe all these extra tasks. > > It *might* be a replacement of buildout *for you*, but that depends on > how you use buildout. I haven't dug into platter in depth, but from a quick look, it seemed closer to the "package up an entire virtualenv as a system package" model. So it appears to be a neat and tidy way to transport an application from your build servers through to production systems. Cheers, Nick. > > Reinout > > -- > Reinout van Rees http://reinout.vanrees.org/ > reinout at vanrees.org http://www.nelen-schuurmans.nl/ > "Learning history by destroying artifacts is a time-honored atrocity" > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From stuaxo2 at yahoo.com Fri Feb 20 17:11:08 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Fri, 20 Feb 2015 16:11:08 +0000 (UTC) Subject: [Distutils] my setup.py eats all the memory in my machine Message-ID: <793332137.2860497.1424448668482.JavaMail.yahoo@mail.yahoo.com> Hello All, If I run python setup.py install from my project [ https://github.com/stuaxo/vext.pygtk ] inside a virtualenv, the python process promptly uses more memory than the machine has and dies. 
I was able to hit ctrl-c before it locked up my machine and got the output:

running install
running bdist_egg
running egg_info
writing requirements to vext.pygtk.egg-info/requires.txt
writing vext.pygtk.egg-info/PKG-INFO
writing top-level names to vext.pygtk.egg-info/top_level.txt
writing dependency_links to vext.pygtk.egg-info/dependency_links.txt
reading manifest file 'vext.pygtk.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'vext.pygtk.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
warning: install_lib: 'build/lib.linux-x86_64-2.7' does not exist -- no Python modules to install
installing package data to build/bdist.linux-x86_64/egg
running install_data
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying vext.pygtk.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vext.pygtk.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vext.pygtk.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vext.pygtk.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vext.pygtk.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating 'dist/vext.pygtk-0.1.1-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing vext.pygtk-0.1.1-py2.7.egg
Removing /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages/vext.pygtk-0.1.1-py2.7.egg
Copying vext.pygtk-0.1.1-py2.7.egg to /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages
vext.pygtk 0.1.1 is already the active version in easy-install.pth
Installed /mnt/data/home/stu/.virtualenvs/tmpv/lib/python2.7/site-packages/vext.pygtk-0.1.1-py2.7.egg
Processing dependencies for vext.pygtk==0.1.1
Searching for vext
Reading https://pypi.python.org/simple/vext/
Best match: vext 0.1.1
Downloading https://pypi.python.org/packages/source/v/vext/vext-0.1.1.tar.gz#md5=903962476d18a0125b2fa3fc7b42a56a
Processing vext-0.1.1.tar.gz
Writing /tmp/easy_install-MC0DcL/vext-0.1.1/setup.cfg
Running vext-0.1.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-MC0DcL/vext-0.1.1/egg-dist-tmp-Od4X7D

From stuaxo2 at yahoo.com Fri Feb 20 19:49:35 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Fri, 20 Feb 2015 18:49:35 +0000 (UTC) Subject: [Distutils] Installing a file into sitepackages Message-ID: <856983252.2975399.1424458175043.JavaMail.yahoo@mail.yahoo.com> Hi, In my project, I install a .pth file into site-packages, I use the data_files... in Ubuntu this seems to work OK, but in a Windows VM the file doesn't seem to be being installed:

setup(
    ....
    # Install the import hook
    data_files=[
        (site_packages_path,
         ["vext_importer.pth"] if environ.get('VIRTUAL_ENV') else []),
    ],
)

- Is there a better way to do this ? I realise it's a bit odd installing a .pth - my project is to allow certain packages to use the system site packages from a virtualenv - https://github.com/stuaxo/vext S++ From ben+python at benfinney.id.au Mon Feb 23 00:33:37 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 23 Feb 2015 10:33:37 +1100 Subject: [Distutils] Upload signature (and signing key) after package upload Message-ID: <854mqd8qsu.fsf@benfinney.id.au> Howdy all, How can I upload an OpenPGP signature (and the signing key) for a version, after the upload of the distribution is complete?
I have recently been informed of the '--sign' and '--identity' options to the 'upload' command. As described here: Signing a package is easy and it is done as part of the upload process to PyPI. [...] Can it be done, not 'as part of the upload process', but subsequent to the upload of the distribution? How? -- \ "Try adding 'as long as you don't breach the terms of service ... | `\ according to our sole judgement' to the end of any cloud | _o__) computing pitch." --Simon Phipps, 2010-12-11 | Ben Finney From richard at python.org Mon Feb 23 00:36:21 2015 From: richard at python.org (Richard Jones) Date: Sun, 22 Feb 2015 23:36:21 +0000 Subject: [Distutils] Upload signature (and signing key) after package upload References: <854mqd8qsu.fsf@benfinney.id.au> Message-ID: Sorry, there's no facility at present for signing a file that's already uploaded. On Mon Feb 23 2015 at 10:33:49 AM Ben Finney wrote: > Howdy all, > > How can I upload an OpenPGP signature (and the signing key) for a > version, after the upload of the distribution is complete? > > I have recently been informed of the '--sign' and '--identity' options > to the 'upload' command. As described here: > > Signing a package is easy and it is done as part of the upload > process to PyPI. [...] > verifying-python-packages-with-pgp/> > > Can it be done, not 'as part of the upload process', but subsequent to > the upload of the distribution? How? > > -- > \ "Try adding 'as long as you don't breach the terms of service ... | > `\ according to our sole judgement' to the end of any cloud | > _o__) computing pitch." --Simon Phipps, 2010-12-11 | > Ben Finney > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Mon Feb 23 00:49:32 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 23 Feb 2015 10:49:32 +1100 Subject: [Distutils] Upload signature (and signing key) after package upload References: <854mqd8qsu.fsf@benfinney.id.au> Message-ID: <85zj857bhv.fsf@benfinney.id.au> Richard Jones writes: > Sorry, there's no facility at present for signing a file that's already > uploaded. Thanks. I can now stop futilely trying to find it :-) -- \ "It's up to the masses to distribute [music] however they want | `\ ... The laws don't matter at that point. People sharing music in | _o__) their bedrooms is the new radio." --Neil Young, 2008-05-06 | Ben Finney From ncoghlan at gmail.com Mon Feb 23 00:55:27 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 23 Feb 2015 09:55:27 +1000 Subject: [Distutils] Upload signature (and signing key) after package upload In-Reply-To: <85zj857bhv.fsf@benfinney.id.au> References: <854mqd8qsu.fsf@benfinney.id.au> <85zj857bhv.fsf@benfinney.id.au> Message-ID: On 23 Feb 2015 09:50, "Ben Finney" wrote: > > Richard Jones writes: > > > Sorry, there's no facility at present for signing a file that's already > > uploaded. > > Thanks. I can now stop futilely trying to find it :-) Twine lets you at least separate signing from the build step, though: https://pypi.python.org/pypi/twine (Also, doesn't setup.py upload use HTTPS by default now? That part of the twine docs may need qualification) Cheers, Nick. > > -- > \ "It's up to the masses to distribute [music] however they want | > `\ ... The laws don't matter at that point. People sharing music in | > _o__) their bedrooms is the new radio."
--Neil Young, 2008-05-06 | > Ben Finney > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Feb 23 01:05:47 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 22 Feb 2015 19:05:47 -0500 Subject: [Distutils] Upload signature (and signing key) after package upload In-Reply-To: References: <854mqd8qsu.fsf@benfinney.id.au> <85zj857bhv.fsf@benfinney.id.au> Message-ID: <95103153-F416-4142-A2DD-43DC689434AE@stufft.io> > On Feb 22, 2015, at 6:55 PM, Nick Coghlan wrote: > > > On 23 Feb 2015 09:50, "Ben Finney" > wrote: > > > > Richard Jones > writes: > > > > > Sorry, there's no facility at present for signing a file that's already > > > uploaded. > > > > Thanks. I can now stop futilely trying to find it :-) > > Twine lets you at least separate signing from the build step, though: https://pypi.python.org/pypi/twine > (Also, doesn't setup.py upload use HTTPS by default now? That part of the twine docs may need qualification) > > Yes and no. Some of the available Pythons have been updated to use a HTTPS connection, however they don't verify them. Python 2.7.9 should (I believe, I haven't actually tested this!) add verification to that.
I think that Python 3.4.3 includes that as well (if 2.7.9 does then 3.4.3 should as well). That of course doesn't affect anyone using 2.6, 2.7.0-2.7.8, 3.2, 3.3, and 3.4.0-3.4.2. > > There's an issue here about it: https://github.com/pypa/twine/issues/93 > > I'm not opposed to changing the wording, but I am opposed to changing it to something that makes it sound like, in general, it's now safe to use ``setup.py upload``, because it still isn't unless you meet certain specific criteria (specifically you only ever interact with PyPI with the latest released version of 2.7). It's the qualifier that the latest versions of Python also have the security issue fixed properly that I think would be worthwhile. Updating the twine docs can likely wait until after 3.4.3 goes out, though. Cheers, Nick. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Tue Feb 24 15:27:21 2015 From: holger at merlinux.eu (holger krekel) Date: Tue, 24 Feb 2015 14:27:21 +0000 Subject: [Distutils] devpi maintenance releases Message-ID: <20150224142721.GC17945@merlinux.eu> Hi all, Florian Schulze just released several devpi package maintenance updates to PyPI, see the changelogs below for details. Upgrading is considered safe and does not require an export/import cycle on the server side. Note that the "devpi" metapackage is discontinued, please rather use:: pip install devpi-server devpi-client if you want to install both server and client. You can also install devpi-web if you want more web UI including search. Docs for devpi are here, including quickstart tutorials: http://doc.devpi.net have fun, holger

devpi-client-2.0.5
------------------
- fix issue209: argument default handling changed in argparse in Python 2.7.9.
- fix issue163: use PIP_CONFIG_FILE environment variable if set.
- fix issue191: provide return code !=0 for failures during push

devpi-server-2.1.4
------------------
- fix issue214: the whitelisting code stopped inheritance too early.
- fix regression: easy_install went to the full simple project list for a non existing project.
- When uploading an existing version to a non-volatile index, it's now a no-op instead of an error if the content is identical. If the content is different, it's still an error.
- Uploading documentation to non-volatile indexes is now protected the same way as packages.
- added code to allow filtering on packages with stable version numbers.
- Change nginx template to set the X-outside-url header based on the requested URL. This makes it possible to connect by IP address when the server name is not in DNS.

devpi-web-2.2.3
---------------
- fix issue207: added documentation url for latest stable release of a package.

devpi-common-2.0.5
------------------
- added code to allow filtering on stable version numbers.