From marius at gedmin.as Mon Feb 1 02:25:25 2016 From: marius at gedmin.as (Marius Gedminas) Date: Mon, 1 Feb 2016 09:25:25 +0200 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: Message-ID: <20160201072525.GA30285@platonas> On Sat, Jan 30, 2016 at 12:17:02PM -0800, Matthew Brett wrote: > Hi, > > > I can confirm that Debian and Anaconda builds of CPython 2.7 both have > > sys.maxunicode == 0x10ffff, but Enthought Canopy has sys.maxunicode == > > 0xffff. Hmm. I guess they should fix that. > > > > Also the manylinux docker image currently has sys.maxunicode == > > 0xffff, so we should definitely fix that :-). > > A quick check on Ubuntu 12.04, Debian sid, Centos 7.2 confirms wide > unicode by default. Are there any known distributions providing UCS2 > unicode Pythons? I don't know of any. Pyenv, OTOH, deliberately uses upstream defaults and so produces narrow unicode builds. Marius Gedminas -- If you are angry with someone, you should walk a mile in their shoes... then you'll be a mile away from them, and you'll have their shoes. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From ncoghlan at gmail.com Mon Feb 1 06:38:08 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 1 Feb 2016 21:38:08 +1000 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <14911ACD-028D-4121-ABC1-73629B5B09AA@stufft.io> References: <14911ACD-028D-4121-ABC1-73629B5B09AA@stufft.io> Message-ID: On 31 January 2016 at 02:49, Donald Stufft wrote: > >> On Jan 30, 2016, at 3:58 AM, Nick Coghlan wrote: >> >> I also think this version covers everything we need it to cover, so >> I'm going to mark it as Active and point to this post as the >> resolution :) > > Hilariously I just read on Hacker news: > > "Incidentally, we have hired Natanael Copa, the awesome creator of Alpine Linux and are in the process of switching the Docker official image library from ubuntu to Alpine." [1] > > So we might end up needing a MUSL based platform tag in the near future ;) Aye, I saw that, too. Alpine is currently the only MUSL based distro I'm aware of, so it may be enough to ensure that we include Alpine in the list of distros we ensure are correctly supported by distro-specific wheel tags. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From cournape at gmail.com Mon Feb 1 07:51:30 2016 From: cournape at gmail.com (David Cournapeau) Date: Mon, 1 Feb 2016 12:51:30 +0000 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: Message-ID: On Sat, Jan 30, 2016 at 8:37 AM, Nathaniel Smith wrote: > On Fri, Jan 29, 2016 at 11:52 PM, Nick Coghlan wrote: > > On 30 January 2016 at 09:29, Nathaniel Smith wrote: > >> Hi all, > >> > >> I think this is ready for pronouncement now -- thanks to everyone for > >> all their feedback over the last few weeks! > >> > >> The only change relative to the last posting is that we rewrote the > >> section on "Platform detection for installers", to switch to letting > >> distributors explicitly control manylinux1 compatibility by means of a > >> _manylinux module.
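For readers following along: the "_manylinux module" mechanism mentioned above lets a distributor declare compatibility explicitly, with a glibc-version heuristic as the fallback. A paraphrased sketch of that detection flow, simplified from the PEP's example code (not verbatim; e.g. it uses `platform.machine()` rather than the PEP's `distutils` call, and folds in the wide-unicode check added later in this thread):

```python
import ctypes
import platform
import sys

def have_compatible_glibc(major, minimum_minor):
    """Heuristic check: is the running C library glibc >= major.minimum_minor?"""
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError, TypeError):
        return False  # no glibc at all (musl, macOS, Windows, ...)
    gnu_get_libc_version.restype = ctypes.c_char_p
    version = gnu_get_libc_version().decode("ascii").split(".")
    return int(version[0]) == major and int(version[1]) >= minimum_minor

def is_manylinux1_compatible():
    if not sys.platform.startswith("linux"):
        return False
    if platform.machine() not in ("x86_64", "i686"):
        return False  # manylinux1 covers only these two architectures
    try:
        import _manylinux  # distributors may ship this module as an explicit override
        return bool(_manylinux.manylinux1_compatible)
    except (ImportError, AttributeError):
        pass
    if sys.version_info < (3, 3) and sys.maxunicode <= 0xffff:
        return False  # narrow-unicode builds can't use manylinux1 wheels
    return have_compatible_glibc(2, 5)  # CentOS 5 ships glibc 2.5

print(is_manylinux1_compatible())
```

An installer like pip would consult a check of this shape before offering `manylinux1_*` wheels.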
> > > > In terms of the proposal itself, I think this version is excellent :) > > > > However, I realised that there's an implicit assumption we've been > > making that really should be spelled out explicitly: manylinux1 wheels > > targeting CPython 3.2 and earlier need to be compiled against a > > CPython built in wide Unicode mode, and in those cases, the detection > > of manylinux1 compatibility at the platform level should include > > checking for "sys.maxunicode > 0xFFFF". > > Doh, excellent catch! > > I've just pushed the obvious update to handle this directly to the > copy of the PEP in the manylinux repository. > > Diff: > https://github.com/manylinux/manylinux/commit/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc > > New text in full: > > https://raw.githubusercontent.com/manylinux/manylinux/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc/pep-513.rst > > I haven't sent to the PEP editors, because they already have another > diff from me sitting in their inboxes and I'm not sure how to do this > in a way that doesn't confuse things :-) > > > The main reason we need to spell this out explicitly is that while > > distros (and I believe other redistributors) build CPython-for-Linux > > in wide mode as a matter of course, a Linux checkout of CPython 2.7 > > will build in narrow mode by default. > > I can confirm that Debian and Anaconda builds of CPython 2.7 both have > sys.maxunicode == 0x10ffff, but Enthought Canopy has sys.maxunicode == > 0xffff. Hmm. I guess they should fix that. > Yes, they should :) I am not sure why it was built this way (before my time), it is unfortunately not easy to fix when you have a large existing customer base. David > Also the manylinux docker image currently has sys.maxunicode == > 0xffff, so we should definitely fix that :-). > > -n > > -- > Nathaniel J. 
Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Mon Feb 1 11:18:07 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Mon, 1 Feb 2016 11:18:07 -0500 Subject: [Distutils] draft PEP: manylinux1 In-Reply-To: <0B446D67-F066-4215-8117-4C55C74C4C68@stufft.io> References: <56A09EDF.3000500@egenix.com> <56A0AD76.7090107@egenix.com> <56A1F757.4050107@egenix.com> <56A20901.1050300@egenix.com> <0B446D67-F066-4215-8117-4C55C74C4C68@stufft.io> Message-ID: On Fri, Jan 29, 2016 at 11:44 PM, Donald Stufft wrote: > > On Jan 29, 2016, at 2:35 PM, Nate Coraor wrote: > > Is there a distro-specific wheel tagging PEP in development somewhere that > I missed? If not, I will get the ball rolling on it. > > > > I think this a great idea, and I think it actually pairs nicely with the > manylinux proposal. It should be pretty easy to cover the vast bulk of > users with a handful of platform specific wheels (1-3ish) and then a > manylinux wheel to cover the rest. It would let a project use newer > toolchains/libraries in the common case, but still fall back to the older > ones on more unusual platforms. > Fantastic, this is exactly the sort of usage I was hoping to see. I'll move forward with it, then. --nate > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nate at bx.psu.edu Mon Feb 1 11:21:07 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Mon, 1 Feb 2016 11:21:07 -0500 Subject: [Distutils] draft PEP: manylinux1 In-Reply-To: References: <56A09EDF.3000500@egenix.com> <56A0AD76.7090107@egenix.com> <56A1F757.4050107@egenix.com> Message-ID: On Fri, Jan 29, 2016 at 9:14 PM, Nick Coghlan wrote: > On 30 January 2016 at 05:30, Nate Coraor wrote: > > I wonder if, in relation to this, it may be best to have two separate > tags: > > one to indicate that the wheel includes external libraries rolled in to > it, > > and one to indicate that it doesn't. That way, a user can make a > conscious > > decision as to whether they want to install any wheels that could include > > libraries that won't be maintained by the distribution package manager. > That > > way if we end up in a future world where manylinux wheels and > > distro-specific wheels (that may depend on non-default distro packages) > live > > in PyPI together, there'd be a way to indicate a preference. > > I don't think we want to go into that level of detail in the platform > tag, but metadata for bundled pre-built binaries in wheels and > vendored dependencies in sdists is worth considering as an enhancement > in its own right. > I thought the same thing - the only reason I proposed tags is that it was my understanding that such metadata is not available to installation tool(s) until the distribution is fetched and inspected. If my limited understanding is incorrect then I agree that having this in the tags is too much. --nate > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Feb 1 14:48:19 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 1 Feb 2016 11:48:19 -0800 Subject: [Distutils] [final version?]
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <20160201072525.GA30285@platonas> References: <20160201072525.GA30285@platonas> Message-ID: On Sun, Jan 31, 2016 at 11:25 PM, Marius Gedminas wrote: > On Sat, Jan 30, 2016 at 12:17:02PM -0800, Matthew Brett wrote: >> Hi, >> >> > I can confirm that Debian and Anaconda builds of CPython 2.7 both have >> > sys.maxunicode == 0x10ffff, but Enthought Canopy has sys.maxunicode == >> > 0xffff. Hmm. I guess they should fix that. >> > >> > Also the manylinux docker image currently has sys.maxunicode == >> > 0xffff, so we should definitely fix that :-). >> >> A quick check on Ubuntu 12.04, Debian sid, Centos 7.2 confirms wide >> unicode by default. Are there any known distributions providing UCS2 >> unicode Pythons? > > I don't know of any. I also tested on Debian wheezy 32-bit; Fedora 22 (32-bit packages); openSUSE 13.2, recent Arch and Gentoo, these are all wide unicode. > Pyenv, OTOH, deliberately uses upstream defaults and so produces narrow > unicode builds. Ouch - good to know. Cheers, Matthew From doko at ubuntu.com Mon Feb 1 18:37:57 2016 From: doko at ubuntu.com (Matthias Klose) Date: Tue, 2 Feb 2016 00:37:57 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: Message-ID: <56AFEC55.30706@ubuntu.com> On 30.01.2016 00:29, Nathaniel Smith wrote: > Hi all, > > I think this is ready for pronouncement now -- thanks to everyone for > all their feedback over the last few weeks! I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). The proposal just takes some environment and declares that as a standard. So everybody wanting to supply these wheels basically has to use this environment. 
Without giving any details, without giving any advice how to produce such wheels in other environments. Without giving any hints how such wheels may be broken with newer environments. Without mentioning this is amd64/i386 only. There might be more. Pretty please be specific about your environment. Have a look how the LSB specifies requirements on the runtime environment ... and then ask yourself why the lsb doesn't have any real value. Matthias From tritium-list at sdamon.com Mon Feb 1 18:47:50 2016 From: tritium-list at sdamon.com (Alexander Walters) Date: Mon, 01 Feb 2016 18:47:50 -0500 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56AFEC55.30706@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> Message-ID: <56AFEEA6.9010704@sdamon.com> On 2/1/2016 18:37, Matthias Klose wrote: > On 30.01.2016 00:29, Nathaniel Smith wrote: >> Hi all, >> >> I think this is ready for pronouncement now -- thanks to everyone for >> all their feedback over the last few weeks! > > I don't think so. I am biased because I'm the maintainer for Python > in Debian/Ubuntu. So I would like to have some feedback from > maintainers of Python in other Linux distributions (Nick, no, you're > not one of these). > > The proposal just takes some environment and declares that as a > standard. So everybody wanting to supply these wheels basically has > to use this environment. Without giving any details, without giving > any advice how to produce such wheels in other environments. Without > giving any hints how such wheels may be broken with newer > environments. Without mentioning this is amd64/i386 only. > There might be more. Pretty please be specific about your > environment. Have a look how the LSB specifies requirements on the > runtime environment ... and then ask yourself why the lsb doesn't have > any real value. > > Matthias > I...
Thought the environment this pep describes is the docker image, and only the docker image, and anything not made on that docker image is in violation of the pep. From donald at stufft.io Mon Feb 1 19:30:41 2016 From: donald at stufft.io (Donald Stufft) Date: Mon, 1 Feb 2016 19:30:41 -0500 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56AFEC55.30706@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> Message-ID: <945A58B8-87DC-42DD-B215-6DCB0B047B9E@stufft.io> > On Feb 1, 2016, at 6:37 PM, Matthias Klose wrote: > > On 30.01.2016 00:29, Nathaniel Smith wrote: >> Hi all, >> >> I think this is ready for pronouncement now -- thanks to everyone for >> all their feedback over the last few weeks! > > I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). > > The proposal just takes some environment and declares that as a standard. So everybody wanting to supply these wheels basically has to use this environment. Without giving any details, without giving any advice how to produce such wheels in other environments. Without giving any hints how such wheels may be broken with newer environments. I'm not sure this is true. It tells you exactly what versions of glibc and other libraries it is allowed to link against. It can link against older if it wants, it can't link against newer. > Without mentioning this is amd64/i386 only. First sentence: This PEP proposes the creation of a new platform tag for Python package built distributions, such as wheels, called manylinux1_{x86_64,i686} with external dependencies limited to a standardized, restricted subset of the Linux kernel and core userspace ABI. Later on: Because CentOS 5 is only available for x86_64 and i686 architectures, these are the only architectures currently supported by the manylinux1 policy.
I think it's a reasonable policy too, AMD64 is responsible for an order of magnitude more downloads than all other architectures on Linux combined (71,424,040 vs 1,086,527 in current data set). If you compare AMD64+i386 against everything else then you're looking at two orders of magnitude (72,142,511 vs 368,056). I think we can live with a solution that covers 99.5% of all Linux downloads from PyPI. > There might be more. Pretty please be specific about your environment. Have a look how the LSB specifies requirements on the runtime environment ... and then ask yourself why the lsb doesn't have any real value. > Instead of vague references to the LSB, can you tell us why you think the LSB doesn't have any real value, and importantly, how that relates to trying to determine a minimum set of binary ABI. In addition, can you clarify why, if your assertion is this isn't going to work, can you state why it won't work when it is working for many users in the wild through Anaconda, Enthought, the Holy Build Box, etc? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From rmcgibbo at gmail.com Mon Feb 1 19:31:52 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Mon, 1 Feb 2016 16:31:52 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56AFEEA6.9010704@sdamon.com> References: <56AFEC55.30706@ubuntu.com> <56AFEEA6.9010704@sdamon.com> Message-ID: On Mon, Feb 1, 2016 at 3:47 PM, Alexander Walters wrote: > > > On 2/1/2016 18:37, Matthias Klose wrote: > >> On 30.01.2016 00:29, Nathaniel Smith wrote: >> >>> Hi all, >>> >>> I think this is ready for pronouncement now -- thanks to everyone for >>> all their feedback over the last few weeks!
>>> >> >> I don't think so. I am biased because I'm the maintainer for Python in >> Debian/Ubuntu. So I would like to have some feedback from maintainers of >> Python in other Linux distributions (Nick, no, you're not one of these). >> >> The proposal just takes some environment and declares that as a >> standard. So everybody wanting to supply these wheels basically has to use >> this environment. Without giving any details, without giving any advice how >> to produce such wheels in other environments. Without giving any hints how >> such wheels may be broken with newer environments. Without mentioning this >> is amd64/i386 only. >> There might be more. Pretty please be specific about your environment. >> Have a look how the LSB specifies requirements on the runtime environment >> ... and then ask yourself why the lsb doesn't have any real value. >> >> Matthias >> >> I... Thought the environment this pep describes is the docker image, and > only the docker image, and anything not made on that docker image is in > violation of the pep. It's not correct that anything made outside the docker image is in violation of the PEP. The docker images are just tools that can help you compile compliant wheels. Nathaniel and I tried to describe this as precisely as we could. See this section of the PEP. To comply with the policy, the wheel needs to (a) not link against any other external libraries beyond those mentioned in the PEP, (b) *work* on a stock CentOS 5.11 machine, and (c) not use any narrow-unicode symbols (only relevant for CPython 3.2 and earlier). A consequence of requirements (a) and (b) is that versioned symbols that are referenced in the depended-upon shared libraries need to use sufficiently old versions of the symbols, which are noted in the PEP as well.
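The versioned-symbol requirement in (a) and (b) is concrete enough to check mechanically: `auditwheel` does this for you, but at its core it just scans the dynamic symbol table (e.g. `objdump -T` output) for the newest `GLIBC_x.y` version a binary references. A toy parser over illustrative `objdump -T`-style output (the sample lines below are made up for demonstration, not from a real wheel):

```python
import re

# Hypothetical `objdump -T` output for some extension module's .so file.
SAMPLE_OBJDUMP = """\
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 memcpy
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.14  memcpy
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 malloc
"""

def max_glibc_version(objdump_output):
    """Return the highest GLIBC_x.y[.z] symbol version referenced, as a tuple."""
    versions = [
        tuple(int(part) for part in match.group(1).split("."))
        for match in re.finditer(r"GLIBC_([0-9][0-9.]*[0-9])", objdump_output)
    ]
    return max(versions) if versions else None

# memcpy@GLIBC_2.14 is the newest reference in the sample, so a binary like
# this would need glibc >= 2.14 -- too new for the CentOS 5 / glibc 2.5 baseline.
print(max_glibc_version(SAMPLE_OBJDUMP))
```

This is exactly why building on an old distribution (or pinning symbol versions) matters: the toolchain you link with determines which symbol versions end up in the table.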
In order to satisfy this aspect of the policy, the easiest route, from our experience, is to compile the wheel inside a CentOS 5 environment, but other routes are possible, including - statically link everything - use your favorite linux distro, but install an older version of glibc and configure your compiler to point to that. - use some inline assembly to instruct the linker to prefer older symbol versions in libraries like glibc. - etc. I also wrote the auditwheel command line tool that can check to see if a wheel is manylinux1 compatible, and give you some feedback about what to fix if it's not. And furthermore, I've just put up an example project on github that you can use as a template for compiling manylinux1 wheels using Travis-CI. You can find it here: https://github.com/pypa/python-manylinux-demo -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Feb 1 19:40:40 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 1 Feb 2016 16:40:40 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56AFEC55.30706@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> Message-ID: On Mon, Feb 1, 2016 at 3:37 PM, Matthias Klose wrote: > On 30.01.2016 00:29, Nathaniel Smith wrote: >> >> Hi all, >> >> I think this is ready for pronouncement now -- thanks to everyone for >> all their feedback over the last few weeks! > > > I don't think so. I am biased because I'm the maintainer for Python in > Debian/Ubuntu. Thank you for your work! I've been using Debian/Ubuntu as my sole desktop and for Python development since ~2000, so it's very much appreciated :-). > So I would like to have some feedback from maintainers of > Python in other Linux distributions (Nick, no, you're not one of these). > > The proposal just takes some environment and declares that as a standard. 
> So everybody wanting to supply these wheels basically has to use this > environment. Yeah, pretty much. The spec says "we define the ABI by reference to this environment" and if you can find some other ways to build these wheels without using that environment, then go for it, but in practice I doubt there's much point. CentOS 5 is widely available, free (in all senses), and works. Even if it does require one to use weird tools like "yum" and "rpm" whose UIs are constantly confusing me and make no sense whatsoever [1]. [1] i.e., they aren't identical to apt / dpkg. > Without giving any details, without giving any advise how to > produce such wheels in other environments. There isn't a lot of end-user-focused documentation in the PEP itself, it's true, but that's because that's how PEPs work. Here's an example project for building manylinux1 wheels using Travis-CI (which runs some version of Ubuntu): https://github.com/pypa/python-manylinux-demo Obviously this is all still a work in progress ("latest commit: 40 minutes ago"). Better docs are coming :-) > Without giving any hints how such > wheels may be broken with newer environments. What kind of hints are you suggesting? -n -- Nathaniel J. Smith -- https://vorpus.org From glyph at twistedmatrix.com Mon Feb 1 20:35:27 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Mon, 1 Feb 2016 17:35:27 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56AFEC55.30706@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> Message-ID: <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> > On Feb 1, 2016, at 3:37 PM, Matthias Klose wrote: > > On 30.01.2016 00:29, Nathaniel Smith wrote: >> Hi all, >> >> I think this is ready for pronouncement now -- thanks to everyone for >> all their feedback over the last few weeks! > > I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. 
So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). Possibly, but it would be very helpful for such maintainers to limit their critique to "in what scenarios will this fail for users" and not have the whole peanut gallery chiming in with "well on _my_ platform we would have done it _this_ way". I respect what you've done for Debian and Ubuntu, Matthias, and I use the heck out of that work, but honestly this whole message just comes across as sour grapes that someone didn't pick a super-old Debian instead of a super-old Red Hat. I don't think it's promoting any progress. > The proposal just takes some environment and declares that as a standard. So everybody wanting to supply these wheels basically has to use this environment. There's already been lots of discussion about how this environment is a lowest common denominator. Many other similar environments could _also_ be lowest common denominator. > Without giving any details, without giving any advise how to produce such wheels in other environments. Without giving any hints how such wheels may be broken with newer environments. They won't be. That's the whole point. > Without mentioning this is am64/i386 only. Wheels already have an architecture tag, separate from the platform tag, so this being "am64/i386" is irrelevant. > There might be more. Pretty please be specific about your environment. Have a look how the LSB specifies requirements on the runtime environment ... and then ask yourself why the lsb doesn't have any real value. In the future, more specific and featureful distro tags sound like a good idea. But could we please stop making the default position on distutils-sig "this doesn't cater to my one specific environment in the most optimal possible way, so let's give up on progress entirely"? 
This is a good proposal that addresses environment portability and gives Python a substantially better build-artifact story than it currently has, in the environment most desperately needing one (server-side linux). Could it be better? Of course. It could be lots better. There are lots of use-cases for dynamically linked wheels and fancy new platform library features in newer linuxes. But that can all come later, and none of it needs to have an impact on this specific proposal, right now. -glyph From altsheets+mailinglists at gmail.com Tue Feb 2 16:48:17 2016 From: altsheets+mailinglists at gmail.com (AltSheets Dev) Date: Tue, 2 Feb 2016 22:48:17 +0100 Subject: [Distutils] setup('postinstall'='my.py') Message-ID: Hello everyone on distutils-sig@, this is a first timer, please be gentle to me *g* I am just starting with setuptools, and I just cannot get my idea working: At install time, I'd like to generate a file with a random UID, which will later always be the same. I had hoped for a setup('postinstall'='my.py') or setup('preinstall'= ...) but there isn't one. Then I have been trying with a customized distutils.command.install.install class - but so far with no success. Here is a detailed explanation of my futile attempts: https://github.com/altsheets/coinquery/blob/master/README-setupCustom.md I guess this is a pretty frequent question? Happy about any hints. Thanks. :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmcgibbo at gmail.com Tue Feb 2 18:00:02 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Tue, 2 Feb 2016 15:00:02 -0800 Subject: [Distutils] setup('postinstall'='my.py') In-Reply-To: References: Message-ID: One very simple technique used by some projects like numpy is just to have ``setup.py`` write a file into the source tree before calling setup().
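A minimal sketch of the build-time technique Robert describes (the function and file names here are hypothetical, not taken from the numpy source): generate the file inside your package directory before `setup()` runs, so it ships in the resulting sdist/wheel:

```python
import os
import uuid

def write_uid_module(package_dir):
    """Write <package_dir>/_uid.py containing a UID fixed at build time.

    An existing UID is kept, so repeated builds stay stable.
    """
    path = os.path.join(package_dir, "_uid.py")
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write("UID = %r\n" % uuid.uuid4().hex)
    return path

# In setup.py, before the setup() call:
#     write_uid_module("mypackage")
#     setup(name="mypackage", packages=["mypackage"], ...)
```

Note the caveat that surfaces later in this thread: this code runs when the distribution is *built*, so every install of the same artifact shares one UID.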
example: https://github.com/numpy/numpy/blob/master/setup.py#L338-L339 -Robert On Tue, Feb 2, 2016 at 1:48 PM, AltSheets Dev < altsheets+mailinglists at gmail.com> wrote: > Hello everyone on distutils-sig@, > this is a first timer, please be gentle to me *g* > > I am just starting with setuptools, > and I just cannot get my idea working: > > At install time, > I'd like to generate a file with a random UID, > which will later always be the same. > > I had hoped for a setup('postinstall'='my.py') or setup('preinstall'= ...) > but there isn't one. > > Then I have been trying with a customized > distutils.command.install.install > class - but so far with no success. > > Here is a detailed explanation of my futile attempts: > https://github.com/altsheets/coinquery/blob/master/README-setupCustom.md > > I guess this is be a pretty frequent question? > > Happy about any hints. > > Thanks. > :-) > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From altsheets+mailinglists at gmail.com Wed Feb 3 10:31:17 2016 From: altsheets+mailinglists at gmail.com (AltSheets Dev) Date: Wed, 3 Feb 2016 16:31:17 +0100 Subject: [Distutils] setup('postinstall'='my.py') Message-ID: Thanks a lot! I could get that working! See setup.py lines 13 and 69 in https://github.com/altsheets/coinquery/blob/f1759a6dd5cd891f493da343c52e8184b45fc926/setup.py BUT my custom install routines are called when the windows installer binary is CREATED, not when it is UNPACKED on the target machine. e.g. this coinquery-0.2.4.win-amd64.exe https://github.com/altsheets/coinquery/tree/master/debug when unpacked ... always results in the same UID sz4u0zrqkas3q8s3t0se you can see that when you type hello Any hints? Thanks. On 3 February 2016 at 00:00, Robert T. 
McGibbon wrote: > One very simple technique used by some projects like numpy is just to have > ``setup.py`` write a file into the source tree before calling setup(). > > example: https://github.com/numpy/numpy/blob/master/setup.py#L338-L339 > > -Robert > > On Tue, Feb 2, 2016 at 1:48 PM, AltSheets Dev < > altsheets+mailinglists at gmail.com> wrote: > >> Hello everyone on distutils-sig@, >> this is a first timer, please be gentle to me *g* >> >> I am just starting with setuptools, >> and I just cannot get my idea working: >> >> At install time, >> I'd like to generate a file with a random UID, >> which will later always be the same. >> >> I had hoped for a setup('postinstall'='my.py') or setup('preinstall'= >> ...) but there isn't one. >> >> Then I have been trying with a customized >> distutils.command.install.install >> class - but so far with no success. >> >> Here is a detailed explanation of my futile attempts: >> https://github.com/altsheets/coinquery/blob/master/README-setupCustom.md >> >> I guess this is be a pretty frequent question? >> >> Happy about any hints. >> >> Thanks. >> :-) >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > > > -- > -Robert > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leorochael at gmail.com Wed Feb 3 11:10:39 2016 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Wed, 3 Feb 2016 14:10:39 -0200 Subject: [Distutils] setup('postinstall'='my.py') In-Reply-To: References: Message-ID: Hi, The thing about setup.py is that it only runs at "build time". However, it looks like you want to generate a unique id at every install. Since you mentioned a windows installer, consider adding some code to the installer itself so that it calls an entry point in your library to generate this unique id at that moment. 
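Leonardo's advice (have the installer call an entry point in the library) generalizes to the "create the UUID on first run" pattern that comes up later in this thread: don't generate the UID at build or install time at all, but lazily on first use. A sketch, with an arbitrary cache-file location chosen purely for illustration:

```python
import os
import uuid

def get_instance_uid(cache_path=os.path.expanduser("~/.myapp_uid")):
    """Return this installation's UID, creating and caching it on first call."""
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return f.read().strip()
    uid = uuid.uuid4().hex
    with open(cache_path, "w") as f:
        f.write(uid)
    return uid
```

This sidesteps installer hooks entirely and behaves the same whether the package arrives via pip, a wheel, or a Windows installer.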
Sorry, there isn't any support in setuptools to do this from within setup.py itself. Regards, Leo On 3 February 2016 at 13:31, AltSheets Dev wrote: > Thanks a lot! > > I could get that working! > See setup.py lines 13 and 69 in > > https://github.com/altsheets/coinquery/blob/f1759a6dd5cd891f493da343c52e8184b45fc926/setup.py > > BUT > my custom install routines are called > when the windows installer binary is CREATED, > not when it is UNPACKED on the target machine. > > e.g. this coinquery-0.2.4.win-amd64.exe > https://github.com/altsheets/coinquery/tree/master/debug > > when unpacked ... always results in the same UID sz4u0zrqkas3q8s3t0se > you can see that when you type > > hello > > > Any hints? > > Thanks. > > > > On 3 February 2016 at 00:00, Robert T. McGibbon > wrote: > >> One very simple technique used by some projects like numpy is just to >> have ``setup.py`` write a file into the source tree before calling setup(). >> >> example: https://github.com/numpy/numpy/blob/master/setup.py#L338-L339 >> >> -Robert >> >> On Tue, Feb 2, 2016 at 1:48 PM, AltSheets Dev < >> altsheets+mailinglists at gmail.com> wrote: >> >>> Hello everyone on distutils-sig@, >>> this is a first timer, please be gentle to me *g* >>> >>> I am just starting with setuptools, >>> and I just cannot get my idea working: >>> >>> At install time, >>> I'd like to generate a file with a random UID, >>> which will later always be the same. >>> >>> I had hoped for a setup('postinstall'='my.py') or setup('preinstall'= >>> ...) but there isn't one. >>> >>> Then I have been trying with a customized >>> distutils.command.install.install >>> class - but so far with no success. >>> >>> Here is a detailed explanation of my futile attempts: >>> https://github.com/altsheets/coinquery/blob/master/README-setupCustom.md >>> >>> I guess this is be a pretty frequent question? >>> >>> Happy about any hints. >>> >>> Thanks. 
>>> :-) >>> >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >>> >> >> >> -- >> -Robert >> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy.kloth at gmail.com Wed Feb 3 13:57:43 2016 From: jeremy.kloth at gmail.com (Jeremy Kloth) Date: Wed, 3 Feb 2016 11:57:43 -0700 Subject: [Distutils] Fwd: setup('postinstall'='my.py') In-Reply-To: References: Message-ID: Resending to list... ---------- Forwarded message ---------- From: Jeremy Kloth Date: Wed, Feb 3, 2016 at 10:50 AM Subject: Re: [Distutils] setup('postinstall'='my.py') To: AltSheets Dev It appears that you are using bdist_wininst to create the installer. So you are in luck! The bdist_wininst command supports running simple scripts at a few points in the installation process. First is at the pre-install phase. This script doesn't need to be included in the installed files as it is stored in the installer binary directly. The next phase is post-install (after all files have been copied). This script *does* need to be included in the distribution, using the 'scripts' setup() argument. Finally, there is the pre-uninstall phase. This script is the same one as the post-install script. When run, the scripts have a few additional built-in functions available: - create_shortcut(path, desc, filename, [arguments, [workingdir, [iconpath, [iconindex]]]]) - get_special_folder_path(csidl_name) - get_root_hkey() - file_created(path) - directory_created(path) - message_box(text, caption, flags) To use an installer script, you need to pass options to the bdist_wininst command.
'--pre-install-script=<pathname>' for the pre-install script, where 'pathname' is the path to the script to include (relative to setup.py or fully-qualified). '--install-script=<scriptname>' for the post-install/pre-uninstall script, where 'scriptname' is the name used in the 'scripts' argument. distutils command options can be passed on the command-line or the 'command_options' argument to setup(). Unfortunately, to add a file to the installation you need to determine the full install path yourself in the install script. Here is (an untested) set of additions that should achieve what you are asking for.

setup.py:

    setup(...
          command_options={
              'bdist_wininst': {'pre-install-script': 'create-uuid.py'},
          },
          ...)

create-uuid.py:

    import os, sys
    # using functions you have already defined in 'setupCustom.py'
    # but should be included directly
    sitedir = os.path.join(sys.prefix, 'Lib', 'site-packages')
    createUIDfile(sitedir)
    uidfile = os.path.join(sitedir, SUBFOLDER, 'UID.py')
    # log the file so the installer removes it on uninstall
    file_created(uidfile)

Hope this helps! -- Jeremy Kloth From leorochael at gmail.com Wed Feb 3 14:04:18 2016 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Wed, 3 Feb 2016 17:04:18 -0200 Subject: [Distutils] Fwd: setup('postinstall'='my.py') In-Reply-To: References: Message-ID: Re-sending to the list as well: ---------- Forwarded message ---------- From: Leonardo Rochael Almeida Date: 3 February 2016 at 16:23 Subject: Re: [Distutils] setup('postinstall'='my.py') To: Alt Sheets Hi, [...] On 3 February 2016 at 14:45, Alt Sheets wrote: > > Thx for your answer! > No problem! > [...] > I am a bit surprised it turns out to be such a difficult thing to do. > Aren't postinstall scripts pretty standard? > They're common in infrastructure designed to install software packages into operating systems, e.g. dpkg/rpm for Linux distros and Windows installers. On the other hand, I don't think there's a post-install in the usual way of installing things on the Mac.
It's just a folder that is dragged into another folder (not a Mac user myself, so I could be wrong, there could be some side effects on this action). In any case, AFAIU, the use-case for pypa stack is mostly about distributing (Python) code, not end-user applications. There's also some consideration (and disagreement) about running arbitrary code during install, which could be a privileged operation compared to just using a library. > mentioned a windows installer, > Yes. > I want to deploy through windows installer, but also through pypi. > All ways necessary to get to the people. > Consider in which ways the users of your package might want or not the UUID to be generated automatically depending on whether they're using the installer or some other package. I, for example, would not want the UUID generated automatically in a machine that is used for automatic testing of your package, as I might want to use a pre-determined UUID to simulate certain situations... > thing about setup.py is that it only runs at "build time". > Ah. > > Does that mean that setup.py is then also not run during deployment > through pypi? > No. By the time a wheel is generated (or an egg, or any other bdist_* command is run) the job of the setup.py for your package is done. Unless, of course, you only upload the source package to PyPI. But in this case your setup.py will be run on every machine only as a side effect of the fact that your package would have to be built every time it is downloaded. On the other hand, if someone creates a local package index (or find-links location), they could put a pre-built package there and every install will use the pre-built package, with its already generated UUID. In your case, besides the tweaking of the Windows installer, I would consider the following strategies: - Create the UUID on first run (e.g. 
at import time) - Add a script (or function) for creating the UUID and document for your users that it needs to be called before using your library In the cases above, the first time your application is run could be under different privileges than those when it was installed, so it might not be able to write to the same places. So you need to consider where exactly you'll write this UUID, like: - Somewhere on the Windows Registry - A File in the user home Thanks a lot! Very helpful. > > :-) > My pleasure! -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Feb 3 14:29:41 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 3 Feb 2016 19:29:41 +0000 Subject: [Distutils] Fwd: setup('postinstall'='my.py') In-Reply-To: References: Message-ID: On 3 February 2016 at 19:04, Leonardo Rochael Almeida wrote: > Unless, of course, you only upload the source package to PyPI. But in this > case your setup.py will be run on every machine only as a side effect of the > fact that your package would have to be built every time it is downloaded. Modern versions of pip would build once and then cache the generated wheel on all subsequent installs. Long story short - for distribution Python *packages*, there is no real "post-install" step, and your setup.py cannot even be guaranteed to be run on the target machine, let alone for every install. >From the OP's description, he may in fact be building a Python *application*. Many Python applications can be (and are) distributed as packages with entry points for the actual executable command(s). But that's only really the correct option for applications that can live with the limitations involved (no post-install step, etc). Looking at your application (which I deduce is at https://github.com/altsheets/coinquery) it looks like the sort of thing that would *almost* work as a package distribution, except for this "UID" initial setup step. 
Have you thought of maybe not relying on the UID being generated at "install" time, but rather having a "coinq init" command that the user runs *once* to set up his/her UID? The other commands could then check for that UID wherever you choose to store it, and if they don't find it bail out with an error message something like "You need to initialise your environment with 'coinq init' before running coinq commands". (Or they could even automatically run init if they find no UID). I know this isn't how you originally thought of the design - it's a question of whether you want to modify the design to work with the tools you have, or use different tools that let you achieve what you want (in this case, a standard installer and some sort of py2exe or zipapp type of solution). Paul From greg.ewing at canterbury.ac.nz Wed Feb 3 15:39:31 2016 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 04 Feb 2016 09:39:31 +1300 Subject: [Distutils] Fwd: setup('postinstall'='my.py') In-Reply-To: References: Message-ID: <56B26583.6040402@canterbury.ac.nz> Leonardo Rochael Almeida wrote: > On the other hand, I don't think there's a post-install in the usual way > of installing things on the Mac. It's just a folder that is dragged into > another folder (not a Mac user myself, so I could be wrong, there could > be some side effects on this action). A lot of Mac software is distributed that way, but not all. There is an installer mechanism, using .pkg files (the moral equivalent of a .msi file) that can run whatever scripts it needs during installation. -- Greg From ncoghlan at gmail.com Thu Feb 4 03:14:15 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 4 Feb 2016 18:14:15 +1000 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56AFEC55.30706@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> Message-ID: On 2 Feb 2016 10:45, "Matthias Klose" wrote: > > On 30.01.2016 00:29, Nathaniel Smith wrote: >> >> Hi all, >> >> I think this is ready for pronouncement now -- thanks to everyone for >> all their feedback over the last few weeks! > > > I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). The RHEL ABI maintainers raised very similar concerns (that's how I found out about the current status of libabigail), but they're in a similar position to you: Linux distros provide *much* stronger guarantees of ABI compatibility than this PEP even attempts to provide. Here's the key distinction though: not everything is a mission critical production server, and we know from cross-distro redistributors like Continuum Analytics & Enthought, as well as our own upstream experience with the extraordinarily informal ABIs for Windows and Mac OS X wheels that an underspecified approach like manylinux is likely to be good enough in practice. That puts the power in users' hands: * by default, most users will get to use developer provided builds, rather than having to build from source (bringing the Linux user experience up to parity with that for Windows and Mac OS X) * folks can still force the use of source builds if they prefer (e.g. to better optimise for their OS and hardware, or to change the threat profile for their open source project usage) * folks can still opt to use a third party build service if they prefer (whether that's a Linux distro or a cross platform redistributor) Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Thu Feb 4 06:22:32 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 4 Feb 2016 21:22:32 +1000 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? Message-ID: While the manylinux PEP brings Linux up to comparable standing with Windows and Mac OS X in terms of distributing wheel files through PyPI, that does mean it still suffers from the same problem Windows does in relation to NumPy and SciPy wheels: no standardisation of the SSE capabilities of the machines. I figured that was independent of the manylinux PEP (since it affects Windows as well), but I'm also curious as to the current status (I found a couple of apparently relevant threads on the NumPy list, but figured it made more sense to just ask for an update rather than trusting my Google-fu) Cheers, Nick. P.S. I'm assuming the existing ability to publish NumPy & SciPy wheels for Mac OS X is based on Apple's tighter control of their hardware platform. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Thu Feb 4 06:42:19 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 4 Feb 2016 12:42:19 +0100 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? References: Message-ID: <20160204124219.04e01431@fsol> On Thu, 4 Feb 2016 21:22:32 +1000 Nick Coghlan wrote: > > I figured that was independent of the manylinux PEP (since it affects > Windows as well), but I'm also curious as to the current status (I > found a couple of apparently relevant threads on the NumPy list, but > figured it made more sense to just ask for an update rather than > trusting my Google-fu) While I'm not a Numpy maintainer, I don't think you can go much further than SSE2 (which is standard under the x86-64 definition). One factor is support by the kernel. 
The CentOS 5 kernel doesn't seem to support AVX, so you can't use AVX there even if your processor supports it (as the registers aren't preserved accross context switches). And one design point of manylinux is to support old Linux setups... (*) There are intermediate ISA additions between SSE2 and AVX (additions that don't require OS support), but I'm not sure they help much on compiler-vectorized code as opposed to hand-written assembly. Numpy's pre-compiled loops are typically quite straightforward as far as I've seen. One mitigation is to delegate some operations to an optimized library implementing the appropriate runtime switches: for example linear algebra is delegated by Numpy and Scipy to optimized BLAS and LINPACK libraries (which exist in various implementations such as OpenBLAS or Intel's MKL). (*) (this is an issue a JIT compiler helps circumvent: it generates optimal code for the current CPU ;-)) Regards Antoine. From njs at pobox.com Thu Feb 4 12:01:07 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 4 Feb 2016 09:01:07 -0800 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? In-Reply-To: References: Message-ID: On Feb 4, 2016 3:22 AM, "Nick Coghlan" wrote: > > While the manylinux PEP brings Linux up to comparable standing with > Windows and Mac OS X in terms of distributing wheel files through > PyPI, that does mean it still suffers from the same problem Windows > does in relation to NumPy and SciPy wheels: no standardisation of the > SSE capabilities of the machines. > > I figured that was independent of the manylinux PEP (since it affects > Windows as well), but I'm also curious as to the current status (I > found a couple of apparently relevant threads on the NumPy list, but > figured it made more sense to just ask for an update rather than > trusting my Google-fu) I'm not entirely sure what the SSE status is of the numpy OSX wheels. 
I think that they may be just following Apple's guidance on this (in the sense of: we tell their compiler to target a certain OS version and then use default options beyond that), but I'm not sure. It may even differ between the 32- and 64-bit "parts" of the fat binaries. Asking on numpy-discussion might net more details. Otherwise, yeah, the current plan is to jump to SSE2 as the minimum required version as the new wheels become usable, since all the evidence seems to say that it's ubiquitous now. > P.S. I'm assuming the existing ability to publish NumPy & SciPy wheels > for Mac OS X is based on Apple's tighter control of their hardware > platform. Not particularly. It's based on (a) Linux wheels aren't allowed on pypi (modulo bugs -- see pypi issue #385), (b) windows wheels are impossible because on that platform there's no F/OSS-compatible toolchain that can build cpython-abi-compatible BLAS or scipy. So OSX is what's left after all the competitors shot themselves in the foot :-) (manylinux is the solution to (a); mingwpy.github.io is the solution to (b).) Once that stuff is solved then yeah, it might be nice to have some better solution to the problem of ISA variations. But the most important part is already handled by runtime cpu sniffing in the blas library, and for the rest there are a variety of possible solutions (doing our own cpu sniffing, or adding something to wheels, or etc.), and just in general figuring out which solution is best is just not anywhere near the top of our priority list when we still can't distribute binaries to most users at all :-) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Feb 4 13:53:11 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 4 Feb 2016 10:53:11 -0800 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? 
In-Reply-To: References: Message-ID: On Thu, Feb 4, 2016 at 9:01 AM, Nathaniel Smith wrote: > On Feb 4, 2016 3:22 AM, "Nick Coghlan" wrote: >> >> While the manylinux PEP brings Linux up to comparable standing with >> Windows and Mac OS X in terms of distributing wheel files through >> PyPI, that does mean it still suffers from the same problem Windows >> does in relation to NumPy and SciPy wheels: no standardisation of the >> SSE capabilities of the machines. >> >> I figured that was independent of the manylinux PEP (since it affects >> Windows as well), but I'm also curious as to the current status (I >> found a couple of apparently relevant threads on the NumPy list, but >> figured it made more sense to just ask for an update rather than >> trusting my Google-fu) > > I'm not entirely sure what the SSE status is of the numpy OSX wheels. I > think that they may be just following Apple's guidance on this (in the sense > of: we tell their compiler to target a certain OS version and then use > default options beyond that), but I'm not sure. It may even differ between > the 32- and 64-bit "parts" of the fat binaries. Asking on numpy-discussion > might net more details. I'm more or less responsible for the numpy and scipy OSX wheels. The compiler flags for building come from the compiler flags for Python.org Python via distutils. As Nathaniel says, the big speed problem and opportunity is in the BLAS / LAPACK libraries, and we link against the Accelerate library for this, which comes installed on OSX. This seems to be well-tuned to the underlying hardware. Another option for BLAS / LAPACK is OpenBLAS which can do run-time CPU detection to select the fastest (and not-crashing) code-paths. > Otherwise, yeah, the current plan is to jump to SSE2 as the minimum required. > version as the new wheels become usable, since all the evidence seems to say > that it's ubiquitous now. 
Some of that evidence for Windows is listed at https://github.com/numpy/numpy/wiki/Windows-versions Also, SSE2 instructions are part of the specification of the AMD64 architecture [1] and so, quoting from [2] "The SSE2 instruction set is supported on all 64-bit CPUs and operating systems". Cheers, Matthew [1] https://courses.cs.washington.edu/courses/cse351/12wi/supp-docs/abi.pdf [2] http://www.agner.org/optimize/optimizing_cpp.pdf From cournape at gmail.com Fri Feb 5 04:33:15 2016 From: cournape at gmail.com (David Cournapeau) Date: Fri, 5 Feb 2016 09:33:15 +0000 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? In-Reply-To: <20160204124219.04e01431@fsol> References: <20160204124219.04e01431@fsol> Message-ID: On Thu, Feb 4, 2016 at 11:42 AM, Antoine Pitrou wrote: > On Thu, 4 Feb 2016 21:22:32 +1000 > Nick Coghlan wrote: > > > > I figured that was independent of the manylinux PEP (since it affects > > Windows as well), but I'm also curious as to the current status (I > > found a couple of apparently relevant threads on the NumPy list, but > > figured it made more sense to just ask for an update rather than > > trusting my Google-fu) > > While I'm not a Numpy maintainer, I don't think you can go much further > than SSE2 (which is standard under the x86-64 definition). > > One factor is support by the kernel. The CentOS 5 kernel doesn't > seem to support AVX, so you can't use AVX there even if your processor > supports it (as the registers aren't preserved accross context > switches). And one design point of manylinux is to support old Linux > setups... (*) > I don't have precise numbers, but I can confirm we get from times to times some customer reports related to avx not being supported (because of CPU or OS). > > There are intermediate ISA additions between SSE2 and AVX (additions > that don't require OS support), but I'm not sure they help much on > compiler-vectorized code as opposed to hand-written assembly. 
Numpy's > pre-compiled loops are typically quite straightforward as far as I've > seen. > > One mitigation is to delegate some operations to an optimized library > implementing the appropriate runtime switches: for example linear > algebra is delegated by Numpy and Scipy to optimized BLAS and LINPACK > libraries (which exist in various implementations such as OpenBLAS or > Intel's MKL). > > (*) (this is an issue a JIT compiler helps circumvent: it generates > optimal code for the current CPU ;-)) > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Feb 5 06:46:54 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 5 Feb 2016 21:46:54 +1000 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? In-Reply-To: References: Message-ID: On 4 February 2016 at 21:22, Nick Coghlan wrote: > While the manylinux PEP brings Linux up to comparable standing with > Windows and Mac OS X in terms of distributing wheel files through > PyPI, that does mean it still suffers from the same problem Windows > does in relation to NumPy and SciPy wheels: no standardisation of the > SSE capabilities of the machines. Thanks for the replies, folks! 
Checking I've understood the respective updates correctly:

- x86_64 implies SSE2 capability
- most i686 machines still in use are also SSE2 capable
- Accelerate provides native BLAS/LAPACK APIs for Mac OS X
- (ATLAS SSE2 or OpenBLAS) + manylinux should handle Linux
- (ATLAS SSE2 or OpenBLAS) + mingwpy.github.io should handle Windows
- Numba can optimise at runtime to use newer instructions when available

The choice between an SSE2 build of ATLAS and OpenBLAS as the default BLAS/LAPACK implementation doesn't appear to have been made yet, but also shouldn't significantly impact the user experience of the resulting wheels. Does that sound right? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robin at reportlab.com Fri Feb 5 09:26:32 2016 From: robin at reportlab.com (Robin Becker) Date: Fri, 5 Feb 2016 14:26:32 +0000 Subject: [Distutils] version conflict Message-ID: <56B4B118.4000002@chamonix.reportlab.co.uk> Hi, a user is trying to install reportlab and is getting an error I have not seen ie pkg_resources.VersionConflict: (certifi 2015.11.20.1 (/Library/Python/2.7/site-packages), Requirement.parse('certifi==2015.11.20')) the full traceback is visible here https://bitbucket.org/rptlab/reportlab/issues/73/versionconflict anyone here that can explain what's wrong? My pip installs of reportlab don't use any certifi packages etc etc. -- Robin Becker From altsheets+mailinglists at gmail.com Fri Feb 5 11:39:24 2016 From: altsheets+mailinglists at gmail.com (AltSheets Dev) Date: Fri, 5 Feb 2016 17:39:24 +0100 Subject: [Distutils] Fwd: setup('postinstall'='my.py') Message-ID: Thank you Robert, Leonardo, Jeremy, Paul, Greg! > One very simple technique used by some projects like > numpy is just to have ``setup.py`` write a file into the > source tree before calling setup(). example: > https://github.com/numpy/numpy/blob/master/setup.py#L338-L339 Done that now. But: > The thing about setup.py is that it only runs at "build time". Yes, ouch.
So ... not the way to go, in this case. Still good to know, useful for static but often updated version info, etc. > However, it looks like you want to generate a unique id at every install. Correct. The idea is to let me discover a possible abuse of serial keys - if a serial key appears together with many UUIDs, I can revoke it. (Not a protection against experts of course, but might still help.). My older frontend, a GoogleSheet (see altfolio.ddns.net) always automatically generates a UUID, in the moment when the user makes a copy of my mastersheet. That had inspired me now. Another goal of my questions on this list ... is learning about the whole process. I have been coding in Python for over a decade, but scientifically, sharing source code with other coders; or on a webserver - but I had never needed to make my code directly available to end users. I had always been wondering about packages & installers - now I am learning it. Your answers are great for me; *now* I understand all this much better. Thanks! > consider adding some code to the installer itself Wouldn't it be great if setuptools.setup provided that option, and OS-independent? > It appears that you are using bdist_wininst to create the installer. > So you are in luck! The bdist_wininst command supports running > simple scripts at a few points in the installation process. [...] > To use an installer script, you need to pass options to the > bdist_wininst command. '--pre-install-script=' for the > pre-install script, where 'pathname' is the path to the script to > include (relative to setup.py or fully-qualified). > '--install-script=' for the post-install/pre-uninstall script Great. That sounds as if it is perhaps the way to go then, thanks a lot. Sounds as if it can do exactly what I need. However then ... aren't I stuck with a windows-only solution? Would it be a valid feature request to make those two useful options (pre-install-script/install-script) available platform-independent? 
> Here is (an untested) set of additions that should achieve > what you are asking for. [...] Hope this helps! Yes, fantastic. Very nice. Thanks a lot. Makes it 100% clear now. > > Does that mean that setup.py is then also not run during deployment > No. By the time a wheel is generated (or an egg, or any other bdist_* > command is run) the job of the setup.py for your package is done. That was the essential sentence that made "click" in my understanding - thanks. Might be a good sentence to be placed in a box in a central place in the manual. > I would consider the following strategies: > Create the UUID on first run (e.g. at import time) > Add a script (or function) for creating the UUID and document for your > users that it needs to be called before using your library > Have you thought of maybe not relying on the UID being generated at > "install" time, but rather having a "coinq init" command that the user > runs *once* to set up his/her UID? The other commands could then > check for that UID wherever you choose to store it, and if they don't > find it [...] automatically run init if they find no UID. Yes. I know, that is the other way to go. But then I need to understand where I am allowed to write persistent files - outside the place where the other .py files are installed anyways. And as that UUID.py has to be created only once in the lifetime of my tool on their computers, I had thought the /site-packages/... folder is the best place for it. But perhaps it is really not. Do you know a Python package that abstracts from the platform ... with which I can write to (would those be the right places?) (username/AppData/Roaming/myfolder) on Windows, and (~/.myfolder) on Linux - and is it (~/Library/myfolder) on Mac? Thanks a lot! :-) -------------- next part -------------- An HTML attachment was scrubbed... 
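[The question above, about a per-platform place to persist a generated UID, combines naturally with the "create the UUID on first run" strategy suggested earlier in the thread. A minimal stdlib-only sketch follows; the third-party `appdirs` package implements the same directory conventions more carefully, and the `coinquery` app name and `uid.txt` filename here are purely illustrative, not anything the project actually uses.]

```python
import os
import sys
import uuid


def user_data_dir(appname):
    # Conventional per-user, writable data directory on each platform.
    # (A rough approximation of what the 'appdirs' package computes.)
    if sys.platform.startswith("win"):
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    elif sys.platform == "darwin":
        base = os.path.expanduser("~/Library/Application Support")
    else:
        # XDG convention on Linux/BSD
        base = os.environ.get("XDG_DATA_HOME",
                              os.path.expanduser("~/.local/share"))
    return os.path.join(base, appname)


def get_or_create_uid(appname="coinquery"):
    # "Create the UUID on first run": reuse the stored value if one
    # exists, otherwise generate a fresh UUID and persist it.
    datadir = user_data_dir(appname)
    path = os.path.join(datadir, "uid.txt")
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    if not os.path.isdir(datadir):
        os.makedirs(datadir)
    uid = str(uuid.uuid4())
    with open(path, "w") as f:
        f.write(uid)
    return uid
```

Writing under the user's home directory rather than site-packages also sidesteps the privilege problem mentioned earlier: the first run may happen under different permissions than the install did.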
URL: From nate at bx.psu.edu Fri Feb 5 11:46:27 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Fri, 5 Feb 2016 11:46:27 -0500 Subject: [Distutils] wheel 0.27.0 released Message-ID: Hi all, I have just released wheel 0.27.0. This version includes a few new features and fixes for long standing minor issues, many from outside contributions. From the changelog:

0.27.0
======

- Support forcing a platform tag using `--plat-name` on pure-Python
  wheels, as well as nonstandard platform tags on non-pure wheels
  (Pull Request #60, Issue #144, thanks Andrés Díaz)
- Add SOABI tags to platform-specific wheels built for Python 2.X
  (Pull Request #55, Issue #63, Issue #101)
- Support reproducible wheel files, wheels that can be rebuilt and will
  hash to the same values as previous builds (Pull Request #52,
  Issue #143, thanks Barry Warsaw)
- Support for changes in keyring >= 8.0 (Pull Request #61, thanks
  Jason R. Coombs)
- Use the file context manager when checking if dependency_links.txt is
  empty, fixes problems building wheels under PyPy on Windows
  (Issue #150, thanks Cosimo Lupo)
- Don't attempt to (recursively) create a build directory ending with
  `..` (invalid on all platforms, but code was only executed on Windows)
  (Issue #91)
- Added the PyPA Code of Conduct (Pull Request #56)

--nate -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Fri Feb 5 12:35:02 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Fri, 5 Feb 2016 12:35:02 -0500 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: Message-ID: On Sat, Jan 30, 2016 at 3:37 AM, Nathaniel Smith wrote: > On Fri, Jan 29, 2016 at 11:52 PM, Nick Coghlan wrote: > > On 30 January 2016 at 09:29, Nathaniel Smith wrote: > >> Hi all, > >> > >> I think this is ready for pronouncement now -- thanks to everyone for > >> all their feedback over the last few weeks!
> >> > >> The only change relative to the last posting is that we rewrote the > >> section on "Platform detection for installers", to switch to letting > >> distributors explicitly control manylinux1 compatibility by means of a > >> _manylinux module. > > > > In terms of the proposal itself, I think this version is excellent :) > > > > However, I realised that there's an implicit assumption we've been > > making that really should be spelled out explicitly: manylinux1 wheels > > targeting CPython 3.2 and earlier need to be compiled against a > > CPython built in wide Unicode mode, and in those cases, the detection > > of manylinux1 compatibility at the platform level should include > > checking for "sys.maxunicode > 0xFFFF". > > Doh, excellent catch! > > I've just pushed the obvious update to handle this directly to the > copy of the PEP in the manylinux repository. > > Diff: > https://github.com/manylinux/manylinux/commit/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc > > New text in full: > > https://raw.githubusercontent.com/manylinux/manylinux/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc/pep-513.rst > > I haven't sent to the PEP editors, because they already have another > diff from me sitting in their inboxes and I'm not sure how to do this > in a way that doesn't confuse things :-) > Now that pip and wheel both support the Python 3 SOABI tags on 2.X, is this necessary? The ABI tag should be set correctly on both the build and installation systems, so is including it as part of the manylinux1 ABI (and fixing it to wide builds only) overkill? --nate > > > The main reason we need to spell this out explicitly is that while > > distros (and I believe other redistributors) build CPython-for-Linux > > in wide mode as a matter of course, a Linux checkout of CPython 2.7 > > will build in narrow mode by default. > > I can confirm that Debian and Anaconda builds of CPython 2.7 both have > sys.maxunicode == 0x10ffff, but Enthought Canopy has sys.maxunicode == > 0xffff. Hmm. 
I guess they should fix that. > > Also the manylinux docker image currently has sys.maxunicode == > 0xffff, so we should definitely fix that :-). > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Feb 5 12:46:57 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Feb 2016 09:46:57 -0800 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) Message-ID: On Feb 5, 2016 8:47 AM, "Nate Coraor" wrote: > [...] > - Add SOABI tags to platform-specific wheels built for Python 2.X (Pull Request > #55, Issue #63, Issue #101) I can't quite untangle all the documents linked from this PR, so let me ask here :-). Does this mean that python 2.x extension wheels now can and should declare whether they're assuming the 16- or 32-bit Unicode ABI inside the abi field? And if so, should PEP 513 be updated to allow for both options to be used with manylinux1? (Right not manylinux1 just implies/requires a UCS4 build, for older pythons where this matters.) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Feb 5 12:52:23 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Feb 2016 09:52:23 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: Message-ID: On Feb 5, 2016 9:35 AM, "Nate Coraor" wrote: > > On Sat, Jan 30, 2016 at 3:37 AM, Nathaniel Smith wrote: >> >> On Fri, Jan 29, 2016 at 11:52 PM, Nick Coghlan wrote: >> > On 30 January 2016 at 09:29, Nathaniel Smith wrote: >> >> Hi all, >> >> >> >> I think this is ready for pronouncement now -- thanks to everyone for >> >> all their feedback over the last few weeks! 
>> >> >> >> The only change relative to the last posting is that we rewrote the >> >> section on "Platform detection for installers", to switch to letting >> >> distributors explicitly control manylinux1 compatibility by means of a >> >> _manylinux module. >> > >> > In terms of the proposal itself, I think this version is excellent :) >> > >> > However, I realised that there's an implicit assumption we've been >> > making that really should be spelled out explicitly: manylinux1 wheels >> > targeting CPython 3.2 and earlier need to be compiled against a >> > CPython built in wide Unicode mode, and in those cases, the detection >> > of manylinux1 compatibility at the platform level should include >> > checking for "sys.maxunicode > 0xFFFF". >> >> Doh, excellent catch! >> >> I've just pushed the obvious update to handle this directly to the >> copy of the PEP in the manylinux repository. >> >> Diff: https://github.com/manylinux/manylinux/commit/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc >> >> New text in full: >> https://raw.githubusercontent.com/manylinux/manylinux/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc/pep-513.rst >> >> I haven't sent to the PEP editors, because they already have another >> diff from me sitting in their inboxes and I'm not sure how to do this >> in a way that doesn't confuse things :-) > > > Now that pip and wheel both support the Python 3 SOABI tags on 2.X, is this necessary? The ABI tag should be set correctly on both the build and installation systems, so is including it as part of the manylinux1 ABI (and fixing it to wide builds only) overkill? Hah, I just asked the same thing :-). Clearly I should finish scrolling through my inbox before replying... Anyway, I would like to know this as well. :-) Also, just to confirm, the new releases of pip and wheel enable this for 2.x; is it also available for all 3.x? -n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nate at bx.psu.edu Fri Feb 5 12:53:52 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Fri, 5 Feb 2016 12:53:52 -0500 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: References: Message-ID: On Fri, Feb 5, 2016 at 12:46 PM, Nathaniel Smith wrote: > On Feb 5, 2016 8:47 AM, "Nate Coraor" wrote: > > > [...] > > - Add SOABI tags to platform-specific wheels built for Python 2.X (Pull > Request > > #55, Issue #63, Issue #101) > > I can't quite untangle all the documents linked from this PR, so let me > ask here :-). Does this mean that python 2.x extension wheels now can and > should declare whether they're assuming the 16- or 32-bit Unicode ABI > inside the abi field? And if so, should PEP 513 be updated to allow for > both options to be used with manylinux1? (Right not manylinux1 just > implies/requires a UCS4 build, for older pythons where this matters.) > > -n > It isn't declared, wheel determines the ABI of the interpreter upon which the wheel is being built and tags it accordingly. So yes, I think a PEP 513 update is appropriate. As to whether the manylinux1 Docker images should include UCS-2 Pythons is a separate question, though. If there's interest, I can provide statistics of how many of Galaxy's UCS-2 Linux eggs were downloaded over time. --nate -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Fri Feb 5 12:56:05 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Fri, 5 Feb 2016 12:56:05 -0500 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: Message-ID: On Fri, Feb 5, 2016 at 12:52 PM, Nathaniel Smith wrote: > On Feb 5, 2016 9:35 AM, "Nate Coraor" wrote: > > > > On Sat, Jan 30, 2016 at 3:37 AM, Nathaniel Smith wrote: > >> > >> On Fri, Jan 29, 2016 at 11:52 PM, Nick Coghlan > wrote: > >> > On 30 January 2016 at 09:29, Nathaniel Smith wrote: > >> >> Hi all, > >> >> > >> >> I think this is ready for pronouncement now -- thanks to everyone for > >> >> all their feedback over the last few weeks! > >> >> > >> >> The only change relative to the last posting is that we rewrote the > >> >> section on "Platform detection for installers", to switch to letting > >> >> distributors explicitly control manylinux1 compatibility by means of > a > >> >> _manylinux module. > >> > > >> > In terms of the proposal itself, I think this version is excellent :) > >> > > >> > However, I realised that there's an implicit assumption we've been > >> > making that really should be spelled out explicitly: manylinux1 wheels > >> > targeting CPython 3.2 and earlier need to be compiled against a > >> > CPython built in wide Unicode mode, and in those cases, the detection > >> > of manylinux1 compatibility at the platform level should include > >> > checking for "sys.maxunicode > 0xFFFF". > >> > >> Doh, excellent catch! > >> > >> I've just pushed the obvious update to handle this directly to the > >> copy of the PEP in the manylinux repository. 
> >> > >> Diff: > https://github.com/manylinux/manylinux/commit/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc > >> > >> New text in full: > >> > https://raw.githubusercontent.com/manylinux/manylinux/2e49cd16b89e0d6e84a5dc98ddb1a916968b73bc/pep-513.rst > >> > >> I haven't sent to the PEP editors, because they already have another > >> diff from me sitting in their inboxes and I'm not sure how to do this > >> in a way that doesn't confuse things :-) > > > > > > Now that pip and wheel both support the Python 3 SOABI tags on 2.X, is > this necessary? The ABI tag should be set correctly on both the build and > installation systems, so is including it as part of the manylinux1 ABI (and > fixing it to wide builds only) overkill? > > Hah, I just asked the same thing :-). Clearly I should finish scrolling > through my inbox before replying... > Heh, me too. Just replied over on the other thread. > Anyway, I would like to know this as well. :-) > > Also, just to confirm, the new releases of pip and wheel enable this for > 2.x; is it also available for all 3.x? > > -n > ABI tags always worked with wheel/pip on CPython 3.2+ (it has the SOABI config var), the new change "backports" this functionality to 2.X. --nate -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Feb 5 13:23:03 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 5 Feb 2016 18:23:03 +0000 Subject: [Distutils] Fwd: setup('postinstall'='my.py') In-Reply-To: References: Message-ID: On 5 February 2016 at 16:39, AltSheets Dev wrote: > Would it be a valid feature request to make those two useful options > (pre-install-script/install-script) available platform-independent? It's something that has been discussed under the "Metadata 2.0" banner. It's certainly a valid request - but addressing the various concerns and issues that come up is complex, and (AFAIK) no-one has really come up with a satisfactory solution yet. [...] 
>> Have you thought of maybe not relying on the UID being generated at >> "install" time, but rather having a "coinq init" command that the user >> runs *once* to set up his/her UID? The other commands could then >> check for that UID wherever you choose to store it, and if they don't >> find it [...] automatically run init if they find no UID. > > Yes. I know, that is the other way to go. > > But then I need to understand where I am allowed to write persistent files > - outside the place where the other .py files are installed anyways. > And as that UUID.py has to be created only once in the lifetime of > my tool on their computers, I had thought the /site-packages/... folder > is the best place for it. But perhaps it is really not. > > Do you know a Python package that abstracts from the platform ... with which > > I can write to (would those be the right places?) > > (username/AppData/Roaming/myfolder) on Windows, and > (~/.myfolder) on Linux - and is it > (~/Library/myfolder) on Mac? The "appdirs" project (https://pypi.python.org/pypi/appdirs) is one I've seen used/recommended a lot. Paul From ben+python at benfinney.id.au Fri Feb 5 13:38:55 2016 From: ben+python at benfinney.id.au (Ben Finney) Date: Sat, 06 Feb 2016 05:38:55 +1100 Subject: [Distutils] version conflict References: <56B4B118.4000002@chamonix.reportlab.co.uk> Message-ID: <85h9hn9f40.fsf@benfinney.id.au> Robin Becker writes: > pkg_resources.VersionConflict: (certifi 2015.11.20.1 > (/Library/Python/2.7/site-packages), > Requirement.parse('certifi==2015.11.20')) This is the hazard of specifying a strict no-earlier-no-later version requirement. Presumably ‘certifi’ at version ‘2015.11.20.1’ would serve just fine, but the strict requirement on ‘certifi==2015.11.20’ rejects that. > the full traceback is visible here > https://bitbucket.org/rptlab/reportlab/issues/73/versionconflict Sadly, Setuptools (in this case the ‘pkg_resources’
library) gives no help to identify where the failed requirement is specified. This is IMO worth a bug report on Setuptools: it should catch that exception and give a graceful error message with useful information, not a traceback dump. -- \ “I tell you the truth: this generation will certainly not pass | `\ away until all these things [the end of the world] have | _o__) happened.” —Jesus, c. 30 CE, as quoted in Matthew 24:34 | Ben Finney From brett at python.org Fri Feb 5 13:52:26 2016 From: brett at python.org (Brett Cannon) Date: Fri, 05 Feb 2016 18:52:26 +0000 Subject: [Distutils] How to declare optional dependencies? Message-ID: Maybe I'm totally overlooking something or misreading the docs, but I can't find a way to say in a requirements.txt file that a dependency is optional and its failure to install is okay. E.g., aiohttp supports using cchardet as an accelerator of chardet ( http://aiohttp.readthedocs.org/en/stable/#library-installation). I would like to be able to specify in my requirements.txt that I would like cchardet to be installed if possible, but it not being available -- which it might be as it doesn't have any Python 3.5 wheels ATM -- is fine and not the end of the world. I'm also happy to push a patch upstream if there is something aiohttp should be setting in their setup.py instead (I thought maybe extras, but those seem to be hard requirements as well). -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Feb 5 14:32:41 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Feb 2016 11:32:41 -0800 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: References: Message-ID: On Feb 5, 2016 9:54 AM, "Nate Coraor" wrote: > > On Fri, Feb 5, 2016 at 12:46 PM, Nathaniel Smith wrote: >> >> On Feb 5, 2016 8:47 AM, "Nate Coraor" wrote: >> > >> [...]
>> > - Add SOABI tags to platform-specific wheels built for Python 2.X (Pull Request >> > #55, Issue #63, Issue #101) >> >> I can't quite untangle all the documents linked from this PR, so let me ask here :-). Does this mean that python 2.x extension wheels now can and should declare whether they're assuming the 16- or 32-bit Unicode ABI inside the abi field? And if so, should PEP 513 be updated to allow for both options to be used with manylinux1? (Right not manylinux1 just implies/requires a UCS4 build, for older pythons where this matters.) >> >> -n > > > It isn't declared, wheel determines the ABI of the interpreter upon which the wheel is being built and tags it accordingly. So yes, I think a PEP 513 update is appropriate. As to whether the manylinux1 Docker images should include UCS-2 Pythons is a separate question, though. If there's interest, I can provide statistics of how many of Galaxy's UCS-2 Linux eggs were downloaded over time. My assumption was that we should include the UCS2 option in the docker image so that we could build some wheels so that we could put them on pypi so that we could get some statistics on usage so that we could decide whether it was worth including in the docker image ;-). Anyway, yes, I at least would be interested in seeing these statistics :-) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Feb 5 16:08:45 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 5 Feb 2016 13:08:45 -0800 Subject: [Distutils] Fwd: setup('postinstall'='my.py') In-Reply-To: References: Message-ID: On Fri, Feb 5, 2016 at 8:39 AM, AltSheets Dev < altsheets+mailinglists at gmail.com> wrote: > > consider adding some code to the installer itself > Wouldn't it be great if setuptools.setup provided that option, and > OS-independent? > well, no. setuptools is a bit of an (ugly?) amalgamation of build tool, install tool, etc... 
we are trying to clean that up, so that setuptools is a build tool, and pip is the install tool (and each can be replaced by other options for your use-case). so it would be pip that would need a post-install-hook. Though that's tricky too -- where would you put that code, with binary wheels, or ??? You might look at the conda package manager -- I'm pretty sure it has a post-install hook. https://github.com/conda/conda Great. That sounds as if it is perhaps the way to go then, thanks a lot. > > Sounds as if it can do exactly what I need. > > However then ... aren't I stuck with a windows-only solution? > yup. :-( -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Fri Feb 5 16:41:49 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Fri, 5 Feb 2016 16:41:49 -0500 Subject: [Distutils] wheel 0.28.0 released Message-ID: Hi all, There was a bug introduced in 0.27.0 where scripts in the wheel archive were created with the wrong permissions. This has been fixed and released in 0.28.0. --nate -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Feb 5 18:57:18 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 5 Feb 2016 15:57:18 -0800 Subject: [Distutils] How to declare optional dependencies? In-Reply-To: References: Message-ID: I don't think you're overlooking anything.
A recent thread: https://mail.python.org/pipermail/distutils-sig/2015-December/027944.html My comment there: https://mail.python.org/pipermail/distutils-sig/2015-December/027946.html -n On Fri, Feb 5, 2016 at 10:52 AM, Brett Cannon wrote: > Maybe I'm totally overlooking something or misreading the docs, but I can't > find a way to say in a requirements.txt file that a dependency is optional > and its failure to install is okay. E.g., aiohttp supports using cchardet as > an accelerator of chardet > (http://aiohttp.readthedocs.org/en/stable/#library-installation). I would > like to be able to specify in my requirements.txt that I would like cchardet > to be installed if possible, but it not being available -- which it might be > as it doesn't have any Python 3.5 wheels ATM -- is fine and not the end of > the world. > > I'm also happy to push a patch upstream if there is something aiohttp should > be setting in their setup.py instead (I thought maybe extras, but those seem > to be hard requirements as well). > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Sat Feb 6 00:04:48 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Feb 2016 15:04:48 +1000 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: References: Message-ID: On 6 February 2016 at 03:53, Nate Coraor wrote: > On Fri, Feb 5, 2016 at 12:46 PM, Nathaniel Smith wrote: >> >> On Feb 5, 2016 8:47 AM, "Nate Coraor" wrote: >> > >> [...] >> > - Add SOABI tags to platform-specific wheels built for Python 2.X (Pull >> > Request >> > #55, Issue #63, Issue #101) >> >> I can't quite untangle all the documents linked from this PR, so let me >> ask here :-). 
Does this mean that python 2.x extension wheels now can and >> should declare whether they're assuming the 16- or 32-bit Unicode ABI inside >> the abi field? And if so, should PEP 513 be updated to allow for both >> options to be used with manylinux1? (Right now manylinux1 just >> implies/requires a UCS4 build, for older pythons where this matters.) > > It isn't declared, wheel determines the ABI of the interpreter upon which > the wheel is being built and tags it accordingly. So yes, I think a PEP 513 > update is appropriate. +1 from me, since it's a genuine bug in the current specification. > As to whether the manylinux1 Docker images should > include UCS-2 Pythons is a separate question, though. If there's interest, I > can provide statistics of how many of Galaxy's UCS-2 Linux eggs were > downloaded over time. While I'd be interested in those stats, my initial inclination is to say "No" to including narrow Unicode runtimes in the build environment, as: 1. Python 2.7 narrow Unicode builds really don't handle code points above 65,535 correctly 2. Python 3.3+ doesn't have the narrow/wide distinction 3. Canopy users will presumably be getting most of their binaries from Enthought, not PyPI That means the only folks that seem likely to miss out on pre-built binaries this way would be Python 2.7 pyenv users. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Feb 6 00:12:28 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Feb 2016 15:12:28 +1000 Subject: [Distutils] How to declare optional dependencies? In-Reply-To: References: Message-ID: On 6 February 2016 at 04:52, Brett Cannon wrote: > Maybe I'm totally overlooking something or misreading the docs, but I can't > find a way to say in a requirements.txt file that a dependency is optional > and its failure to install is okay.
E.g., aiohttp supports using cchardet as > an accelerator of chardet > (http://aiohttp.readthedocs.org/en/stable/#library-installation). I would > like to be able to specify in my requirements.txt that I would like cchardet > to be installed if possible, but it not being available -- which it might be > as it doesn't have any Python 3.5 wheels ATM -- is fine and not the end of > the world. No, we don't have anything comparable to the Recommends/Suggests weak dependency system offered by distro package managers. Being able to specify "install this dependency if it is missing and a pre-built binary is available, otherwise skip it" could be a nice way for software publishers to be able to tweak the install experience of their package though - everyone gets a clean install experience, but the folks with relevant pre-built dependencies available get improved runtime performance. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From marius at gedmin.as Sat Feb 6 04:06:20 2016 From: marius at gedmin.as (Marius Gedminas) Date: Sat, 6 Feb 2016 11:06:20 +0200 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: Message-ID: <20160206090620.GA23933@platonas> On Fri, Feb 05, 2016 at 12:56:05PM -0500, Nate Coraor wrote: > On Fri, Feb 5, 2016 at 12:52 PM, Nathaniel Smith wrote: > > > On Feb 5, 2016 9:35 AM, "Nate Coraor" wrote: > > > Now that pip and wheel both support the Python 3 SOABI tags on 2.X, is > > this necessary? The ABI tag should be set correctly on both the build and > > installation systems, so is including it as part of the manylinux1 ABI (and > > fixing it to wide builds only) overkill? ... > > Also, just to confirm, the new releases of pip and wheel enable this for > > 2.x; is it also available for all 3.x? > > > > -n > > > > ABI tags always worked with wheel/pip on CPython 3.2+ (it has the SOABI > config var), the new change "backports" this functionality to 2.X. 
What are the minimum versions of pip and wheel to have this working correctly on 2.x? Marius Gedminas -- A "critic" is a man who creates nothing and thereby feels qualified to judge the work of creative men. There is logic in this; he is unbiased -- he hates all creative people equally. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From marius at gedmin.as Sat Feb 6 05:35:00 2016 From: marius at gedmin.as (Marius Gedminas) Date: Sat, 6 Feb 2016 12:35:00 +0200 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: References: Message-ID: <20160206103500.GB23933@platonas> On Sat, Feb 06, 2016 at 03:04:48PM +1000, Nick Coghlan wrote: > On 6 February 2016 at 03:53, Nate Coraor wrote: > > On Fri, Feb 5, 2016 at 12:46 PM, Nathaniel Smith wrote: > >> > >> On Feb 5, 2016 8:47 AM, "Nate Coraor" wrote: > >> > > >> [...] > >> > - Add SOABI tags to platform-specific wheels built for Python 2.X (Pull > >> > Request > >> > #55, Issue #63, Issue #101) > >> > >> I can't quite untangle all the documents linked from this PR, so let me > >> ask here :-). Does this mean that python 2.x extension wheels now can and > >> should declare whether they're assuming the 16- or 32-bit Unicode ABI inside > >> the abi field? And if so, should PEP 513 be updated to allow for both > >> options to be used with manylinux1? (Right not manylinux1 just > >> implies/requires a UCS4 build, for older pythons where this matters.) > > > > It isn't declared, wheel determines the ABI of the interpreter upon which > > the wheel is being built and tags it accordingly. So yes, I think a PEP 513 > > update is appropriate. > > +1 from me, since it's a genuine bug in the current specification. > > > As to whether the manylinux1 Docker images should > > include UCS-2 Pythons is a separate question, though. 
If there's interest, I > > can provide statistics of how many of Galaxy's UCS-2 Linux eggs were > > downloaded over time. > > While I'd be interested in those stats, my initial inclination is to > say "No" to including narrow Unicode runtimes in the build > environment, as: > > 1. Python 2.7 narrow Unicode builds really don't handle code points >= > 65,535 correctly > 2. Python 3.3+ doesn't have the narrow/wide distinction > 3. Canopy users will presumably be getting most of their binaries from > Enthought, not PyPI > > That means the only folks that seem likely to miss out on pre-built > binaries this way would be Python 2.7 pyenv users. And people who run build Python 2.7 with './configure && make && make install' Why does upstream Python default to UCS-2 builds on Linux anyway? FWIW the rationale Pyenv gave when they rejected a bug asking for UCS-4 builds by default was "we prefer to follow upstream defaults". Marius Gedminas -- Some people around here wouldn't recognize subtlety if it hit them on the head. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From ncoghlan at gmail.com Sat Feb 6 06:18:39 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Feb 2016 21:18:39 +1000 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: <20160206103500.GB23933@platonas> References: <20160206103500.GB23933@platonas> Message-ID: On 6 February 2016 at 20:35, Marius Gedminas wrote: > On Sat, Feb 06, 2016 at 03:04:48PM +1000, Nick Coghlan wrote: >> That means the only folks that seem likely to miss out on pre-built >> binaries this way would be Python 2.7 pyenv users. 
> > And people who build Python 2.7 with './configure && make && make install' If folks can handle building their own Python, handling building other projects isn't that much worse (although stumbling across FORTRAN dependencies may still be a surprise). > Why does upstream Python default to UCS-2 builds on Linux anyway? That default long predates my time on the core development team, but my guess is that it was influenced by that also being the default for Windows and the JVM, before folks were really aware of the problems that arise when using UTF-16 as the internal encoding for working with code points outside the Basic Multilingual Plane. By the time that perspective changed, the fix was to eliminate the distinction (and significantly reduce the memory cost of correctness), rather than to just change the default. > FWIW the rationale Pyenv gave when they rejected a bug asking for UCS-4 > builds by default was "we prefer to follow upstream defaults". In this case, the old defaults are dubious, but the upstream fix eliminated the relevant setting. Historically, it didn't really matter, since very few people were building their own Python for Linux. However, if that was pyenv's only reason for rejecting a switch to wide unicode builds, it may be worth trying again, this time pointing them to PEP 513 and the wide-build default for Python 2.7 wheels in the manylinux build environment. Cheers, Nick.
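The narrow/wide distinction debated in this thread can be checked at runtime with the same `sys.maxunicode` test PEP 513 describes; a minimal sketch (`unicode_build` is just an illustrative name):

```python
import sys

def unicode_build():
    # Narrow (UCS-2) CPython 2.x builds report sys.maxunicode == 0xFFFF;
    # wide (UCS-4) builds -- and every CPython 3.3+ interpreter, where the
    # distinction no longer exists -- report 0x10FFFF.
    return "wide" if sys.maxunicode > 0xFFFF else "narrow"
```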
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From marius at gedmin.as Sat Feb 6 06:26:59 2016 From: marius at gedmin.as (Marius Gedminas) Date: Sat, 6 Feb 2016 13:26:59 +0200 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: References: <20160206103500.GB23933@platonas> Message-ID: <20160206112659.GA14644@platonas> On Sat, Feb 06, 2016 at 09:18:39PM +1000, Nick Coghlan wrote: > On 6 February 2016 at 20:35, Marius Gedminas wrote: > > FWIW the rationale Pyenv gave when they rejected a bug asking for UCS-4 > > builds by default was "we prefer to follow upstream defaults". > > In this case, the old defaults are dubious, but the upstream fix > eliminated the relevant setting. Historically, it didn't really > matter, since very few people were building their own Python for > Linux. > > However, if that was pyenv's only reason for rejecting a switch to > wide unicode builds, it may be worth trying again, this time pointing > them to PEP 513 and the wide-build default for Python 2.7 wheels in > the manylinux build environment. Here's the issue, if you'd like to try: https://github.com/yyuu/pyenv/issues/257 (I don't use pyenv myself; all I know about this issue is from helping other people debug problems on IRC.) Marius Gedminas -- Tilton's Law of Lisp Programming: if you do not need a metaclass, do not use a metaclass. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From solipsis at pitrou.net Sat Feb 6 07:28:06 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 6 Feb 2016 13:28:06 +0100 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? References: Message-ID: <20160206132806.35a5a1fb@fsol> On Fri, 5 Feb 2016 21:46:54 +1000 Nick Coghlan wrote: > > Thanks for the replies, folks! 
> > Checking I've understood the respective updates correctly: > > - x86_64 implies SSE2 capability > - most i686 machines still in use are also SSE2 capable > - Accelerate provides native BLAS/LAPACK APIs for Mac OS X > - (ATLAS SSE2 or OpenBLAS) + manylinux should handle Linux > - (ATLAS SSE2 or OpenBLAS) + mingwpy.github.io should handle Windows > - Numba can optimise at runtime to use newer instructions when available > > The choice between an SSE2 build of ATLAS and OpenBLAS as the default > BLAS/LAPACK implementation doesn't appear to have been made yet, but > also shouldn't significantly impact the user experience of the > resulting wheels. I'm not sure that's what you're implying, but the choice of a specific BLAS or LAPACK implementation needn't (and shouldn't) be part of manylinux, it's just a choice left to the packager. Bottom line is that the BLAS/LAPACK implementation comes linked into the specific package (or as a separate package dependency, up to the packager's preference). Regards Antoine. From ncoghlan at gmail.com Sat Feb 6 08:12:08 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Feb 2016 23:12:08 +1000 Subject: [Distutils] Status update on the NumPy & SciPy vs SSE problem? In-Reply-To: <20160206132806.35a5a1fb@fsol> References: <20160206132806.35a5a1fb@fsol> Message-ID: On 6 February 2016 at 22:28, Antoine Pitrou wrote: > I'm not sure that's what you're implying, but the choice of a specific > BLAS or LAPACK implementation needn't (and shouldn't) be part of > manylinux, it's just a choice left to the packager. Bottom line is > that the BLAS/LAPACK implementation comes linked into the specific > package (or as a separate package dependency, up to the packager's > preference). Oh, nice - I wasn't sure if that was part of the set of external libraries packages that extension modules needed to agree on (I've never actually built NumPy et al from source myself, I've always used the distro packages or conda). Cheers, Nick. 
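The SSE2 baseline summarized in the quoted list can be verified on a given Linux machine by reading the kernel's CPU flag list; a rough, Linux-only sketch (flag names are as reported in /proc/cpuinfo; on x86_64, SSE2 is part of the baseline instruction set, so it should always appear there):

```python
def has_sse2():
    # Linux-only: scan the "flags" line the kernel exposes per CPU.
    # Returns False on non-Linux systems or if no flags line is found.
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "sse2" in line.split()
    except OSError:
        pass
    return False
```

On non-x86 hardware the flags line lists different features (or is absent), so this is a convenience check, not a portable capability probe.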
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From coleb at eyesopen.com Sat Feb 6 09:06:47 2016 From: coleb at eyesopen.com (Brian Cole) Date: Sat, 6 Feb 2016 14:06:47 +0000 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: <20160206103500.GB23933@platonas> References: , <20160206103500.GB23933@platonas> Message-ID: <02FFA895-3CE2-4640-807C-3488D6FF880C@eyesopen.com> FWIW, we've seen a large shift in our userbase from UCS-2 to UCS-4 as Anaconda Python becomes the defacto Python2 interpreter in the sciences. We still ship both UCS-2 and UCS-4 as well. -Brian > On Feb 6, 2016, at 5:35 AM, Marius Gedminas wrote: > >> On Sat, Feb 06, 2016 at 03:04:48PM +1000, Nick Coghlan wrote: >>> On 6 February 2016 at 03:53, Nate Coraor wrote: >>>> On Fri, Feb 5, 2016 at 12:46 PM, Nathaniel Smith wrote: >>>> >>>>> On Feb 5, 2016 8:47 AM, "Nate Coraor" wrote: >>>>> >>>> [...] >>>>> - Add SOABI tags to platform-specific wheels built for Python 2.X (Pull >>>>> Request >>>>> #55, Issue #63, Issue #101) >>>> >>>> I can't quite untangle all the documents linked from this PR, so let me >>>> ask here :-). Does this mean that python 2.x extension wheels now can and >>>> should declare whether they're assuming the 16- or 32-bit Unicode ABI inside >>>> the abi field? And if so, should PEP 513 be updated to allow for both >>>> options to be used with manylinux1? (Right not manylinux1 just >>>> implies/requires a UCS4 build, for older pythons where this matters.) >>> >>> It isn't declared, wheel determines the ABI of the interpreter upon which >>> the wheel is being built and tags it accordingly. So yes, I think a PEP 513 >>> update is appropriate. >> >> +1 from me, since it's a genuine bug in the current specification. >> >>> As to whether the manylinux1 Docker images should >>> include UCS-2 Pythons is a separate question, though. 
If there's interest, I >>> can provide statistics of how many of Galaxy's UCS-2 Linux eggs were >>> downloaded over time. >> >> While I'd be interested in those stats, my initial inclination is to >> say "No" to including narrow Unicode runtimes in the build >> environment, as: >> >> 1. Python 2.7 narrow Unicode builds really don't handle code points >= >> 65,535 correctly >> 2. Python 3.3+ doesn't have the narrow/wide distinction >> 3. Canopy users will presumably be getting most of their binaries from >> Enthought, not PyPI >> >> That means the only folks that seem likely to miss out on pre-built >> binaries this way would be Python 2.7 pyenv users. > > And people who run build Python 2.7 with './configure && make && make install' > > Why does upstream Python default to UCS-2 builds on Linux anyway? > > FWIW the rationale Pyenv gave when they rejected a bug asking for UCS-4 > builds by default was "we prefer to follow upstream defaults". > > Marius Gedminas > -- > Some people around here wouldn't recognize > subtlety if it hit them on the head. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From donald at stufft.io Sat Feb 6 10:18:32 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 6 Feb 2016 10:18:32 -0500 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <20160206090620.GA23933@platonas> References: <20160206090620.GA23933@platonas> Message-ID: > On Feb 6, 2016, at 4:06 AM, Marius Gedminas wrote: > > On Fri, Feb 05, 2016 at 12:56:05PM -0500, Nate Coraor wrote: >> On Fri, Feb 5, 2016 at 12:52 PM, Nathaniel Smith wrote: >> >>> On Feb 5, 2016 9:35 AM, "Nate Coraor" wrote: >>>> Now that pip and wheel both support the Python 3 SOABI tags on 2.X, is >>> this necessary? 
The ABI tag should be set correctly on both the build and >>> installation systems, so is including it as part of the manylinux1 ABI (and >>> fixing it to wide builds only) overkill? > ... >>> Also, just to confirm, the new releases of pip and wheel enable this for >>> 2.x; is it also available for all 3.x? >>> >>> -n >>> >> >> ABI tags always worked with wheel/pip on CPython 3.2+ (it has the SOABI >> config var), the new change "backports" this functionality to 2.X. > > What are the minimum versions of pip and wheel to have this working > correctly on 2.x? > > Marius Gedminas > -- > A "critic" is a man who creates nothing and thereby feels qualified to judge > the work of creative men. There is logic in this; he is unbiased -- he hates > all creative people equally. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig pip 8 and wheel 0.27 (or 0.28, I think there was a bug with 0.27) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From xav.fernandez at gmail.com Sat Feb 6 11:47:39 2016 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Sat, 6 Feb 2016 17:47:39 +0100 Subject: [Distutils] wheel 0.28.0 released In-Reply-To: References: Message-ID: Hello Nate, I think there is another regression with version 0.27: wheel files are not zipped anymore. Cf https://bitbucket.org/pypa/wheel/issues/155 and https://bitbucket.org/pypa/wheel/pull-requests/62/ for a possible fix. 
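[Editorial note: the regression referenced here means wheel 0.27 wrote its zip entries uncompressed (stored) instead of deflated. Since a wheel is just a zip archive, a check like the following illustrates the difference; the helper is written for this note and is not part of wheel or pip.]

```python
import zipfile

def wheel_entries_compressed(whl):
    """Return True if every file entry in the given wheel (a path or a
    file-like object; a wheel is a plain zip archive) is
    deflate-compressed rather than merely stored uncompressed."""
    with zipfile.ZipFile(whl) as zf:
        # Skip directory entries; only real files carry payload.
        files = [i for i in zf.infolist() if not i.filename.endswith('/')]
        return all(i.compress_type == zipfile.ZIP_DEFLATED for i in files)
```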
So watch the size of your wheels with latest version :) Regards, Xavier On Fri, Feb 5, 2016 at 10:41 PM, Nate Coraor wrote: > Hi all, > > There was a bug introduced in 0.27.0 where scripts in the wheel archive > were created with the wrong permissions. This has been fixed and released > in 0.28.0. > > --nate > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Sat Feb 6 12:25:38 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Sat, 6 Feb 2016 12:25:38 -0500 Subject: [Distutils] wheel 0.28.0 released In-Reply-To: References: Message-ID: Hi Xavier, Thanks for the fix! I added a test to catch this in the future and uploaded 0.29.0 to PyPI. --nate On Sat, Feb 6, 2016 at 11:47 AM, Xavier Fernandez wrote: > Hello Nate, > > I think there is another regression with version 0.27: wheel files are not > zipped anymore. > > Cf https://bitbucket.org/pypa/wheel/issues/155 and > https://bitbucket.org/pypa/wheel/pull-requests/62/ for a possible fix. > > So watch the size of your wheels with latest version :) > > Regards, > Xavier > > On Fri, Feb 5, 2016 at 10:41 PM, Nate Coraor wrote: > >> Hi all, >> >> There was a bug introduced in 0.27.0 where scripts in the wheel archive >> were created with the wrong permissions. This has been fixed and released >> in 0.28.0. >> >> --nate >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sun Feb 7 03:01:11 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 7 Feb 2016 00:01:11 -0800 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? 
Message-ID: So we found another variation between how different distros build CPython [1], and I'm very confused. Fedora (for example) turns out to work the way I naively expected: taking py27 as our example, they have: - libpython2.7.so.1.0 contains the actual python runtime - /usr/bin/python2.7 is a tiny (~7 KiB) executable that links to libpython2.7.so.1 to do the actual work; the main python package depends on the libpython package - python extension module packages depend on the libpython package, and contain extension modules linked against libpython2.7.so.1 - python extension modules compiled locally get linked against libpython2.7.so.1 by default Debian/Ubuntu do things differently: - libpython2.7.so.1.0 exists and contains the full python runtime, but is not installed by default - /usr/bin/python2.7 *also* contains a *second* copy of the full python runtime; there is no dependency relationship between these, and you don't even get libpython2.7.so.1.0 installed unless you explicitly request it or it gets pulled in through some other dependency - most python extension module packages do *not* depend on the libpython2.7 package, and contain extension modules that are *not* linked against libpython2.7.so.1.0 (but there are exceptions!) - python extension modules compiled locally do *not* get linked against libpython2.7.so.1 by default. The only things that seem to link against libpython2.7.so.1.0 in debian are: a) other packages that embed python (e.g. gnucash, paraview, perf, ...) b) some minority of python packages (e.g. the PySide/QtOpenGL.so module is one that I found that directly links to libpython2.7.so.1.0) I guess that the reason this works is that according to ELF linking rules, the symbols defined in the main executable, or in the transitive closure of the libraries that the main executable is linked to via DT_NEEDED entries, are all injected into the global scope of any dlopen'ed libraries. Uh, let me try saying that again. 
When you dlopen() a library -- like, for example, a python extension module -- then the extension automatically gets access to any symbols that are exported from either (a) the main executable itself, or (b) any of the libraries that are listed if you run 'ldd <the executable>'. It also gets access to any symbols that are exported by <the library> itself, or any of the libraries listed if you run 'ldd <the library>'. OTOH it does *not* get access to any symbols exported by other libraries that get dlopen'ed -- each dlopen specifically creates its own "scope". So the reason this works is that Debian's /usr/bin/python2.7 itself exports all the standard Python C ABI symbols, so any extension module that it loads automatically gets access to the CPython ABI, even if they don't explicitly link to it. And programs like gnucash are linked directly to libpython2.7.so.1, so they also end up exporting the CPython ABI to any libraries that they dlopen. But, it seems to me that there are two problems with the Debian/Ubuntu way of doing things: 1) it's rather wasteful of space, since there are two complete independent copies of the whole CPython runtime (one inside /usr/bin/python2.7, the other inside libpython2.7.so.1). 2) if I ever embed cpython by doing dlopen("libpython2.7.so.1"), or dlopen("some_plugin_library_linked_to_libpython.so"), then the embedded cpython will not be able to load python extensions that are compiled in the Debian-style (but will be able to load python extensions compiled in the Fedora-style), because the dlopen() that loaded the python runtime and the dlopen() that loads the extension module create two different scopes that can't see each other's symbols. [I'm pretty sure this is right, but linking is arcane and probably I should write some tests to double check.] I guess (2) might be why some of Debian's extension modules do link to libpython2.7.so.1 directly? Or maybe that's just a bug?
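[Editorial note: the workaround embedders sometimes use for exactly this scoping problem is to re-open libpython with RTLD_GLOBAL, which forces its symbols into the global scope so that later dlopen()ed extension modules can see them. A rough sketch follows; INSTSONAME is read from the interpreter's own build config, and the dlopen can legitimately fail on systems with no shared libpython installed.]

```python
import ctypes
import sysconfig

def promote_libpython_symbols():
    """Re-dlopen() this interpreter's libpython with RTLD_GLOBAL so its
    symbols become visible to extension modules loaded later in other
    dlopen() scopes.  Illustrative workaround, not a recommendation."""
    soname = sysconfig.get_config_var('INSTSONAME')  # e.g. 'libpython2.7.so.1.0'
    if not soname or '.so' not in soname:
        return None  # statically linked libpython; nothing to promote
    try:
        return ctypes.CDLL(soname, mode=ctypes.RTLD_GLOBAL)
    except OSError:
        return None  # no shared libpython installed on this system
```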
Clearly someone put some work into setting things up this way, so there must be some motivation, but I'm not sure what it is? The immediate problem for us is that if a manylinux1 wheel links to libpythonX.Y.so (Fedora-style), and then it gets run on a Debian system that doesn't have libpythonX.Y.so installed, it will crash with: ImportError: libpython2.7.so.1.0: cannot open shared object file: No such file or directory Maybe this is okay and the solution is to tell people that they need to 'apt install libpython2.7'. In a sense this isn't even a regression, because every system that is capable of installing a binary extension from an sdist has python2.7-dev installed, which depends on libpython2.7 --> therefore every system that used to be able to do 'pip install somebinary' with sdists will still be able to do it with manylinux1 builds. The alternative is to declare that manylinux1 extensions should not link to libpython. This should I believe work fine on both Debian-style and Fedora-style installations -- though the PySide example, and the theoretical issue with embedding python through dlopen, both give me some pause. Two more questions: - What are Debian/Ubuntu doing in distutils so that extensions don't link to libpython by default? If we do go with the option of saying that manylinux extensions shouldn't link to libpython, then that's something auditwheel *can* fix up, but it'd be even nicer if we could set up the docker image to get it right in the first place. - Can/should Debian/Ubuntu switch to the Fedora model? Obviously it would take quite some time before a generic platform like manylinux could assume that this had happened, but it does seem better to me...? And if it's going to happen at all it might be nice to get the switch into 16.04 LTS? Of course that's probably ambitious, even if I'm not missing some reason why the Debian/Ubuntu model is actually advantageous. -n [1] https://github.com/pypa/manylinux/issues/30 -- Nathaniel J. 
Smith -- https://vorpus.org From rmcgibbo at gmail.com Sun Feb 7 03:25:57 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Sun, 7 Feb 2016 00:25:57 -0800 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: References: Message-ID: > What are Debian/Ubuntu doing in distutils so that extensions don't link to libpython by default? I don't know exactly, but one way to reproduce this is simply to build the interpreter without `--enable-shared`. I don't know what their reasons are, but I presume that the Debian maintainers have a well-considered reason for this design. The PEP 513 text currently says that it's permissible for manylinux1 wheels to link against libpythonX.Y.so. So presumably for a platform to be manylinux1-compatible, libpythonX.Y.so should be available. I guess my preference would be for pip to simply check whether libpythonX.Y.so is available in its platform detection code (pypa/pip/pull/3446). Because Debian/Ubuntu is such a big target, instead of just bailing out and forcing the user to install the sdist from PyPI (which is going to fail, because Debian installations that lack libpythonX.Y.so also lack Python.h), I would be +1 for adding some kind of message for this case that says, "maybe you should `sudo apt-get install python-dev` to get these fancy new wheels rolling." -Robert On Sun, Feb 7, 2016 at 12:01 AM, Nathaniel Smith wrote: > So we found another variation between how different distros build > CPython [1], and I'm very confused.
> > Fedora (for example) turns out to work the way I naively expected: > taking py27 as our example, they have: > - libpython2.7.so.1.0 contains the actual python runtime > - /usr/bin/python2.7 is a tiny (~7 KiB) executable that links to > libpython2.7.so.1 to do the actual work; the main python package > depends on the libpython package > - python extension module packages depend on the libpython package, > and contain extension modules linked against libpython2.7.so.1 > - python extension modules compiled locally get linked against > libpython2.7.so.1 by default > > Debian/Ubuntu do things differently: > - libpython2.7.so.1.0 exists and contains the full python runtime, but > is not installed by default > - /usr/bin/python2.7 *also* contains a *second* copy of the full > python runtime; there is no dependency relationship between these, and > you don't even get libpython2.7.so.1.0 installed unless you explicitly > request it or it gets pulled in through some other dependency > - most python extension module packages do *not* depend on the > libpython2.7 package, and contain extension modules that are *not* > linked against libpython2.7.so.1.0 (but there are exceptions!) > - python extension modules compiled locally do *not* get linked > against libpython2.7.so.1 by default. > > The only things that seem to link against libpython2.7.so.1.0 in debian > are: > a) other packages that embed python (e.g. gnucash, paraview, perf, ...) > b) some minority of python packages (e.g. the PySide/QtOpenGL.so > module is one that I found that directly links to libpython2.7.so.1.0) > > I guess that the reason this works is that according to ELF linking > rules, the symbols defined in the main executable, or in the > transitive closure of the libraries that the main executable is linked > to via DT_NEEDED entries, are all injected into the global scope of > any dlopen'ed libraries. > > Uh, let me try saying that again. 
> > When you dlopen() a library -- like, for example, a python extension > module -- then the extension automatically gets access to any symbols > that are exported from either (a) the main executable itself, or (b) > any of the libraries that are listed if you run 'ldd <the > executable>'. It also gets access to any symbols that are exported by > <the library> itself, or any of the libraries listed if you run 'ldd <the > library>'. OTOH it does *not* get access to any symbols exported by > other libraries that get dlopen'ed -- each dlopen specifically creates > its own "scope". > > So the reason this works is that Debian's /usr/bin/python2.7 itself > exports all the standard Python C ABI symbols, so any extension module > that it loads automatically gets access to the CPython ABI, even if > they don't explicitly link to it. And programs like gnucash are linked > directly to libpython2.7.so.1, so they also end up exporting the > CPython ABI to any libraries that they dlopen. > > But, it seems to me that there are two problems with the Debian/Ubuntu > way of doing things: > 1) it's rather wasteful of space, since there are two complete > independent copies of the whole CPython runtime (one inside > /usr/bin/python2.7, the other inside libpython2.7.so.1). > 2) if I ever embed cpython by doing dlopen("libpython2.7.so.1"), or > dlopen("some_plugin_library_linked_to_libpython.so"), then the > embedded cpython will not be able to load python extensions that are > compiled in the Debian-style (but will be able to load python > extensions compiled in the Fedora-style), because the dlopen() that > loaded the python runtime and the dlopen() that loads the extension > module create two different scopes that can't see each other's > symbols. [I'm pretty sure this is right, but linking is arcane and > probably I should write some tests to double check.] > > I guess (2) might be why some of Debian's extension modules do link to > libpython2.7.so.1 directly? Or maybe that's just a bug?
> > Is there any positive reason in favor of the Debian style approach? > Clearly someone put some work into setting things up this way, so > there must be some motivation, but I'm not sure what it is? > > The immediate problem for us is that if a manylinux1 wheel links to > libpythonX.Y.so (Fedora-style), and then it gets run on a Debian > system that doesn't have libpythonX.Y.so installed, it will crash > with: > > ImportError: libpython2.7.so.1.0: cannot open shared object file: No > such file or directory > > Maybe this is okay and the solution is to tell people that they need > to 'apt install libpython2.7'. In a sense this isn't even a > regression, because every system that is capable of installing a > binary extension from an sdist has python2.7-dev installed, which > depends on libpython2.7 --> therefore every system that used to be > able to do 'pip install somebinary' with sdists will still be able to > do it with manylinux1 builds. > > The alternative is to declare that manylinux1 extensions should not > link to libpython. This should I believe work fine on both > Debian-style and Fedora-style installations -- though the PySide > example, and the theoretical issue with embedding python through > dlopen, both give me some pause. > > Two more questions: > - What are Debian/Ubuntu doing in distutils so that extensions don't > link to libpython by default? If we do go with the option of saying > that manylinux extensions shouldn't link to libpython, then that's > something auditwheel *can* fix up, but it'd be even nicer if we could > set up the docker image to get it right in the first place. > > - Can/should Debian/Ubuntu switch to the Fedora model? Obviously it > would take quite some time before a generic platform like manylinux > could assume that this had happened, but it does seem better to me...? > And if it's going to happen at all it might be nice to get the switch > into 16.04 LTS? 
Of course that's probably ambitious, even if I'm not > missing some reason why the Debian/Ubuntu model is actually > advantageous. > > -n > > [1] https://github.com/pypa/manylinux/issues/30 > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sun Feb 7 07:38:56 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 7 Feb 2016 13:38:56 +0100 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? References: Message-ID: <20160207133856.4e24daa7@fsol> On Sun, 7 Feb 2016 00:25:57 -0800 "Robert T. McGibbon" wrote: > > What are Debian/Ubuntu doing in distutils so that extensions don't link > to libpython by default? > > I don't know exactly, but one way to reproduce this is simply to build the > interpreter without `--enable-shared`. See https://bugs.python.org/issue21536. It would be nice if you could lobby for this issue to be resolved... (though that would only be for 3.6, presumably) > I don't know that their reasons are, but I presume that the Debian > maintainers have a well-considered reason for this design. Actually, shared library builds can be noticeably slower. I did measurements some time ago, and the results are: - shared builds are 5-10% slower on x86 - they can be up to 30% slower on some ARM CPUs! (this is on pystone which is a very crude benchmark, but in this case I think the pattern is more general, since any function call internal to Python is affected by the difference in code generation: shared library builds add an indirection overhead when resolving non-static symbols) Note btw. that Anaconda builds are also shared library builds. Regards Antoine. 
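[Editorial note: a given interpreter records whether it was configured with --enable-shared in its build-time configuration, which can be read back at runtime; this is a small illustrative helper, not code from any of these tools.]

```python
import sysconfig

def built_with_enable_shared():
    """True if this CPython was configured with --enable-shared, i.e. a
    libpythonX.Y.so was produced and the default build links against it.
    Note this alone doesn't reveal Debian's hybrid arrangement, where a
    shared libpython exists but /usr/bin/python is linked statically."""
    return bool(sysconfig.get_config_var('Py_ENABLE_SHARED'))
```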
From doko at ubuntu.com Sun Feb 7 12:15:06 2016 From: doko at ubuntu.com (Matthias Klose) Date: Sun, 7 Feb 2016 18:15:06 +0100 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: <20160207133856.4e24daa7@fsol> References: <20160207133856.4e24daa7@fsol> Message-ID: <56B77B9A.7010901@ubuntu.com> On 07.02.2016 13:38, Antoine Pitrou wrote: > On Sun, 7 Feb 2016 00:25:57 -0800 > "Robert T. McGibbon" wrote: >>> What are Debian/Ubuntu doing in distutils so that extensions don't link >> to libpython by default? >> >> I don't know exactly, but one way to reproduce this is simply to build the >> interpreter without `--enable-shared`. > > See https://bugs.python.org/issue21536. It would be nice if you could > lobby for this issue to be resolved... (though that would only be for > 3.6, presumably) > >> I don't know that their reasons are, but I presume that the Debian >> maintainers have a well-considered reason for this design. > > Actually, shared library builds can be noticeably slower. I did > measurements some time ago, and the results are: > - shared builds are 5-10% slower on x86 > - they can be up to 30% slower on some ARM CPUs! yes, that's the reason why the python executable is built statically against the libpython library. Matthias From matthew.brett at gmail.com Sun Feb 7 17:06:19 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 7 Feb 2016 14:06:19 -0800 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: <20160207133856.4e24daa7@fsol> References: <20160207133856.4e24daa7@fsol> Message-ID: On Sun, Feb 7, 2016 at 4:38 AM, Antoine Pitrou wrote: > On Sun, 7 Feb 2016 00:25:57 -0800 > "Robert T. McGibbon" wrote: >> > What are Debian/Ubuntu doing in distutils so that extensions don't link >> to libpython by default? >> >> I don't know exactly, but one way to reproduce this is simply to build the >> interpreter without `--enable-shared`. 
> > See https://bugs.python.org/issue21536. It would be nice if you could > lobby for this issue to be resolved... (though that would only be for > 3.6, presumably) Just to unpack from that issue - and quoting a nice summary by you (Antoine): "... the -l flag was added in #832799, for a rather complicated case where the interpreter is linked with a library dlopened by an embedding application (I suppose for some kind of plugin system)." Following the link to https://bugs.python.org/issue832799 - the `-l` flag (and therefore the dependency on libpython was added at Python 2.3 for the case where an executable A dlopens a library B.so . B.so has an embedded Python interpreter and is linked to libpython. However, when the embedded Python interpreter in B.so loads an extension module mymodule.so , mymodule.so does not inherit a namespace with the libpython symbols already loaded. See https://bugs.python.org/msg18810 . One option we have then is to remove all DT_NEEDED references to libpython in manylinux wheels. We get instant compatibility for bare Debian / Ubuntu Python installs, at the cost of causing some puzzling crash for the case of: dlopened library with embedded Python interpreter where the embedded Python interpreter imports a manylinux wheel. On the other hand, presumably this same crash will occur for nearly all Debian-packaged Python extension modules (if it is true that they do not specify a libpython dependency) - so it seems unlikely that this is a common problem. Cheers, Matthew From rmcgibbo at gmail.com Sun Feb 7 17:19:40 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Sun, 7 Feb 2016 14:19:40 -0800 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: References: <20160207133856.4e24daa7@fsol> Message-ID: > One option we have then is to remove all DT_NEEDED references to libpython in manylinux wheels. 
We get instant compatibility for bare Debian / Ubuntu Python installs, at the cost of causing some puzzling crash for the case of: dlopened library with embedded Python interpreter where the embedded Python interpreter imports a manylinux wheel. I don't think this is acceptable, since it's going to break some packages that depend on dlopen. > On the other hand, presumably this same crash will occur for nearly all Debian-packaged Python extension modules (if it is true that they do not specify a libpython dependency) - so it seems unlikely that this is a common problem. I don't think so. Debian-packaged extensions that require libpython to exist (a minority of them to be sure, but ones that use complex shared library layouts) just declare a dependency on libpython. For example, python-pyside has a Depends on libpython2.7: ``` $ apt-cache depends python-pyside.qtcore python-pyside.qtcore Depends: libc6 Depends: libgcc1 Depends: libpyside1.2 Depends: libpython2.7 Depends: libqtcore4 Depends: libshiboken1.2v5 Depends: libstdc++6 Depends: python Depends: python Conflicts: python-pyside.qtcore:i386 ``` -Robert On Sun, Feb 7, 2016 at 2:06 PM, Matthew Brett wrote: > On Sun, Feb 7, 2016 at 4:38 AM, Antoine Pitrou > wrote: > > On Sun, 7 Feb 2016 00:25:57 -0800 > > "Robert T. McGibbon" wrote: > >> > What are Debian/Ubuntu doing in distutils so that extensions don't > link > >> to libpython by default? > >> > >> I don't know exactly, but one way to reproduce this is simply to build > the > >> interpreter without `--enable-shared`. > > > > See https://bugs.python.org/issue21536. It would be nice if you could > > lobby for this issue to be resolved... (though that would only be for > > 3.6, presumably) > > Just to unpack from that issue - and quoting a nice summary by you > (Antoine): > > "... 
the -l flag was added in #832799, for a rather complicated case > where the interpreter is linked with a library dlopened by an > embedding application (I suppose for some kind of plugin system)." > > Following the link to https://bugs.python.org/issue832799 - the `-l` > flag (and therefore the dependency on libpython was added at Python > 2.3 for the case where an executable A dlopens a library B.so . B.so > has an embedded Python interpreter and is linked to libpython. > However, when the embedded Python interpreter in B.so loads an > extension module mymodule.so , mymodule.so does not inherit a > namespace with the libpython symbols already loaded. See > https://bugs.python.org/msg18810 . > > One option we have then is to remove all DT_NEEDED references to > libpython in manylinux wheels. We get instant compatibility for bare > Debian / Ubuntu Python installs, at the cost of causing some puzzling > crash for the case of: dlopened library with embedded Python > interpreter where the embedded Python interpreter imports a manylinux > wheel. > > On the other hand, presumably this same crash will occur for nearly > all Debian-packaged Python extension modules (if it is true that they > do not specify a libpython dependency) - so it seems unlikely that > this is a common problem. > > Cheers, > > Matthew > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sun Feb 7 17:33:45 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 7 Feb 2016 14:33:45 -0800 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: References: <20160207133856.4e24daa7@fsol> Message-ID: On Sun, Feb 7, 2016 at 2:19 PM, Robert T. 
McGibbon wrote: >> One option we have then is to remove all DT_NEEDED references to > libpython in manylinux wheels. We get instant compatibility for bare > Debian / Ubuntu Python installs, at the cost of causing some puzzling > crash for the case of: dlopened library with embedded Python > interpreter where the embedded Python interpreter imports a manylinux > wheel. > > I don't think this is acceptable, since it's going to break some packages > that depend > on dlopen. > >> On the other hand, presumably this same crash will occur for nearly > all Debian-packaged Python extension modules (if it is true that they > do not specify a libpython dependency) - so it seems unlikely that > this is a common problem. > > I don't think so. Debian-packaged extensions that require libpython to exist > (a minority of them to be sure, but ones that use complex shared library > layouts) > just declare a dependency on libpython. For example, python-pyside has a > Depends on libpython2.7: > > ``` > $ apt-cache depends python-pyside.qtcore > python-pyside.qtcore > Depends: libc6 > Depends: libgcc1 > Depends: libpyside1.2 > Depends: libpython2.7 > Depends: libqtcore4 > Depends: libshiboken1.2v5 > Depends: libstdc++6 > Depends: python > Depends: python > Conflicts: python-pyside.qtcore:i386 > ``` Sure - and this might be because the pyside packager was being especially careful about libpython, or it might be an accident - pyside is hard to build. On the other hand, it looks like almost all the common Debian packages don't declare this dependency - so almost all of the standard scientific Python stack and more would crash in this corner case: apt-cache depends python-numpy | grep libpython apt-cache depends python-scipy | grep libpython apt-cache depends python-yaml | grep libpython apt-cache depends python-regex | grep libpython apt-cache depends python-matplotlib | grep libpython It seems reasonable to build to the same compatibility level as most Debian packaged modules. 
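[Editorial note: beyond querying package metadata with apt-cache, one can ask the dynamic linker directly whether any libpython for the running interpreter is resolvable at all, which is what a manylinux wheel carrying a libpython DT_NEEDED entry would require; this helper is written here for illustration.]

```python
import ctypes.util
import sys

def libpython_resolvable():
    """Return the libpython the dynamic linker can locate for this
    interpreter version (e.g. 'libpython2.7.so.1.0'), or None if none
    is installed, as on a bare Debian/Ubuntu system."""
    return ctypes.util.find_library('python%d.%d' % sys.version_info[:2])
```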
Matthew From tseaver at palladion.com Sun Feb 7 18:07:36 2016 From: tseaver at palladion.com (Tres Seaver) Date: Sun, 7 Feb 2016 18:07:36 -0500 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: <20160206103500.GB23933@platonas> References: <20160206103500.GB23933@platonas> Message-ID: On 02/06/2016 05:35 AM, Marius Gedminas wrote: > And people who run build Python 2.7 with './configure && make && make > install' > > Why does upstream Python default to UCS-2 builds on Linux anyway? I don't recall if it had any bearing on the choice of default, but long-running processes with large quantities of mostly-8-bit-compatible text strings in RAM (Zope, nearly any other Eurocentric webapp) need measurably less memory with UCS-2. Tres. -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com From ncoghlan at gmail.com Sun Feb 7 22:23:44 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 8 Feb 2016 13:23:44 +1000 Subject: [Distutils] SOABI for Unicode
ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: References: <20160206103500.GB23933@platonas> Message-ID: On 8 February 2016 at 09:07, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 02/06/2016 05:35 AM, Marius Gedminas wrote: > >> And people who run build Python 2.7 with './configure && make && make >> install' >> >> Why does upstream Python default to UCS-2 builds on Linux anyway? > > I don't recall if it had any bearing on the choice of default, but > Long-running processes with large quantities of mostly-8-bit-compatible > text strings in RAM (Zope, nearly any other Eurocentric webapp) need > measurably less memory with UCS-2. They can also end up being a bit faster as well, since most of their strings are smaller, and hence less data copying is needed. That's why Python 3 ended up switching to the combination of adaptive bit width sizing for str instances and non-contiguous storage for io.StringIO: individual strings use a bit width based on the largest code point they contain, while io.StringIO's non-contiguous storage means that if you avoid calling getvalue(), only the segments that actually contain higher code points need to use the higher bit widths. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Feb 7 22:26:22 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 8 Feb 2016 13:26:22 +1000 Subject: [Distutils] SOABI for Unicode ABI on 2.x (was: wheel 0.27.0 released) In-Reply-To: <20160206112659.GA14644@platonas> References: <20160206103500.GB23933@platonas> <20160206112659.GA14644@platonas> Message-ID: On 6 February 2016 at 21:26, Marius Gedminas wrote: > On Sat, Feb 06, 2016 at 09:18:39PM +1000, Nick Coghlan wrote: >> On 6 February 2016 at 20:35, Marius Gedminas wrote: >> > FWIW the rationale Pyenv gave when they rejected a bug asking for UCS-4 >> > builds by default was "we prefer to follow upstream defaults". 
>> >> In this case, the old defaults are dubious, but the upstream fix >> eliminated the relevant setting. Historically, it didn't really >> matter, since very few people were building their own Python for >> Linux. >> >> However, if that was pyenv's only reason for rejecting a switch to >> wide unicode builds, it may be worth trying again, this time pointing >> them to PEP 513 and the wide-build default for Python 2.7 wheels in >> the manylinux build environment. > > Here's the issue, if you'd like to try: https://github.com/yyuu/pyenv/issues/257 > > (I don't use pyenv myself; all I know about this issue is from helping > other people debug problems on IRC.) The issue has been reopened: https://github.com/yyuu/pyenv/issues/257#issuecomment-181076545 However, they're still going to have a potential compatibility problem to deal with, since extensions built against a narrow Python build won't run against a wide one. As such, putting a narrow Python 2.7 build into the build environment and encouraging folks creating Python 2.7 wheels to upload both variants may still be a preferable option. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Feb 7 22:48:26 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 8 Feb 2016 13:48:26 +1000 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: References: <20160207133856.4e24daa7@fsol> Message-ID: On 8 February 2016 at 08:33, Matthew Brett wrote: > It seems reasonable to build to the same compatibility level as most > Debian packaged modules. 
Right, one of the key things to remember with manylinux1 is that it is, *quite deliberately*, only an 80% solution to the cross-distro lack-of-ABI-compatibility problem: we want to solve the simple cases now, and then move on to figuring out how to solve the more complex cases later (and applications that embed their own Python runtimes are a whole world of pain, in more ways than one). Since we know that extensions built against a statically linked CPython will run correctly against a dynamically linked one, then it probably makes sense to go down that path for the manylinux1 reference build environment. However, there's one particular test case we should investigate before committing to that path: loading manylinux1 wheels built against a statically linked CPython into a system httpd environment running the system mod_wsgi. If I've understood the problem description correctly, that *should* work, but if it doesn't, then it would represent a significant compatibility concern. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Mon Feb 8 00:27:40 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 7 Feb 2016 21:27:40 -0800 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: References: <20160207133856.4e24daa7@fsol> Message-ID: On Sun, Feb 7, 2016 at 7:48 PM, Nick Coghlan wrote: > On 8 February 2016 at 08:33, Matthew Brett wrote: >> It seems reasonable to build to the same compatibility level as most >> Debian packaged modules. > > Right, one of the key things to remember with manylinux1 is that it > is, *quite deliberately*, only an 80% solution to the cross-distro > lack-of-ABI-compatibility problem: we want to solve the simple cases > now, and then move on to figuring out how to solve the more complex > cases later (and applications that embed their own Python runtimes are > a whole world of pain, in more ways than one). 
> > Since we know that extensions built against a statically linked > CPython will run correctly against a dynamically linked one, then it > probably makes sense to go down that path for the manylinux1 reference > build environment. > > However, there's one particular test case we should investigate before > committing to that path: loading manylinux1 wheels built against a > statically linked CPython into a system httpd environment running the > system mod_wsgi. If I've understood the problem description correctly, > that *should* work, but if it doesn't, then it would represent a > significant compatibility concern. That's actually a great example of the case that "ought" to fail, because first you have a host program (apache2) that uses dlopen() to load in the CPython interpreter (implicitly, by dlopen'ing mod_wsgi.so, which is linked to libpython), and then the CPython interpreter turns around and tries to use dlopen() to load extension modules. Normally, this should work if and only if the extension modules are themselves linked against libpythonX.Y.so, --enable-shared / Fedora style. However, this is not Apache's first rodeo: revision 1.12 date: 1998/07/10 18:29:50; author: rasmus; state: Exp; lines: +2 -2 Set the RTLD_GLOBAL dlopen mode parameter to allow dynamically loaded modules to load their own modules dynamically. This improves mod_perl and mod_php3 when these modules are loaded dynamically into Apache. (Confirmation that this is still true: https://apr.apache.org/docs/apr/2.0/group__apr__dso.html#gaedc8609c2bb76e5c43f2df2281a9d8b6 -- also I ran mod_wsgi-express under LD_DEBUG=scopes and that also showed libpython2.7.so.1 getting added to the global scope.) Using RTLD_GLOBAL like this is the "wrong" thing -- it means that different Apache mods all get loaded into the same global namespace, and that means that they can have colliding symbols and step on each other's feet. 
E.g., this is the sole and entire reason why you can't load a python2 version of mod_wsgi and a python3 version of mod_wsgi into the same apache. But OTOH it means that Python extension modules will work even if they don't explicitly link against libpython. I also managed to track down two other programs that also follow this load-a-plugin-that-embeds-python pattern -- LibreOffice and xchat -- and they also both seem to use RTLD_GLOBAL. So even if it's the "wrong" thing, extension modules that don't explicitly link to libpython do seem to work reliably here in the world we have, and they're more compatible with Debian/Ubuntu and their massive market share, so... the whole manylinux1 strategy is nothing if not relentlessly pragmatic. I guess we should forbid linking to libpython in the PEP. [Note: I did not actually try loading any such modules into mod_wsgi or these other programs, because I have no idea how to use mod_wsgi or these other programs :-). The LD_DEBUG output is fairly definitive, but it wouldn't hurt for someone to double-check if they feel inspired...] -n -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Mon Feb 8 01:13:31 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 7 Feb 2016 22:13:31 -0800 Subject: [Distutils] Does anyone understand what's going on with libpython on Linux? In-Reply-To: References: Message-ID: On Sun, Feb 7, 2016 at 12:01 AM, Nathaniel Smith wrote: > 2) if I ever embed cpython by doing dlopen("libpython2.7.so.1"), or > dlopen("some_plugin_library_linked_to_libpython.so"), then the > embedded cpython will not be able to load python extensions that are > compiled in the Debian-style (but will be able to load python > extensions compiled in the Fedora-style), because the dlopen() the > loaded the python runtime and the dlopen() that loads the extension > module create two different scopes that can't see each other's > symbols. 
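The RTLD_GLOBAL-vs-RTLD_LOCAL distinction Nathaniel describes can be reproduced from Python via ctypes, which exposes dlopen()'s mode flags. This is only an illustration of the flag, not of mod_wsgi itself; it uses libm as a convenient stand-in library and assumes a Linux system with glibc:

```python
import ctypes
import ctypes.util

# Apache passes RTLD_GLOBAL when loading modules such as mod_wsgi.so,
# which is why libpython's symbols end up visible to later dlopen()
# calls. libm here is just a stand-in shared library (assumes Linux).
libm_path = ctypes.util.find_library("m") or "libm.so.6"

# Default mode is RTLD_LOCAL: symbols stay private to this handle.
libm_local = ctypes.CDLL(libm_path)

# RTLD_GLOBAL: symbols are added to the global scope, so libraries
# dlopen()ed afterwards can resolve against them.
libm_global = ctypes.CDLL(libm_path, mode=ctypes.RTLD_GLOBAL)

libm_global.cos.restype = ctypes.c_double
libm_global.cos.argtypes = [ctypes.c_double]
print(libm_global.cos(0.0))  # 1.0
```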
[I'm pretty sure this is right, but linking is arcane and > probably I should write some tests to double check.] Just to confirm, I did test this, and it is correct. Code at https://github.com/njsmith/test-link-namespaces if anyone is curious. -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Mon Feb 8 20:36:43 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 8 Feb 2016 17:36:43 -0800 Subject: [Distutils] Updates to PEP-513 Message-ID: Hi all, Based on recent discussions, I'm proposing the following update to PEP 513: https://github.com/pypa/manylinux/pull/31/files The changes to the text are: * Remove the UCS-4 requirement in favor of pip/wheel's new SOABI support. * Mandate that manylinux1 extensions not link against libpythonX.Y.so.1, for the complicated compatibility-related reasons that were discussed on list. * Update link for the docker images to their new PyPA home. Comments welcome, of course. I haven't sent this to the PEP editors yet, because I'm not sure if there's any particular ceremony needed for changing the semantics of a PEP that's already been pronounced upon? -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Tue Feb 9 05:31:02 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 9 Feb 2016 20:31:02 +1000 Subject: [Distutils] Updates to PEP-513 In-Reply-To: References: Message-ID: On 9 February 2016 at 11:36, Nathaniel Smith wrote: > Hi all, > > Based on recent discussions, I'm proposing the following update to PEP 513: > > https://github.com/pypa/manylinux/pull/31/files > > The changes to the text are: > > * Remove the UCS-4 requirement in favor of pip/wheel's new SOABI support. > * Mandate that manylinux1 extensions not link against > libpythonX.Y.so.1, for the complicated compatibility-related reasons > that were discussed on list. > * Update link for the docker images to their new PyPA home. +1 from me. > Comments welcome, of course. 
> > I haven't sent this to the PEP editors yet, because I'm not sure if > there's any particular ceremony needed for changing the semantics of a > PEP that's already been pronounced upon? No particular ceremony, as I'm treating these as bugs found by working on the reference implementation (in this case, the Docker build environment). Finding spec defects after approval is actually a pretty normal occurrence for PEPs targeting the CPython reference implementation, as some flaws only become apparent once you put something to the test of implementation (as happened here). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Tue Feb 9 16:24:19 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 10 Feb 2016 10:24:19 +1300 Subject: [Distutils] Idea: Positive/negative extras in general and as replacement for features In-Reply-To: <567061F9.1040500@ronnypfannschmidt.de> References: <567061F9.1040500@ronnypfannschmidt.de> Message-ID: Sorry for not replying for so long. On 16 December 2015 at 07:54, Ronny Pfannschmidt wrote: > Hello everyone, > > the talk about the sqlalchemy feature extra got me thinking > > what if i could specify extras that where installed by default, but > users could opt out > > a concrete example i have in mind is the default set of pytest plugins > i would like to be able to externalize legacy support details of pytest > without needing people to suddenly depend on pytest[recommended] instead > of pytest to have what they know function as is > > instead i would like to be able to do something like a dependency on > pytest[-nose,-unittest,-cache] to subtract the items So the challenge here will be defining good, useful, and predictable behaviour when the dependency graph is non-trivial. Using your example, what should pip do when told pip install pytest[nose, -unittest] proj2 and proj2 depends on pytest[unittest] ? 
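Robert's example can be made concrete with a toy check on the proposed +/- extras DSL. This is a hypothetical sketch — the function name and behaviour are illustrative, not pip's:

```python
# Merge every requested extras set for one package and see whether any
# extra is simultaneously demanded and forbidden.
def merge_extras(requested_sets):
    wanted, banned = set(), set()
    for extras in requested_sets:
        for extra in extras:
            if extra.startswith("-"):
                banned.add(extra[1:])
            else:
                wanted.add(extra)
    conflicts = wanted & banned
    if conflicts:
        raise ValueError("unsatisfiable extras: %s" % sorted(conflicts))
    return wanted - banned

# pip install pytest[nose,-unittest] proj2, where proj2 needs pytest[unittest]:
try:
    merge_extras([["nose", "-unittest"], ["unittest"]])
except ValueError as exc:
    print(exc)  # unsatisfiable extras: ['unittest']
```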
If it installs pytest[unittest], then the first pytest dependency was not honoured. If it does not install pytest[unittest], then the proj2 dependencies were not honoured. So it must error. -> Which means that using a [-THING] clause anywhere is going to be super fragile, as indeed 'never install with X' things are in distributions - it's much better to find ways to express things purely additively IMO. There are many more complex interactions possible with the + / - DSL you've sketched - and I don't think they are reducible - that is, it's not the DSL per se, but the basic feature of allowing cuts in the graph traversal that leads to that complexity. If we can come up with good, consistent, predictable answers, even considering three-party interactions, then I've no objection per se: but I think that will be very hard to do. I certainly don't have time at the moment to help - sorry :(. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Tue Feb 9 16:29:06 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 9 Feb 2016 13:29:06 -0800 Subject: [Distutils] Idea: Positive/negative extras in general and as replacement for features In-Reply-To: <567061F9.1040500@ronnypfannschmidt.de> References: <567061F9.1040500@ronnypfannschmidt.de> Message-ID: On Tue, Dec 15, 2015 at 10:54 AM, Ronny Pfannschmidt wrote: > Hello everyone, > > the talk about the sqlalchemy feature extra got me thinking > > what if i could specify extras that where installed by default, but > users could opt out If we reified extras (https://mail.python.org/pipermail/distutils-sig/2015-October/027364.html), and added support for weak "Recommends/Suggests" dependencies (https://mail.python.org/pipermail/distutils-sig/2016-February/028258.html), then those two features together would give you what you want (as well as solving a variety of other problems). -n -- Nathaniel J.
Smith -- https://vorpus.org From robertc at robertcollins.net Tue Feb 9 16:38:49 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 10 Feb 2016 10:38:49 +1300 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 27 January 2016 at 22:24, Paul Moore wrote: > On 27 January 2016 at 05:39, Robert Collins wrote: >>> - You and Paul were strongly in favor of splitting up "working directories" >>> and "sdists"; Robert and I didn't like the idea. The argument in favor is >>> that we have to support working directory -> sdist and sdist -> wheel, so >>> adding working directory -> wheel as an additional pathway creates larger >>> test matrices and more opportunities for things to go wrong; the argument >>> against is that this requires a new sdist format definition (currently it's >>> basically "a zipped up working dir"), and it breaks like incremental >>> rebuilds, which are a critical feature for developer adoption. >> >> That was something that occurred in the rabbit-hole discussion of your >> first draft; The PR Donald is proposing to accept is I think the >> fourth major iteration? Two from you, two from me - building on >> feedback each time. While I don't think Donalds position here has >> changed, the draft in question doesn't alter any semantics of anything >> - it captures a subset of the semantics such that the existing >> behaviour of pip can be modelled on the resulting interface. In >> particular, for this question, it retains 'develop', which is the >> primary developer mode in use today: it has no implications for or >> against incremental builds - it doesn't alter pips copy-before-build >> behaviour, which pip already has for non-develop installs. E.g. its >> moot; we can debate that further - and i'm sure we shall - but it has >> no impact on the interface. > > I'm still not clear on what the position is here. 
The PEP as I read it > still offers no means to ask the build system to produce a sdist, so > I'm concerned as to what will happen here. > In the absence of a "pip sdist" command I can see that you're saying > that we can implement pip's functionality without caring about this. > But fundamentally, people upload sdists and wheels to PyPI. A build > system that doesn't support sdists (which this PEP allows) will > therefore have to publish wheel-only builds to PyPI, and I am very > much against the idea of PyPI hosting "binary only" distributions. As you know - https://mail.python.org/pipermail/distutils-sig/2015-March/026013.html - flit /already/ does this. If we want to require sdists as a thing, PyPI needs to change to do that. I don't care whether it does or doesn't myself: folk that want to not ship sources will always find a way. I predict empty sdists, for instance. > If project foo has no sdist, "pip wheel foo" for a platform where > there's no compatible wheel available will fail, as there's no way for > pip to locate the source. That's true; but pip isn't a build system itself - and so far pip has no role in the uploading of wheels to PyPI. Nor does - last I checked - twine build things for you. The pattern is that developers prepare their artifacts, then ask twine to upload them. > So can we please revisit the question of whether build systems will be > permitted to refuse to generate sdists? Note that I don't care whether > we formally define a new sdist format, or go with something adhoc, or > whatever. All I care about is that the PEP states that build systems > must support generating a file that can be uploaded to PyPI and used > by pip to build a wheel as described above (not "git clone this > location and do 'pip wheel .'"). I think that making build systems > expose how you make such a file by requiring a "sdist" subcommand is a > reasonable approach, but I'm OK with naming the subcommand > differently. I truly have no opinion here.
I don't think it harms the abstract build system to have it, but I do know that Donald very much does not want pip to have to know or care about building sdists per se, and he may see this as an attractive nuisance. Who / what tool do we expect to use the sdist command in the abstract interface? > Thanks, > Paul > > PS I agree with Nathaniel, insofar as there were definitely a number > of unresolved points remaining after the various public discussions, > and I don't think it's reasonable to say that any proposal is "ready > for implementation" without a public confirmation that those issues > have been resolved. I suspect it had been quiescent too long and folk had paged out the state. Certainly I don't think Donald intended to bypass any community debate, and neither did I. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Tue Feb 9 17:40:45 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 9 Feb 2016 22:40:45 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 9 February 2016 at 21:38, Robert Collins wrote: >> In the absence of a "pip sdist" command I can see that you're saying >> that we can implement pip's functionality without caring about this. >> But fundamentally, people upload sdists and wheels to PyPI. A build >> system that doesn't support sdists (which this PEP allows) will >> therefore have to publish wheel-only builds to PyPI, and I am very >> much against the idea of PyPI hosting "binary only" distributions. > > As you know - https://mail.python.org/pipermail/distutils-sig/2015-March/026013.html > - flit /already/ does this. If we want to require sdists as a thing, > PyPI needs to change to do that. I don't care whether it does or > doesn't myself: folk that want to not ship sources will always find a > way. I predict empty sdists, for instance. I know flit does this.
We're obviously never going to be able to *stop* people generating wheels any way they like. But there is value in having a standard invocation that builds the wheel - anyone who has ever used make is aware that a standardised build command has value. At the moment, that standard is "pip wheel foo". Or maybe it's "setup.py bdist_wheel", but I prefer the layer of build system abstraction offered by pip. > >> If project foo has no sdist, "pip wheel foo" for a platform where >> there's no compatible wheel available will fail, as there's no way for >> pip to locate the source. > > That's true; but pip isn't a build system itself - and so far pip has > no role in the uploading of wheels to PyPI. Nor does - last I checked > - twine build things for you. The pattern is that developers prepare > their artifacts, then ask twine to upload them. As someone who frequently builds wheels for multiple projects, where those projects don't have complex build requirements but the projects themselves do not upload wheels, I find significant value in being able to just do "pip wheel foo". Maybe it's not pip's core functionality, maybe in the long term it'll be replaced by a separate utility, but the key point is that one command, given a project name, locates it on PyPI, downloads it and builds the source into a wheel. Having that capability is a *huge* benefit (I remember the days before distutils and PyPI, when every single package build was a process of locating files on obscure websites, and struggling with custom build processes - I'm admittedly over-sensitive about the risks of going back to that). I don't care in the slightest how build systems implement " sdist". They can write a custom "setup.py" script that downloads the real sources from github and builds them using flit, for all I care.
What I do care about is ensuring that if a project hasn't published a wheel for some particular environment, and the project is *not* actively trying to avoid publishing sources, then we provide them with a means to publish those sources in such a way that "pip wheel theproject" (or whatever command replaces it as the canonical "build a wheel" command) can automate the process of generating a wheel for the user's platform. Of course projects who want to make their sources unavailable can, and will. I'm not trying to force them not to. Of course users can manually locate the sources and build from them. I believe that is a major step back in usability. Of course projects can upload wheels. They will never cover all platforms. Longer term, do we want a standard "source distribution" format? I thought we did (Nathaniel's proposals generated a lot of debate, but no-one was saying "let's just leave projects to do whatever they want, there's no point standardising"). If we do, then a standard process for building a wheel from that format seems like an obvious thing to expect. On the way to that goal, let's not start off by going backwards and abandoning the standard we currently have in "pip wheel project-with-source-on-pypi". >> So can we please revisit the question of whether build systems will be >> permitted to refuse to generate sdists? Note that I don't care whether >> we formally define a new sdist format, or go with something adhoc, or >> whatever. All I care about is that the PEP states that build systems >> must support generating a file that can be uploaded to PyPI and used >> by pip to build a wheel as described above (not "git clone this >> location and do 'pip wheel .'"). I think that making build systems >> expose how you make such a file by requiring a "sdist" subcommand is a >> reasonable approach, but I'm OK with naming the subcommand >> differently. > > I truly have no opinion here.
I don't think it harms the abstract > build system to have it, but I do know that Donald very much does not > want pip to have to know or care about building sdists per se, and he > may see this as an attractive nuisance. As I said above, all I really care about is that "pip wheel foo" either continue to work for any project with sources published on PyPI, or that we have a replacement available before we deprecate/remove it. IMO, requiring build systems to support generating "something that pip can treat as a sdist" is a less onerous solution in the short term than requiring pip to work out how to support "pip wheel" to create wheels from source in the absence of a sdist command. Longer term, we'll have to work out how to move away from the current sdist format, but I can't conceive of a way that we can have a standard tool (whether pip or something else) that builds wheels from PyPI-hosted sources unless we have *some* commonly-agreed format - and the "build system generate-a-source-file command" can remain as an interface for creating that format. > Who / what tool do we expect to use the sdist command in the abstract interface? In the short term, "pip wheel foo". Longer term, if we want to remove that functionality from pip, then whatever command we create to replace it. I do not accept any proposal that removes "pip wheel " without providing *any* replacement. >> PS I agree with Nathaniel, insofar as there were definitely a number >> of unresolved points remaining after the various public discussions, >> and I don't think it's reasonable to say that any proposal is "ready >> for implementation" without a public confirmation that those issues >> have been resolved. > > I suspect it had been quiescent too long and folk had paged out the > state. Certainly I don't think Donald intended to bypass any community > debate, and neither did I.
I just wanted to raise a red flag that (at least in my view) there were unanswered concerns about the proposal that needed to be addressed before it could be signed off. Donald's note certainly gave the impression that he had forgotten about those issues. Paul From njs at pobox.com Tue Feb 9 17:59:24 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 9 Feb 2016 14:59:24 -0800 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On Tue, Feb 9, 2016 at 2:40 PM, Paul Moore wrote: > On 9 February 2016 at 21:38, Robert Collins wrote: >>> In the absence of a "pip sdist" command I can see that you're saying >>> that we can implement pip's functionality without caring about this. >>> But fundamentally, people upload sdists and wheels to PyPI. A build >>> system that doesn't support sdists (which this PEP allows) will >>> therefore have to publish wheel-only builds to PyPI, and I am very >>> much against the idea of PyPI hosting "binary only" distributions. >> >> As you know - https://mail.python.org/pipermail/distutils-sig/2015-March/026013.html >> - flit /already/ does this. If we want to require sdists as a thing, >> PyPI needs to change to do that. I don't care whether it does or >> doesn't myself: folk that want to not ship sources will always find a >> way. I predict empty sdists, for instance. > > I know flit does this. We're obviously never going to be able to > *stop* people generating wheels any way they like. But there is value > in having a standard invocation that builds the wheel - anyone who has > ever used make is aware that a standardised build command has value. > > At the moment, that standard is "pip wheel foo". Or maybe it's > "setup.py bdist_wheel", but I prefer the layer of build system > abstraction offered by pip. 
Everyone agrees that we absolutely need to support "pip wheel packagename" and "pip wheel sdist.zip" and "pip wheel some-path/". The (potential) disagreement is about whether the most primitive input to "pip wheel" is an sdist or a directory tree. I.e., option A is that "pip wheel some-path/" is implemented as "pip sdist some-path/ && pip wheel some-path-sdist.zip", and option B is that "pip wheel sdist.zip" is implemented as "unzip sdist.zip && pip wheel sdist-tree/". Robert's draft PEP doesn't explicitly come down on either side, but in practice I think that at least to start with it would require the "option B" approach, since it doesn't provide any "pip sdist" command. >>> If project foo has no sdist, "pip wheel foo" for a platform where >>> there's no compatible wheel available will fail, as there's no way for >>> pip to locate the source. >> >> That's true; but pip isn't a build system itself - and so far pip has >> no role in the uploading of wheels to PyPI. Nor does - last I checked >> - twine build things for you. The pattern is that developers prepare >> their artifacts, then ask twine to upload them. > > As someone who frequently builds wheels for multiple projects, where > those projects don't have complex build requirements but the projects > themselves do not upload wheels, I find significant value in being > able to just do "pip wheel foo". Maybe it's not pip's core > functionality, maybe in the long term it'll be replaced by a separate > utility, but the key point is that one command, given a project name, > locates it on PyPI, downloads it and builds the source into a wheel. > > Having that capability is a *huge* benefit (I remember the days before > distutils and PyPI, when every single package build was a process of > locating files on obscure websites, and struggling with custom build > processes - I'm admittedly over-sensitive about the risks of going > back to that). > > I don't care in the slightest how build systems implement > " sdist".
They can write a custom "setup.py" script that > downloads the real sources from github and builds them using flit, for > all I care. > > What I do care about is ensuring that if a project hasn't published a > wheel for some particular environment, and the project is *not* > actively trying to avoid publishing sources, then we provide them with > a means to publish those sources in such a way that "pip wheel > theproject" (or whatever command replaces it as the canonical "build a > wheel" command) can automate the process of generating a wheel for the > user's platform. Both of the options under discussion give us all of these properties. You can absolutely implement a "pip wheel" command that handles all these cases without needing any "pip sdist" command. > Of course projects who want to make their sources unavailable can, and > will. I'm not trying to force them not to. > Of course users can manually locate the sources and build from them. I > believe that is a major step back in usability. > Of course projects can upload wheels. They will never cover all platforms. > > Longer term, do we want a standard "source distribution" format? I > thought we did (Nathaniel's proposals generated a lot of debate, but > no-one was saying "let's just leave projects to do whatever they want, > there's no point standardising"). If we do, then a standard process > for building a wheel from that format seems like an obvious thing to > expect. On the way to that goal, let's not start off by going > backwards and abandoning the standard we currently have in "pip wheel > project-with-source-on-pypi". > >>> So can we please revisit the question of whether build systems will be >>> permitted to refuse to generate sdists? Note that I don't care whether >>> we formally define a new sdist format, or go with something adhoc, or >>> whatever.
All I care about is that the PEP states that build systems >>> must support generating a file that can be uploaded to PyPI and used >>> by pip to build a wheel as described above (not "git clone this >>> location and do 'pip wheel .'"). I think that making build systems >>> expose how you make such a file by requiring a "sdist" subcommand is a >>> reasonable approach, but I'm OK with naming the subcommand >>> differently. >> >> I truly have no opinion here. I don't think it harms the abstract >> build system to have it, but I do know that Donald very much does not >> want pip to have to know or care about building sdists per se, and he >> may see this as an attractive nuisance. > > As I said above, all I really care about is that "pip wheel foo" > either continue to work for any project with sources published on > PyPI, or that we have a replacement available before we > deprecate/remove it. > > IMO, requiring build systems to support generating "something that pip > can treat as a sdist" is a less onerous solution in the short term > than requiring pip to work out how to support "pip wheel" to create > wheels from source in the absence of a sdist command. What's onerous about it? Right now sdists are just zipped up source trees; the way you generate a wheel from an sdist is to unpack the zipfile and then build from the resulting tree. If you *start* with a tree, then the only reason you would need an sdist command is if you want to zip up the tree and then unzip it again. There are reasons you might want to do that, but skipping it would hardly be an extra burden :-). -n -- Nathaniel J.
Smith -- https://vorpus.org From robertc at robertcollins.net Tue Feb 9 18:19:07 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 10 Feb 2016 12:19:07 +1300 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 10 February 2016 at 11:40, Paul Moore wrote: > On 9 February 2016 at 21:38, Robert Collins wrote: >>> In the absence of a "pip sdist" command I can see that you're saying >>> that we can implement pip's functionality without caring about this. >>> But fundamentally, people upload sdists and wheels to PyPI. A build >>> system that doesn't support sdists (which this PEP allows) will >>> therefore have to publish wheel-only builds to PyPI, and I am very >>> much against the idea of PyPI hosting "binary only" distributions. >> >> As you know - https://mail.python.org/pipermail/distutils-sig/2015-March/026013.html >> - flit /already/ does this. If we want to require sdists as a thing, >> PyPI needs to change to do that. I don't care whether it does or >> doesn't myself: folk that want to not ship sources will always find a >> way. I predict empty sdists, for instance. > > I know flit does this. We're obviously never going to be able to > *stop* people generating wheels any way they like. But there is value > in having a standard invocation that builds the wheel - anyone who has > ever used make is aware that a standardised build command has value. > > At the moment, that standard is "pip wheel foo". Or maybe it's > "setup.py bdist_wheel", but I prefer the layer of build system > abstraction offered by pip. But 'pip wheel foo' already reuses existing wheels: things like flit that upload universal wheels work with 'pip wheel foo' just fine (as long as --no-binary has not been passed). Things that upload an sdist will also still work, unchanged, with this proposal. 
>>> If project foo has no sdist, "pip wheel foo" for a platform where >>> there's no compatible wheel available will fail, as there's no way for >>> pip to locate the source. >> >> That's true; but pip isn't a build system itself - and so far pip has >> no role in the uploading of wheels to PyPI. Nor does - last I checked >> - twine build things for you. The pattern is that developers prepare >> their artifacts, then ask twine to upload them. > > As someone who frequently builds wheels for multiple projects, where > those projects don't have complex build requirements but the projects > themselves do not upload wheels, I find significant value in being > able to just do "pip wheel foo". Maybe it's not pip's core > functionality, maybe in the long term it'll be replaced by a separate > utility, but the key point is that one command, given a project name, > locates it on PyPI, downloads it and builds the source into a wheel. > > Having that capability is a *huge* benefit (I remember the days before > distutils and PyPI, when every single package build was a process of > locating files on obscure websites, and struggling with custom build > processes - I'm admittedly over-sensitive about the risks of going > back to that). > > I don't care in the slightest how build systems implement > " sdist". They can write a custom "setup.py" script that > downloads the real sources from github and builds them using flit, for > all I care. I'm struggling to connect 'pip wheel foo' to 'they can do an sdist'. 'pip wheel foo' does not invoke sdist. There is a bug proposing that it should ( https://github.com/pypa/pip/issues/2195 and https://github.com/pypa/pip/pull/3219 ) - but that proposal is contentious - see the ongoing related debate about editable mode / incremental builds and also the discussion on PR 3219.
> What I do care about is ensuring that if a project hasn't published a > wheel for some particular environment, and the project is *not* > actively trying to avoid publishing sources, then we provide them with > a means to publish those sources in such a way that "pip wheel > theproject" (or whatever command replaces it as the canonical "build a > wheel" command) can automate the process of generating a wheel for the > user's platform. Sure, they should have that. I don't understand why pip's *abstraction* over project *building* should know about that. Issue 2195 aside, which seems to be entirely about avoiding one particular developer failure mode, and other than that makes no sense to me. > Of course projects who want to make their sources unavailable can, and > will. I'm not trying to force them not to. > Of course users can manually locate the sources and build from them. I > believe that is a major step back in usability. > Of course projects can upload wheels. They will never cover all platforms. > > Longer term, do we want a standard "source distribution" format? I > thought we did (Nathaniel's proposals generated a lot of debate, but > no-one was saying "let's just leave projects to do whatever they want, > there's no point standardising"). If we do, then a standard process > for building a wheel from that format seems like an obvious thing to > expect. On the way to that goal, let's not start off by going > backwards and abandoning the standard we currently have in "pip wheel > project-with-source-on-pypi". We certainly want to have any metadata published in an sdist be trustable, which isn't a format change thing per se - but in a sense, my proposal is a new sdist format: rather than setup.py, it has pypa.json, the rest is the same. There's never been any guarantee that an sdist can produce a new sdist - and there are lots of ways that that can break, already.
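Robert's description of the proposal - today's sdist layout, but with pypa.json in place of setup.py as the build-system entry point - can be made concrete with a short sketch. This is illustrative only: the file name pypa.json comes from his draft, but the keys read from it and the helper name are assumptions, not text from the PEP.

```python
import json
import tempfile
from pathlib import Path

def detect_build_interface(tree):
    """Report which build-system entry point an unpacked source tree carries.

    The proposed format is identical to today's sdists except that pypa.json
    replaces setup.py; the "build_command" key used here is hypothetical.
    """
    root = Path(tree)
    pypa = root / "pypa.json"
    if pypa.is_file():
        config = json.loads(pypa.read_text())
        return ("pypa.json", config.get("build_command"))
    if (root / "setup.py").is_file():
        # Legacy interface: drive the build through setup.py as today.
        return ("setup.py", ["python", "setup.py", "bdist_wheel"])
    raise ValueError("no recognised build interface in %s" % tree)

# Demo: a minimal tree using the proposed entry point.
demo = Path(tempfile.mkdtemp())
(demo / "pypa.json").write_text(json.dumps({"build_command": ["flit", "wheel"]}))
kind, command = detect_build_interface(demo)
```

The point of the sketch is that an installer's dispatch stays trivial: everything else about the archive (the NAME-VERSION naming, the zipped tree) is unchanged from today's sdists.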
>>> So can we please revisit the question of whether build systems will be >>> permitted to refuse to generate sdists? Note that I don't care whether >>> we formally define a new sdist format, or go with something adhoc, or >>> whatever. All I care about is that the PEP states that build systems >>> must support generating a file that can be uploaded to PyPI and used >>> by pip to build a wheel as described above (not "git clone this >>> location and do 'pip wheel .'"). I think that making build systems >>> expose how you make such a file by requiring a "sdist" subcommand is a >>> reasonable approach, but I'm OK with naming the subcommand >>> differently. >> >> I truly have no opinion here. I don't think it harms the abstract >> build system to have it, but I do know that Donald very much does not >> want pip to have to know or care about building sdists per se, and he >> may see this as an attractive nuisance. > > As I said above, all I really care about is that "pip wheel foo" > either continue to work for any project with sources published on > PyPI, or that we have a replacement available before we > deprecate/remove it. The current proposal doesn't stop it working, and doesn't make it any easier for it to stop working. It is, AFAICT, unrelated / orthogonal. > IMO, requiring build systems to support generating "something that pip > can treat as a sdist" is a less onerous solution in the short term Which we already do/have. Remember the abstraction is not the build system. It's not intended to - and it would be counterproductive for it to be - an end user interface to build systems. They are too varied, with too many different styles, for consolidating all that as an end user tool to make sense. > than requiring pip to work out how to support "pip wheel" to create > wheels from source in the absence of a sdist command.
Longer term, > we'll have to work out how to move away from the current sdist format, > but I can't conceive of a way that we can have a standard tool > (whether pip or something else) that builds wheels from PyPI-hosted > sources unless we have *some* commonly-agreed format - and the "build > system generate-a-source-file command" can remain as an interface for > creating that format. > >> Who / what tool do we expect to use the sdist command in the abstract interface? > > In the short term, "pip wheel foo". > Longer term, if we want to remove that functionality from pip, then > whatever command we create to replace it. So - to be clear - you're proposing that PR 3219 be merged? > I do not accept any proposal that removes "pip wheel <project>" without > providing *any* replacement. But the current proposal *DOES NOT REMOVE IT*. We're clearly miscommunicating about something :). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Tue Feb 9 18:56:05 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 9 Feb 2016 23:56:05 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 9 February 2016 at 22:59, Nathaniel Smith wrote: >> At the moment, that standard is "pip wheel foo". Or maybe it's >> "setup.py bdist_wheel", but I prefer the layer of build system >> abstraction offered by pip. > > Everyone agrees that we absolutely need to support "pip wheel > packagename" and "pip wheel sdist.zip" and "pip wheel some-path/". > The (potential) disagreement is about whether the most primitive input > to "pip wheel" is an sdist or a directory tree. > > I.e., option A is that "pip wheel some-path/" is implemented as "pip > sdist some-path/ && pip wheel some-path-sdist.zip", and option B is > "pip wheel sdist.zip" implemented as "unzip sdist.zip && pip wheel > sdist-tree/".
Robert's draft PEP doesn't explicitly come down on > either side, but in practice I think that at least to start with it > would require the "option B" approach, since it doesn't provide any > "pip sdist" command. Hmm, I see what you're getting at. But if project X uses flit, what do they need to do to upload something to PyPI so that "pip wheel X" builds a wheel? Let's assume they aren't actively trying to hide their sources, but equally they aren't looking to do a load of work beyond their normal workflow just to support that usage. And if a project currently using setuptools wants to switch to flit, how do they upload what's needed to allow pip to install on platforms where they don't supply wheels? Maybe that's not functionality pip *needs* to do its job, but it seems far too likely to me that projects will build and upload wheels for some platforms, and then say "to get the source go to ". With luck they'll include build instructions, although they may just be "git clone ; pip wheel ." I have set up a number of systems that automate wheel builds - these days, that's as simple as putting "pip wheel -r mypackages.txt" in a scheduled job. I do *not* want to have to go back to maintaining a mapping from project to source URL, or working out how to get things to work on systems without git installed and how to detect the correct way of getting the latest release version from a source repo. Documenting that build systems are required to define how they support the operation of "build a release source file that can be uploaded to PyPI" makes it clear that we consider this to be an important operation. Even if it's not technically needed for pip to operate. 
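The scheduled "pip wheel -r mypackages.txt" automation Paul describes works because an installer can recognise a project's source release from its file name alone. A rough sketch of that de facto convention - NAME-VERSION plus a known archive extension - purely for illustration; this is not pip's actual implementation:

```python
import re

# The de facto sdist naming convention: NAME-VERSION with a known archive
# extension.  The regex is an illustration, not pip's real parser.
SDIST_RE = re.compile(
    r"^(?P<name>.+?)-(?P<version>\d[^-]*)\.(?P<ext>zip|tar\.gz|tar\.bz2)$"
)

def parse_sdist_filename(filename):
    m = SDIST_RE.match(filename)
    if m is None:
        raise ValueError("not a recognisable sdist name: %r" % filename)
    return m.group("name"), m.group("version")

print(parse_sdist_filename("flit-0.8.tar.gz"))  # ('flit', '0.8')
```

Anything a build system publishes that follows this convention can be found and downloaded by name, which is exactly the property the automated-build workflow depends on.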
>> What I do care about is ensuring that if a project hasn't published a >> wheel for some particular environment, and the project is *not* >> actively trying to avoid publishing sources, then we provide them with >> a means to publish those sources in such a way that "pip wheel >> theproject" (or whatever command replaces it as the canonical "build a >> wheel" command) can automate the process of generating a wheel for the >> user's platform. > > Both of the options under discussion give us all of these properties. > You can absolutely implement a "pip wheel" command that handles all > these cases without needing any "pip sdist" command. I don't think (but it is some time ago, and I haven't reread the thread yet) I ever said that I want "pip sdist" to be mandated. (I have no real opinion on that). What I want is that we make it clear that build systems need to provide a means to create "a source bundle that can be uploaded to PyPI that pip can use to build wheels" (I'll buy a beer for anyone who can come up with an acceptable name for this that doesn't make people think "but we want to get away from the sdist format" :-)) I also want people to agree that it's a good thing to expect build systems to support creating such a source artifact. I'm aware that this "source artifact" may well be what you were proposing be a "directory tree". That's another debate that as far as I know petered out without conclusion. I'm trying to keep the two discussions separate - for the purposes of *this* discussion, whether it's a (zipped) source tree or a sdist (current or 2.0 format) or something else is beside the point (except insofar as how we name the command we expect the build system to provide :-)) >> IMO, requiring build systems to support generating "something that pip >> can treat as a sdist" is a less onerous solution in the short term >> than requiring pip to work out how to support "pip wheel" to create >> wheels from source in the absence of a sdist command.
> > What's onerous about it? Right now sdists are just zipped up source > trees; the way you generate a wheel from an sdist is to unpack the > zipfile and then build from the resulting tree. If you *start* with a > tree, then the only reason you would need an sdist command is if you > want to zip up the tree and then unzip it again. There are reasons you > might want to do that, but skipping it would hardly be an extra burden What I meant was that right now, sdists contain a setup.py that pip can use to build a wheel. By "less onerous" I meant that pip doesn't change and build tool X writes out a setup.py that does subprocess.call(['build-tool-x', '.']) (massively oversimplified, but you get the point). But on reflection, pip has to change anyway to detect what build tool to use, and the whole point of this proposal is that pip can then ask the build tool how to create a wheel, so the minimal set of changes needed in pip make the remaining steps trivial. So I'm wrong on this point - sorry for the confusion. OK, I'm part convinced. What's in the proposal is fine. But I strongly want to avoid giving the impression that we consider providing a means to build a "source thingy that can go on PyPI" in any way optional. That may be nothing more than a filename convention that must be followed, but even that is not something I think we should expect people to get right by hand. And I don't think it's wise to open up support of other build systems without resolving that question at least. Let me think on this some more. You've convinced me that the risk isn't as high as I'd originally thought, but I suspect the PEP might need a bit more explanatory context. 
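Paul's "massively oversimplified" shim can be fleshed out a little. This is a sketch under the same assumption he makes: "build-tool-x" and its "wheel" subcommand are invented names standing in for the project's real build tool, not an actual program.

```python
# Sketch of the "shim" setup.py described above: a source release built with
# a non-setuptools tool could ship a setup.py like this so that today's pip
# can still drive a build.  "build-tool-x" and its "wheel" subcommand are
# invented for illustration.
import subprocess

def main(argv, run=subprocess.check_call):
    """Dispatch the one command pip cares about to the real build tool.

    `run` is injectable so the dispatch logic can be exercised without the
    fictional tool being installed; a real shim would call
    main(sys.argv[1:]) at module level.
    """
    if "bdist_wheel" in argv:
        run(["build-tool-x", "wheel", "."])  # delegate the actual build
        return 0
    return "this project must be built with build-tool-x"

# Record the command instead of executing it, to show the dispatch.
calls = []
status = main(["bdist_wheel"], run=calls.append)
```

As the surrounding discussion notes, once pip can ask the build tool directly how to create a wheel, this kind of shim becomes unnecessary; it only matters for tools that must keep working with the setup.py-driven interface.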
Paul From njs at pobox.com Tue Feb 9 19:06:24 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 9 Feb 2016 16:06:24 -0800 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: The state of the art right now is that sdist generation is already a quirky build-system-specific interface that isn't used by any other tools -- 'python setup.py sdist --whatever=whatever' -- and that the de facto sdist format is 'a zip/tarball named NAME-VERSION.EXT which contains a setup.py'. So nothing in the current build interface proposals makes this any better, or any worse. I understand your concern that we want to encourage people to upload sdists, but if it reassures you any, recall that part of what started this whole discussion is that flit already *wants* to upload sdists -- the problem is just that they currently can't :-). See also https://github.com/takluyver/flit/issues/74 So there are definitely things we can do to improve sdists in general, but I think we can treat that as an independent discussion, and I'm not terribly worried that distutils-sig supervision is necessary to prevent build system authors from running free ignoring the concept of source releases. And if they do then we can shake a finger at them when it happens :-) -n On Tue, Feb 9, 2016 at 3:56 PM, Paul Moore wrote: > On 9 February 2016 at 22:59, Nathaniel Smith wrote: >>> At the moment, that standard is "pip wheel foo". Or maybe it's >>> "setup.py bdist_wheel", but I prefer the layer of build system >>> abstraction offered by pip. >> >> Everyone agrees that we absolutely need to support "pip wheel >> packagename" and "pip wheel sdist.zip" and "pip wheel some-path/". >> The (potential) disagreement is about whether the most primitive input >> to "pip wheel" is an sdist or a directory tree.
>> >> I.e., option A is that "pip wheel some-path/" is implemented as "pip >> sdist some-path/ && pip wheel some-path-sdist.zip", and option B is >> "pip wheel sdist.zip" implemented as "unzip sdist.zip && pip wheel >> sdist-tree/". Robert's draft PEP doesn't explicitly come down on >> either side, but in practice I think that at least to start with it >> would require the "option B" approach, since it doesn't provide any >> "pip sdist" command. > > Hmm, I see what you're getting at. > > But if project X uses flit, what do they need to do to upload > something to PyPI so that "pip wheel X" builds a wheel? Let's assume > they aren't actively trying to hide their sources, but equally they > aren't looking to do a load of work beyond their normal workflow just > to support that usage. > > And if a project currently using setuptools wants to switch to flit, > how do they upload what's needed to allow pip to install on platforms > where they don't supply wheels? > > Maybe that's not functionality pip *needs* to do its job, but it seems > far too likely to me that projects will build and upload wheels for > some platforms, and then say "to get the source go to site>". With luck they'll include build instructions, although they > may just be "git clone ; pip wheel ." > > I have set up a number of systems that automate wheel builds - these > days, that's as simple as putting "pip wheel -r mypackages.txt" in a > scheduled job. I do *not* want to have to go back to maintaining a > mapping from project to source URL, or working out how to get things > to work on systems without git installed and how to detect the correct > way of getting the latest release version from a source repo. > > Documenting that build systems are required to define how they support > the operation of "build a release source file that can be uploaded to > PyPI" makes it clear that we consider this to be an important > operation. Even if it's not technically needed for pip to operate. 
> >>> What I do care about is ensuring that if a project hasn't published a >>> wheel for some particular environment, and the project is *not* >>> actively trying to avoid publishing sources, then we provide them with >>> a means to publish those sources in such a way that "pip wheel >>> theproject" (or whatever command replaces it as the canonical "build a >>> wheel" command) can automate the process of generating a wheel for the >>> user's platform. >> >> Both of the options under discussion give us all of these properties. >> You can absolutely implement a "pip wheel" command that handles all >> these cases without needing any "pip sdist" command. > > I don't think (but it is some time ago, and I haven't reread the > thread yet) I ever said that I want "pip sdist" to be mandated. (I > have no real opinion on that). What I want is that we make it clear > that build systems need to provide a means to create "a source bundle > that can be uploaded to PyPI that pip can use to build wheels" (I'll > buy a beer for anyone who can come up with an acceptable name for this > that doesn't make people think "but we want to get away from the sdist > format" :-)) > > I also want people to agree that it's a good thing to expect build > systems to support creating such a source artifact. > > I'm aware that this "source artifact" may well be what you were > proposing be a "directory tree". That's another debate that as far as > I know petered out without conclusion.
I'm trying to keep the two > discussions separate - for the purposes of *this* discussion, whether > it's a (zipped) source tree or a sdist (current or 2.0 format) or > something else is beside the point (except insofar as how we name the > command we expect the build system to provide :-)) > >>> IMO, requiring build systems to support generating "something that pip >>> can treat as a sdist" is a less onerous solution in the short term >>> than requiring pip to work out how to support "pip wheel" to create >>> wheels from source in the absence of a sdist command. >> >> What's onerous about it? Right now sdists are just zipped up source >> trees; the way you generate a wheel from an sdist is to unpack the >> zipfile and then build from the resulting tree. If you *start* with a >> tree, then the only reason you would need an sdist command is if you >> want to zip up the tree and then unzip it again. There are reasons you >> might want to do that, but skipping it would hardly be an extra burden > > What I meant was that right now, sdists contain a setup.py that pip > can use to build a wheel. By "less onerous" I meant that pip doesn't > change and build tool X writes out a setup.py that does > > subprocess.call(['build-tool-x', '.']) > > (massively oversimplified, but you get the point). > > But on reflection, pip has to change anyway to detect what build tool > to use, and the whole point of this proposal is that pip can then ask > the build tool how to create a wheel, so the minimal set of changes > needed in pip make the remaining steps trivial. So I'm wrong on this > point - sorry for the confusion. > > OK, I'm part convinced. What's in the proposal is fine. But I strongly > want to avoid giving the impression that we consider providing a means > to build a "source thingy that can go on PyPI" in any way optional. 
> That may be nothing more than a filename convention that must be > followed, but even that is not something I think we should expect > people to get right by hand. And I don't think it's wise to open up > support of other build systems without resolving that question at > least. > > Let me think on this some more. You've convinced me that the risk > isn't as high as I'd originally thought, but I suspect the PEP might > need a bit more explanatory context. > > Paul -- Nathaniel J. Smith -- https://vorpus.org From p.f.moore at gmail.com Tue Feb 9 19:09:55 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Feb 2016 00:09:55 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: [I need to read and digest the rest of this, but it's late here, so that will be tomorrow] On 9 February 2016 at 23:19, Robert Collins wrote: >>> Who / what tool do we expect to use the sdist command in the abstract interface? >> >> In the short term, "pip wheel foo". >> Longer term, if we want to remove that functionality from pip, then >> whatever command we create to replace it. > > So - to be clear - you're proposing that PR 3219 be merged? It or something like it is how I picture things working, yes. But as you say there's controversy around in-place builds, so I'm hardly proposing that we ignore that and merge 3219 without resolving those issues. Which may or may not be possible without someone rethinking. >> I do not accept any proposal that removes "pip wheel <project>" without >> providing *any* replacement. > > But the current proposal *DOES NOT REMOVE IT*. By <project> I had in mind "project name", implying "download from PyPI". And by "remove" I meant "open up the possibility of people using tools that don't support easy creation of source artifacts that can be uploaded to PyPI, resulting in pip not being able to find something to download".
> We're clearly miscommunicating about something :). Yes, and that's probably my fault. I need to go back and reread the PEP and the thread. But as I said in my response to Nathaniel, it may be that all that is needed is some context in the PEP explaining how we require[1] people to upload source to PyPI in the new world where we support build systems which don't have a "sdist" command like setuptools does. Paul [1] I say "require" in the sense of "you have to follow these rules if pip is to be able to use your source", not "you must upload source" - although I hope that the number of people actually preferring to *not* include source in their PyPI uploads is vanishingly small... From ben+python at benfinney.id.au Tue Feb 9 19:37:39 2016 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 10 Feb 2016 11:37:39 +1100 Subject: [Distutils] Ensuring source availability for PyPI entries (was: PEP: Build system abstraction for pip/conda etc) References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: <85vb5x763w.fsf_-_@benfinney.id.au> Paul Moore writes: > But as I said in my response to Nathaniel, it may be that all that is > needed is some context in the PEP explaining how we require[1] people > to upload source to PyPI in the new world where we support build > systems which don't have a "sdist" command like setuptools does. > > Paul > > [1] I say "require" in the sense of "you have to follow these rules if > pip is to be able to use your source", not "you must upload source" - > although I hope that the number of people actually preferring to *not* > include source in their PyPI uploads is vanishingly small... Don't underestimate the number of people who don't wish to put source in PyPI. Especially, don't rely merely on hope that such numbers remain vanishingly small. 
After all, as you may recall from the discussion in 2014 [0], there are core Python members who wish PyPI had remained an index only, with distributions allowed to have *no* files necessarily hosted at PyPI. It seems reasonable to infer that their preferences are shared by others, and some people would wish to avoid putting source for a distribution on PyPI if that were to become easier. [0] https://mail.python.org/pipermail/distutils-sig/2014-May/024224.html -- \ "DRM doesn't inconvenience [lawbreakers] - indeed, over time it | `\ trains law-abiding users to become [lawbreakers] out of sheer | _o__) frustration." -Charles Stross, 2010-05-09 | Ben Finney From njs at pobox.com Tue Feb 9 19:40:36 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 9 Feb 2016 16:40:36 -0800 Subject: [Distutils] Ensuring source availability for PyPI entries (was: PEP: Build system abstraction for pip/conda etc) In-Reply-To: <85vb5x763w.fsf_-_@benfinney.id.au> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> Message-ID: On Tue, Feb 9, 2016 at 4:37 PM, Ben Finney wrote: > Paul Moore writes: > >> But as I said in my response to Nathaniel, it may be that all that is >> needed is some context in the PEP explaining how we require[1] people >> to upload source to PyPI in the new world where we support build >> systems which don't have a "sdist" command like setuptools does. >> >> Paul >> >> [1] I say "require" in the sense of "you have to follow these rules if >> pip is to be able to use your source", not "you must upload source" - >> although I hope that the number of people actually preferring to *not* >> include source in their PyPI uploads is vanishingly small... > > Don't underestimate the number of people who don't wish to put source in > PyPI. Especially, don't rely merely on hope that such numbers remain > vanishingly small.
> > After all, as you may recall from the discussion in 2014 [0], there are > core Python members who wish PyPI had remained an index only, with > distributions allowed to have *no* files necessarily hosted at PyPI. > > It seems reasonable to infer that their preferences are shared by > others, and some people would wish to avoid putting source for a > distribution on PyPI if that were to become easier. > > > [0] https://mail.python.org/pipermail/distutils-sig/2014-May/024224.html This may or may not be the case, but it's an argument against allowing wheels on PyPI, not an argument against a system that makes it possible for more people to put sdists on PyPI. And it's the latter that's been under discussion here... -n -- Nathaniel J. Smith -- https://vorpus.org From robertc at robertcollins.net Tue Feb 9 20:19:02 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 10 Feb 2016 14:19:02 +1300 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 10 February 2016 at 13:09, Paul Moore wrote: > [I need to read and digest the rest of this, but it's late here, so > that will be tomorrow] OK, cool. > On 9 February 2016 at 23:19, Robert Collins wrote: >>>> Who / what tool do we expect to use the sdist command in the abstract interface? >>> I do not accept any proposal that removes "pip wheel " without >>> providing *any* replacement. >> >> But the current proposal *DOES NOT REMOVE IT*. > > By I had in mind "project name", implying "download from > PyPI". And by "remove" i meant "open up the possibility of people > using tools that don't support easy creation of source artifacts that > can be uploaded to PyPI, resulting in pip not being able to find > something to download". 
Sure - but Nathaniel and I both seem to think that the PEP doesn't make it any easier to do that - and in fact should allow flit to start uploading source artifacts (by allowing pip to consume its sdists), optionally with a setuptools_shim style setup.py. >> We're clearly miscommunicating about something :). > > Yes, and that's probably my fault. I need to go back and reread the > PEP and the thread. > > But as I said in my response to Nathaniel, it may be that all that is > needed is some context in the PEP explaining how we require[1] people > to upload source to PyPI in the new world where we support build > systems which don't have a "sdist" command like setuptools does. > > Paul > > [1] I say "require" in the sense of "you have to follow these rules if > pip is to be able to use your source", not "you must upload source" - > although I hope that the number of people actually preferring to *not* > include source in their PyPI uploads is vanishingly small... So, I'm not against us making a statement like that, but I don't think it belongs in this PEP - it should be in the main PyPI docs/rules, surely? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From mal at egenix.com Wed Feb 10 04:34:34 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 10 Feb 2016 10:34:34 +0100 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <85vb5x763w.fsf_-_@benfinney.id.au> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> Message-ID: <56BB042A.1010100@egenix.com> > Paul Moore writes: > >> But as I said in my response to Nathaniel, it may be that all that is >> needed is some context in the PEP explaining how we require[1] people >> to upload source to PyPI in the new world where we support build >> systems which don't have a "sdist" command like setuptools does.
>> >> Paul >> >> [1] I say "require" in the sense of "you have to follow these rules if >> pip is to be able to use your source", not "you must upload source" - >> although I hope that the number of people actually preferring to *not* >> include source in their PyPI uploads is vanishingly small... I'm not sure I'm parsing your comment correctly, but if you are suggesting that PyPI should no longer allow supporting non-open-source packages, this is definitely not going to happen. Python is free for everyone to use without any GPL-like restrictions, which is part of our big success, and our packaging environment has to follow the same principle. The attitude that some people in this discussion are showing does not align with those principles, which I find increasingly worrying. When discussing technicalities in this space, you always have to take the political implications into account as well. Back on topic: I don't think that the build system abstraction is moving in the right direction. Instead of coming up with yet another standard for build interfacing, we should simply pin down the commands and options that pip and other installers will want to see working with the standard setup.py command line interface we have. There aren't all that many - simply take what pip does now as minimal standard. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 10 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-01-19: Released eGenix pyOpenSSL 0.13.13 ... http://egenix.com/go86 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From p.f.moore at gmail.com Wed Feb 10 05:08:06 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Feb 2016 10:08:06 +0000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BB042A.1010100@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> Message-ID: On 10 February 2016 at 09:34, M.-A. Lemburg wrote: > I'm not sure I'm parsing your comment correctly, but if you are > suggesting that PyPI should no longer allow supporting > non-open-source packages, this is definitely not going to > happen. Not at all. Although as far as I know the number of closed-source packages on PyPI is vanishingly small... My concern is that we seem to be opening up the option of using non-setuptools build systems without having a good solution for people *wishing* to upload sources. It's more a matter of timing - if we allow people to use (say) flit for their builds then presumably a proportion of people will, because it's easier to use than setuptools, *for builds*. But those people will then find that distributing their sources isn't something that flit covers, so they'll make up their own approach (if it were me, I'd probably just point people at the project's github account). Once people get set up with a workflow that goes like this (build wheels and point people to github for source) it'll be harder to encourage them later to switch *back* to a process of uploading sources to PyPI. And that I do think is bad - that we end up pushing people who would otherwise happily use PyPI for source and binary hosting, to end up with a solution where they host binaries only on PyPI and make the source available via another (non-standardised) means. 
In no way though am I proposing that we stop people making deliberate choices on how they distribute their packages. Just that we make hosting both source and binaries on PyPI the "no friction" easy option for (the majority of?) people who don't really mind, and just want to make their work publicly available. Paul PS This has gone a long way off the topic of the build interface proposal, so I'm glad it's been spun off into its own thread. I'm now of the view that this relates at best peripherally to the build interface proposal, which I'll comment on in the other thread. From mal at egenix.com Wed Feb 10 05:23:55 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 10 Feb 2016 11:23:55 +0100 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> Message-ID: <56BB0FBB.3080105@egenix.com> On 10.02.2016 11:08, Paul Moore wrote: > On 10 February 2016 at 09:34, M.-A. Lemburg wrote: >> I'm not sure I'm parsing your comment correctly, but if you are >> suggesting that PyPI should no longer allow supporting >> non-open-source packages, this is definitely not going to >> happen. > > Not at all. That's good to know :-) > Although as far as I know the number of closed-source packages > on PyPI is vanishingly small... > > My concern is that we seem to be opening up the option of using > non-setuptools build systems without having a good solution for people > *wishing* to upload sources. It's more a matter of timing - if we > allow people to use (say) flit for their builds then presumably a > proportion of people will, because it's easier to use than setuptools, > *for builds*. 
But those people will then find > that distributing their > sources isn't something that flit covers, so they'll make up their own > approach (if it were me, I'd probably just point people at the > project's github account). > > Once people get set up with a workflow that goes like this (build > wheels and point people to github for source) it'll be harder to > encourage them later to switch *back* to a process of uploading > sources to PyPI. > > And that I do think is bad - that we end up pushing people who would > otherwise happily use PyPI for source and binary hosting, to end up > with a solution where they host binaries only on PyPI and make the > source available via another (non-standardised) means. That's a fair argument indeed. > In no way though am I proposing that we stop people making deliberate > choices on how they distribute their packages. Just that we make > hosting both source and binaries on PyPI the "no friction" easy option > for (the majority of?) people who don't really mind, and just want to > make their work publicly available. Well, you know, there's an important difference between making work publicly available and giving away the source code :-) But I get your point and do support it: It should be possible for package authors to use different build tools than distutils. IMO, that's easy to achieve, though, with the existing de-facto standard interface we already have: the setup.py command line API. We'd just need to publish the minimal set of commands and options that installers will want to see implemented in order to initiate the builds. > PS This has gone a long way off the topic of the build interface > proposal, so I'm glad it's been spun off into its own thread. I'm now > of the view that this relates at best peripherally to the build > interface proposal, which I'll comment on in the other thread.
-- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 10 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-01-19: Released eGenix pyOpenSSL 0.13.13 ... http://egenix.com/go86 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From p.f.moore at gmail.com Wed Feb 10 05:53:28 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Feb 2016 10:53:28 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 10 February 2016 at 01:19, Robert Collins wrote: > On 10 February 2016 at 13:09, Paul Moore wrote: >> [I need to read and digest the rest of this, but it's late here, so >> that will be tomorrow] > > OK, cool. Right, I've been thinking about this a bit more, and I've re-read the PEP. I wasn't at all clear in my concern - in my own mind, and as a consequence in how I explained the issue. My apologies. I don't have any problem with the basic proposed interface, and I see what you mean that it doesn't need to require a "sdist" command. I was wrong on that. But the proposal is written in terms of how pip "works from a Python source tree". The example used throughout the PEP is a local source directory. While that's an easy and understandable example to use, it glosses over the detail of how pip will *get* such a source tree. 
What the PEP omits, it seems to me, is an explanation of how pip will get a "source tree" in the form defined by the PEP. In essence, there is no transition plan in the PEP. At present, pip sets up a source tree by downloading a sdist from PyPI and unpacking it. The PEP explains that sources without a pypa.json should be treated as current setuptools-style sdists for backward compatibility, but offers no guidance on how projects (or build tools, on behalf of projects that use them) can transition to a pypa.json-based approach. That's really what I'm trying to get at with my focus on "source distributions". As a project author, how do I switch to this new format? What do I publish? Publishing binaries in the form of wheels is irrelevant - projects using flit or other build tools can do that right now. This PEP is entirely about enabling them to publish *source* using new build tools, and yet it doesn't cover the publishing side at all. So while the PEP is fine, it's incomplete from the user's point of view, and my concern is that if we finalise it in its current form, people will have to roll their own publishing solutions, and we'll then be playing "catch up" trying to define a standard that takes into account the approaches people have developed in the meantime. And as a user, I'll then be left with projects I can't easily build wheels for. It's all very well to say "the project should build wheels", and I wish they would, but not all do or will - I care because one of my ongoing side projects is an automated wheel build farm (for simple cases only, don't anyone get their hopes up! ;-)) and being able to build wheels in a standard way is key to that. We don't have to solve the whole "sdist 2.0" issue right now. 
Simply saying that in order to publish pypa.json-based source trees you need to zip up the source directory, name the file "project-version.zip" and upload to PyPI, would be sufficient as a short-term answer (assuming that this *would* be a viable "source file" that pip could use - and I must be clear that I *haven't checked this*!!!) until something like Nathaniel's source distribution proposal, or a full-blown sdist-2.0 spec, is available. We'd need to support whatever stopgap proposal we recommend for backward compatibility in those new proposals, but that's a necessary cost of not wanting to delay the current PEP on those other ones. Hope this is clearer. Paul From p.f.moore at gmail.com Wed Feb 10 06:10:40 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Feb 2016 11:10:40 +0000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BB0FBB.3080105@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> Message-ID: On 10 February 2016 at 10:23, M.-A. Lemburg wrote: > IMO, that's easy to achieve, though, with the existing de-facto > standard interface we already have: the setup.py command line API. > We'd just need to publish the minimal set of commands and options, > installer will want to see implemented in order to initiate > the builds. No-one who's investing time in writing PEPs is willing to thrash out the details of how to use the setup.py interface in a formal proposal that sticks to the sort of "minimum required" spec that alternative tools would be willing to support. And there's no indication that tool developers are willing to implement a setup.py compatible interface format as you suggest. And finally, you'd need a way to declare that pip installs tool X before trying to run setup.py. 
So "easy to achieve" still needs someone to take the time to deal with these sorts of issue. It's the usual process of the people willing to put in the effort get to choose the direction (which is also why I just provide feedback, and don't tend to offer my own proposals, because I'm not able to commit that sort of time). Paul From mal at egenix.com Wed Feb 10 07:21:31 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 10 Feb 2016 13:21:31 +0100 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> Message-ID: <56BB2B4B.1040806@egenix.com> On 10.02.2016 12:10, Paul Moore wrote: > On 10 February 2016 at 10:23, M.-A. Lemburg wrote: >> IMO, that's easy to achieve, though, with the existing de-facto >> standard interface we already have: the setup.py command line API. >> We'd just need to publish the minimal set of commands and options, >> installer will want to see implemented in order to initiate >> the builds. > > No-one who's investing time in writing PEPs is willing to thrash out > the details of how to use the setup.py interface in a formal proposal > that sticks to the sort of "minimum required" spec that alternative > tools would be willing to support. And there's no indication that tool > developers are willing to implement a setup.py compatible interface > format as you suggest. And finally, you'd need a way to declare that > pip installs tool X before trying to run setup.py. I don't think that installing 3rd party tools is within the scope of such a proposal. The setup.py of packages using such tools would have to either define a dependency to have the installer get the extra tool, download and install it directly when needed, or tell the user how to install the tool. 
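A self-bootstrapping setup.py along the lines described above might be sketched roughly as follows (a sketch only: `mybuildtool` and its `main()` entry point are hypothetical names, not a real package):

```python
# Sketch of a self-bootstrapping setup.py: import the build tool,
# installing it on demand first.  "mybuildtool" is a hypothetical name.
import importlib
import subprocess
import sys


def ensure_tool(name):
    """Return the named module, installing it with pip if it is missing."""
    try:
        return importlib.import_module(name)
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])
        return importlib.import_module(name)


# A real shim would then hand the setup.py command line over to the tool:
#   tool = ensure_tool("mybuildtool")
#   tool.main(sys.argv[1:])
```

Note that this assumes pip is available in, and may modify, the build environment, which is part of why the approach is contentious.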
Alternatively, the package distro could simply ship the tool embedded in the package. That's what we're doing with mxSetup.py. > So "easy to achieve" still needs someone to take the time to deal with > these sorts of issue. It's the usual process of the people willing to > put in the effort get to choose the direction (which is also why I > just provide feedback, and don't tend to offer my own proposals, > because I'm not able to commit that sort of time). Wait. You are missing the point that the setup.py interface already does work, so no extra effort is needed. All that's needed is some documentation of what's currently being used, so that other tools can support the interface going forward. At the moment, this interface is only defined by "what pip uses" and that's a moving target. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 10 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-01-19: Released eGenix pyOpenSSL 0.13.13 ... http://egenix.com/go86 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math.
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From oscar.j.benjamin at gmail.com Wed Feb 10 08:00:47 2016 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 10 Feb 2016 13:00:47 +0000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BB2B4B.1040806@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> Message-ID: On 10 February 2016 at 12:21, M.-A. Lemburg wrote: >> So "easy to achieve" still needs someone to take the time to deal with >> these sorts of issue. It's the usual process of the people willing to >> put in the effort get to choose the direction (which is also why I >> just provide feedback, and don't tend to offer my own proposals, >> because I'm not able to commit that sort of time). > > Wait. You are missing the point that the setup.py interface > already does work, so no extra effort is needed. All that's > needed is some documentation of what's currently being used, > so that other tools can support the interface going forward. You can see an example of a minimal setup.py file here: https://github.com/oscarbenjamin/setuppytest/blob/master/setuppytest/setup.py I wrote that some time ago and don't know if it still works (that's the problem with just having a de facto standard). > At the moment, pip this interface is only defined by > "what pip uses" and that's a moving target. The setup.py interface is a terrible interface for tools like pip to use and for tools like flit to emulate. Currently what pip does is to invoke $ python setup.py egg_info --egg-base $TEMPDIR to get the metadata. It is not possible to get the metadata without executing the setup.py which is problematic for many applications. 
Providing a static pypa.json file is much better: tools can read a static file to get the metadata. To install a distribution pip runs: $ python setup.py install --record $RECORD_FILE \ --single-version-externally-managed So the setup.py is entirely responsible not just for building but also for installing everything. This makes it very difficult to develop a system where different installer tools and different build tools can cooperate to allow end users to specify installation options. It also means that the installer has no direct control over where any of the files are installed. If you were designing this from scratch then there are some obvious things that you would want to do differently here. The setup.py interface also has so much legacy usage that it's difficult for setuptools and pip to evolve. The idea with this proposal is to decouple things by introducing a new interface with well defined and sensible behaviour. -- Oscar From ncoghlan at gmail.com Wed Feb 10 08:14:39 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 10 Feb 2016 23:14:39 +1000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BB2B4B.1040806@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> Message-ID: On 10 February 2016 at 22:21, M.-A. Lemburg wrote: > Wait. You are missing the point that the setup.py interface > already does work, so no extra effort is needed. All that's > needed is some documentation of what's currently being used, > so that other tools can support the interface going forward. One of the key points of the proposal is to be able to write *one* setuptools/distutils shim, and then never having to write another one, regardless of how many build systems people come up with [1]. 
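A minimal sketch of that single write-once shim might look like this (assumptions: the static config file is named pypa.json as in the draft, and a `build_command` key lists the tool's command line - the draft's exact schema was still under discussion):

```python
# Sketch of a generic setup.py shim: read the static build configuration
# and delegate the requested command to the project's declared build tool.
import json
import subprocess
import sys


def build_invocation(config, setup_args):
    """Combine the declared build command with the setup.py arguments."""
    return list(config["build_command"]) + list(setup_args)


def main():
    # "build_command" is an assumed key name, not the PEP's final schema.
    with open("pypa.json") as f:
        config = json.load(f)
    # e.g. "python setup.py bdist_wheel -d dist" becomes
    # "<tool> bdist_wheel -d dist"
    sys.exit(subprocess.call(build_invocation(config, sys.argv[1:])))

# main() would run when an installer invokes "python setup.py <command>".
```

Because such a shim is identical for every project, an sdist-building tool could inject it automatically instead of each project maintaining its own setup.py.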
The build system abstraction PEP itself comes from figuring out what pip needs (i.e. the "minimal interface" you're after), and documenting that specifically, without the distracting noise that comes from documenting it in terms of "how pip calls setup.py" (which includes things like passing "--single-version-externally-managed", which only makes sense in the context of setuptools originally being designed to serve the needs of the Chandler project). Cheers, Nick. [1] If we never end up with a build system called "rennet", I am going to be most disappointed :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Feb 10 08:23:49 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 10 Feb 2016 23:23:49 +1000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 10 February 2016 at 20:53, Paul Moore wrote: > We don't have to solve the whole "sdist 2.0" issue right now. Simply > saying that in order to publish pypa.json-based source trees you need > to zip up the source directory, name the file "project-version.zip" > and upload to PyPI, would be sufficient as a short-term answer > (assuming that this *would* be a viable "source file" that pip could > use - and I must be clear that I *haven't checked this*!!!) until > something like Nathaniel's source distribution proposal, or a > full-blown sdist-2.0 spec, is available. We'd need to support whatever > stopgap proposal we recommend for backward compatibility in those new > proposals, but that's a necessary cost of not wanting to delay the > current PEP on those other ones. 
One of the reasons I went ahead and created the specifications page at https://packaging.python.org/en/latest/specifications/ was to let us tweak interoperability requirements as needed, without wasting people's time with excessive PEP wrangling by requiring a separate PEP for each interface affected by a proposal. In this case, the build system abstraction PEP should propose some additional text for https://packaging.python.org/en/latest/specifications/#source-distribution-format defining how to publish source archives containing a pypa.json file and the setup.py shim. At that point, it will effectively become the spec for sdist 1.0, since that's never previously been officially defined. The key difference from setuptools is that the setup.py shim will be a standard one that flit (and other source archive creation tools) can inject when building the sdist, rather than needing to be a custom file stored in each project's source repository. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mal at egenix.com Wed Feb 10 08:36:36 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 10 Feb 2016 14:36:36 +0100 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> Message-ID: <56BB3CE4.2040906@egenix.com> On 10.02.2016 14:00, Oscar Benjamin wrote: > On 10 February 2016 at 12:21, M.-A. Lemburg wrote: >>> So "easy to achieve" still needs someone to take the time to deal with >>> these sorts of issue. It's the usual process of the people willing to >>> put in the effort get to choose the direction (which is also why I >>> just provide feedback, and don't tend to offer my own proposals, >>> because I'm not able to commit that sort of time). >> >> Wait. 
You are missing the point that the setup.py interface >> already does work, so no extra effort is needed. All that's >> needed is some documentation of what's currently being used, >> so that other tools can support the interface going forward. > > You can see an example of a minimal setup.py file here: > > https://github.com/oscarbenjamin/setuppytest/blob/master/setuppytest/setup.py > > I wrote that some time ago and don't know if it still works (that's > the problem with just having a de facto standard). Agreed, and something that we should address in a PEP. >> At the moment, pip this interface is only defined by >> "what pip uses" and that's a moving target. > > The setup.py interface is a terrible interface for tools like pip to > use and for tools like flit to emulate. I'm not saying that it's a great interface, but it's one that by far most sdists out there support. > Currently what pip does is to > invoke > > $ python setup.py egg_info --egg-base $TEMPDIR > > to get the metadata. It is not possible to get the metadata without > executing the setup.py which is problematic for many applications. > Providing a static pypa.json file is much better: tools can read a > static file to get the metadata. Depends on which kind of meta data you're after. sdist packages do include the static PKG-INFO file which has the version 1.0 meta data. This doesn't include dependencies or namespace details, but it does have important data such as version, package name, description, etc. > To install a distribution pip runs: > > $ python setup.py install --record $RECORD_FILE \ > --single-version-externally-managed > > So the setup.py is entirely responsible not just for building but also > for installing everything. This makes it very difficult to develop a > system where different installer tools and different build tools can > cooperate to allow end users to specify installation options. 
> It also means that the installer has no direct control over where any of the files are installed. Why is that? The install command is very flexible in allowing you to define where the various parts are installed. When defining a minimal set of supported options, the various --install-* options should be part of this. It would also be possible to separate the build and install steps, since distutils is well capable of doing this. However, I'm not sure where this aspect fits in relation to the proposed PEP, since it is targeting the operation of building the package and wrapping it into a wheel file, so the bdist_wheel command would have to be used instead. pip wheel pkg runs this command: python setup.py bdist_wheel -d targetdir > If you were designing this from scratch then there are some obvious > things that you would want to do differently here. The setup.py > interface also has so much legacy usage that it's difficult for > setuptools and pip to evolve. The idea with this proposal is to > decouple things by introducing a new interface with well defined and > sensible behaviour. In the end, you'll just be defining a different standard to express the same thing in different ways. The setup.py interface was never designed with integration in mind (only with the idea to provide an extensible interface; and I'm not going to get into the merits of setuptools additions such as --single-version-externally-managed :-)) but it's still quite usable for the intended purpose. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 10 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-01-19: Released eGenix pyOpenSSL 0.13.13 ...
http://egenix.com/go86 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From p.f.moore at gmail.com Wed Feb 10 08:43:06 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Feb 2016 13:43:06 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 10 February 2016 at 13:23, Nick Coghlan wrote: > On 10 February 2016 at 20:53, Paul Moore wrote: >> We don't have to solve the whole "sdist 2.0" issue right now. Simply >> saying that in order to publish pypa.json-based source trees you need >> to zip up the source directory, name the file "project-version.zip" >> and upload to PyPI, would be sufficient as a short-term answer >> (assuming that this *would* be a viable "source file" that pip could >> use - and I must be clear that I *haven't checked this*!!!) until >> something like Nathaniel's source distribution proposal, or a >> full-blown sdist-2.0 spec, is available. We'd need to support whatever >> stopgap proposal we recommend for backward compatibility in those new >> proposals, but that's a necessary cost of not wanting to delay the >> current PEP on those other ones. > > One of the reasons I went ahead and created the specifications page at > https://packaging.python.org/en/latest/specifications/ was to let us > tweak interoperability requirements as needed, without wasting > people's time with excessive PEP wrangling by requiring a separate PEP > for each interface affected by a proposal. 
> > In this case, the build system abstraction PEP should propose some > additional text for > https://packaging.python.org/en/latest/specifications/#source-distribution-format > defining how to publish source archives containing a pypa.json file > and the setup.py shim. At that point, it will effectively become the > spec for sdist 1.0, since that's never previously been officially > defined. > > The key difference from setuptools is that the setup.py shim will be a > standard one that flit (and other source archive creation tools) can > inject when building the sdist, rather than needing to be a custom > file stored in each project's source repository. Neat! That sounds like a sensible approach, and if the build system abstraction PEP adds this, then that addresses my remaining objections. Paul From p.f.moore at gmail.com Wed Feb 10 08:52:14 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Feb 2016 13:52:14 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 10 February 2016 at 13:43, Paul Moore wrote: >> In this case, the build system abstraction PEP should propose some >> additional text for >> https://packaging.python.org/en/latest/specifications/#source-distribution-format >> defining how to publish source archives containing a pypa.json file >> and the setup.py shim. At that point, it will effectively become the >> spec for sdist 1.0, since that's never previously been officially >> defined. >> >> The key difference from setuptools is that the setup.py shim will be a >> standard one that flit (and other source archive creation tools) can >> inject when building the sdist, rather than needing to be a custom >> file stored in each project's source repository. > > Neat! That sounds like a sensible approach, and if the build system > abstraction PEP adds this, then that addresses my remaining > objections. 
We should probably also check with the flit people that the proposed approach works for them. (Are there any other alternative build systems apart from flit that exist at present?) Paul From dholth at gmail.com Wed Feb 10 08:52:56 2016 From: dholth at gmail.com (Daniel Holth) Date: Wed, 10 Feb 2016 13:52:56 +0000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> Message-ID: Let me speak up about a different and pressing problem: the problem of source code that is not distributed with a GNU automake script. First, any alleged "software" that doesn't use GNU automake is not real and/or should be considered closed source. Second, automake is the best build system that I can imagine. Third, WeirdHat Linux does not understand how to package software that uses CMake. Q.E.D. From cournape at gmail.com Wed Feb 10 09:30:25 2016 From: cournape at gmail.com (David Cournapeau) Date: Wed, 10 Feb 2016 14:30:25 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On Wed, Feb 10, 2016 at 1:52 PM, Paul Moore wrote: > On 10 February 2016 at 13:43, Paul Moore wrote: > >> In this case, the build system abstraction PEP should propose some > >> additional text for > >> https://packaging.python.org/en/latest/specifications/#source-distribution-format > >> defining how to publish source archives containing a pypa.json file > >> and the setup.py shim. At that point, it will effectively become the > >> spec for sdist 1.0, since that's never previously been officially > >> defined.
> >> > >> The key difference from setuptools is that the setup.py shim will be a > >> standard one that flit (and other source archive creation tools) can > >> inject when building the sdist, rather than needing to be a custom > >> file stored in each project's source repository. > > > > Neat! That sounds like a sensible approach, and if the build system > > abstraction PEP adds this, then that addresses my remaining > > objections. > > We should probably also check with the flit people that the proposed > approach works for them. (Are there any other alternative build > systems apart from flit that exist at present?) > I am not working on it ATM, but bento was fairly complete and could interoperate w/ pip (a few years ago at least): https://cournape.github.io/Bento/ > > Paul From robertc at robertcollins.net Wed Feb 10 13:42:02 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 11 Feb 2016 07:42:02 +1300 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BB2B4B.1040806@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> Message-ID: On 11 February 2016 at 01:21, M.-A. Lemburg wrote: > On 10.02.2016 12:10, Paul Moore wrote: >> On 10 February 2016 at 10:23, M.-A. Lemburg wrote: >>> IMO, that's easy to achieve, though, with the existing de-facto >>> standard interface we already have: the setup.py command line API. >>> We'd just need to publish the minimal set of commands and options, >>> installer will want to see implemented in order to initiate >>> the builds.
>> >> No-one who's investing time in writing PEPs is willing to thrash out >> the details of how to use the setup.py interface in a formal proposal >> that sticks to the sort of "minimum required" spec that alternative >> tools would be willing to support. And there's no indication that tool >> developers are willing to implement a setup.py compatible interface >> format as you suggest. And finally, you'd need a way to declare that >> pip installs tool X before trying to run setup.py. > > I don't think that installing 3rd party tools is within the scope > of such a proposal. The setup.py of packages using such tools would > have to either define a dependency to have the installer get the > extra tool, download and install it directly when needed, or tell > the user how to install the tool. > > Alternatively, the package distro could simply ship the tool > embedded in the package. That's what we're doing with > mxSetup.py. > >> So "easy to achieve" still needs someone to take the time to deal with >> these sorts of issue. It's the usual process of the people willing to >> put in the effort get to choose the direction (which is also why I >> just provide feedback, and don't tend to offer my own proposals, >> because I'm not able to commit that sort of time). > > Wait. You are missing the point that the setup.py interface > already does work, so no extra effort is needed. All that's > needed is some documentation of what's currently being used, > so that other tools can support the interface going forward. > > At the moment, this interface is only defined by > "what pip uses" and that's a moving target. I disagree with the claim that setup.py already works. Right now the vast majority of setup.py's will end up invoking easy-install, which has entirely separate configuration to pip for networking. It also means that pip is utterly incapable of caching and reusing setup_requires dependencies, so when e.g.
numpy is a setup-requires dependency, build times can be arbitrarily high as numpy gets built 3, 4, 5 or more times. Secondly, the setup.py interface includes 'install' as a verb, and there is pretty solid consensus that that is a bug - any definition of 'what pip uses' has to include that as a fallback, for the same reasons pip has to fallback to that - but there is a substantial difference between the end user UX setuptools provides, the interface pip is forced to use, and the smaller one pip would /like/ to use. So all the variants of the PEP we've been discussing are about a modest step forward, capturing pip's existing needs and addressing the setup-requires/easy-install bug as well as the use of install bug. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Wed Feb 10 13:46:02 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 11 Feb 2016 07:46:02 +1300 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BB3CE4.2040906@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> Message-ID: On 11 February 2016 at 02:36, M.-A. Lemburg wrote: >> Currently what pip does is to >> invoke >> >> $ python setup.py egg_info --egg-base $TEMPDIR >> >> to get the metadata. It is not possible to get the metadata without >> executing the setup.py which is problematic for many applications. >> Providing a static pypa.json file is much better: tools can read a >> static file to get the metadata. > > Depends on which kind of meta data you're after. sdist packages > do include the static PKG-INFO file which has the version 1.0 > meta data. 
This doesn't include dependencies or namespace > details, but it does have important data such as version, > package name, description, etc. For pip to use it, it needs to include - reliably - version, name, and dependencies; for it to be in any way better, we also need setup-requires or a functional equivalent. Today, we can't rely on the PKG-INFO being complete, so we assume they are all wrong and start over. One of the things we'll get by being strict in subsequent iterations is the ability to rely on things. > In the end, you'll just be defining a different standard > to express the same thing in different ways. > > The setup.py interface was never designed with integration in mind > (only with the idea to provide an extensible interface; and I'm not > going get into the merits of setuptools additions such as > --single-version-externally-managed :-)) but it's still > quite usable for the intended purpose. However we *are defining an integration point*, which is yet another reason not to use the setup.py interface. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From doko at ubuntu.com Wed Feb 10 15:18:40 2016 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 10 Feb 2016 21:18:40 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <945A58B8-87DC-42DD-B215-6DCB0B047B9E@stufft.io> References: <56AFEC55.30706@ubuntu.com> <945A58B8-87DC-42DD-B215-6DCB0B047B9E@stufft.io> Message-ID: <56BB9B20.2030709@ubuntu.com> On 02.02.2016 01:30, Donald Stufft wrote: > >> On Feb 1, 2016, at 6:37 PM, Matthias Klose wrote: >> >> On 30.01.2016 00:29, Nathaniel Smith wrote: >>> Hi all, >>> >>> I think this is ready for pronouncement now -- thanks to everyone for >>> all their feedback over the last few weeks! >> >> I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. 
So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). >> >> The proposal just takes some environment and declares that as a standard. So everybody wanting to supply these wheels basically has to use this environment. Without giving any details, without giving any advice how to produce such wheels in other environments. Without giving any hints how such wheels may be broken with newer environments. > > I'm not sure this is true. It tells you exactly what versions of glibc and other libraries it is allowed to link against. It can link against older if it wants, it can't link against newer. > >> Without mentioning this is amd64/i386 only. > > First sentence: This PEP proposes the creation of a new platform tag for Python package built distributions, such as wheels, called manylinux1_{x86_64,i686} with external dependencies limited to a standardized, restricted subset of the Linux kernel and core userspace ABI. I read "such as wheels, called manylinux1_{x86_64,i686}" as not limited to such platforms. > Later on: Because CentOS 5 is only available for x86_64 and i686 architectures, these are the only architectures currently supported by the manylinux1 policy. sorry, didn't see that. > I think it's a reasonable policy too, AMD64 is responsible for an order of magnitude more downloads than all other architectures on Linux combined (71,424,040 vs 1,086,527 in current data set). If you compare AMD64+i386 against everything else then you're looking at two orders of magnitude (72,142,511 vs 368,056). I think we can live with a solution that covers 99.5% of all Linux downloads from PyPI. But then why call it manylinux instead of centos5? You build it on this OS, you expect others to build it on this OS. just name it what it is. >> There might be more. Pretty please be specific about your environment. Have a look how the LSB specifies requirements on the runtime environment ...
and then ask yourself why the lsb doesn't have any real value. >> > Instead of vague references to the LSB, can you tell us why you think the LSB doesn't have any real value, and importantly, how that relates to trying to determine a minimum set of binary ABI. In addition, can you clarify why, if your assertion is this isn't going to work, can you state why it won't work when it is working for many users in the wild through Anaconda, Enthought, the Holy Build Box, etc? I'm not seeing the LSB used in real life (anymore), and not any recent updates. Furthermore, LSB packages were removed in Debian [1]. [1] https://lists.debian.org/debian-lsb/2015/07/msg00000.html, https://lists.debian.org/debian-lsb/2015/07/msg00002.html From donald at stufft.io Wed Feb 10 15:26:57 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 10 Feb 2016 15:26:57 -0500 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56BB9B20.2030709@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> <945A58B8-87DC-42DD-B215-6DCB0B047B9E@stufft.io> <56BB9B20.2030709@ubuntu.com> Message-ID: > On Feb 10, 2016, at 3:18 PM, Matthias Klose wrote: > > But then why call it manylinux instead of centos5? You build it on this OS, you expect others to build it on this OS. just name it what it is. Because this is a very specific subset of CentOS 5 that has shown to, in practice, work cross distro into the vast majority of glibc using Linux distributions. The idea here is that if you restrict yourself to this subset of libraries, you'll produce a wheel that can be installed on say, Debian (assuming a recent enough Debian is used, which is why we're using the very ancient CentOS 5, so it's sufficiently old) and not have ABI issues to contend with. "manylinux" is a nicer name than "builtoncentos5butinawaythatisusableonmanyotherlinuxsystems".
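The cross-distro claim above is mechanically checkable: PEP 513 has installers compare the runtime glibc version against 2.5 (the glibc CentOS 5 shipped). A minimal sketch of that comparison, covering only the version parsing (the PEP's reference code obtains the running version via ctypes and gnu_get_libc_version()):

```python
import re

def glibc_version_ok(version_str, required=(2, 5)):
    """Return True if a glibc version string like "2.17" satisfies the
    manylinux1 minimum (glibc >= 2.5, per PEP 513)."""
    m = re.match(r"(\d+)\.(\d+)", version_str)
    if m is None:
        return False  # unparsable version string: assume incompatible
    return (int(m.group(1)), int(m.group(2))) >= required

print(glibc_version_ok("2.17"))  # a typical modern distro
print(glibc_version_ok("2.4"))   # older than CentOS 5's glibc
```

Wheels built against the older symbols run anywhere the check passes, which is why "sufficiently old" is the whole trick.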
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From doko at ubuntu.com Wed Feb 10 15:32:36 2016 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 10 Feb 2016 21:32:36 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <945A58B8-87DC-42DD-B215-6DCB0B047B9E@stufft.io> <56BB9B20.2030709@ubuntu.com> Message-ID: <56BB9E64.9020600@ubuntu.com> On 10.02.2016 21:26, Donald Stufft wrote: > >> On Feb 10, 2016, at 3:18 PM, Matthias Klose wrote: >> >> But then why call it manylinux instead of centos5? You build it on this OS, you expect others to build it on this OS. just name it what it is. > > > Because this is a very specific subset of CentOS 5 that has shown to, in practice, work cross distro into the vast majority of glibc using Linux distributions. The idea here is that if you restrict yourself to this subset of libraries, you'll produce a wheel that can be installed on say, Debian (assuming a recent enough Debian is used, which is why we're using the very ancient CentOS 5, so it's sufficiently old) and not have ABI issues to contend with. > > "manylinux" is a nicer name than "builtoncentos5butinawaythatisusableonmanyotherlinuxsystems". the python community has a "good" history calling things if they don't like them. the longest option names were invented by the setuptools maintainers.
From barry at python.org Wed Feb 10 17:12:02 2016 From: barry at python.org (Barry Warsaw) Date: Wed, 10 Feb 2016 17:12:02 -0500 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> Message-ID: <20160210171202.2508a6cf@anarchist.wooz.org> On Feb 10, 2016, at 10:08 AM, Paul Moore wrote: >But those people will then find that distributing their sources isn't >something that flit covers, so they'll make up their own approach (if it were >me, I'd probably just point people at the project's github account). > >Once people get set up with a workflow that goes like this (build >wheels and point people to github for source) it'll be harder to >encourage them later to switch *back* to a process of uploading >sources to PyPI. > >And that I do think is bad - that we end up pushing people who would >otherwise happily use PyPI for source and binary hosting, to end up >with a solution where they host binaries only on PyPI and make the >source available via another (non-standardised) means. That worries me a lot. Think of the downstream consumers who aren't end users, e.g. Linux distro developers. Some distros have strict requirements on the availability of the source, reproducibility of builds, and so on, along with stacks of tooling that are built on downloading tarballs from PyPI. It's not impossible to migrate to something else, but it's impractical to migrate to dozens of something elses. Right now, if we can count on PyPI having the source in an easily consumable lowest common denominator format, the friction of providing those packages to *our* end users, and updating them in a timely manner, is often minimal. Changing that ecosystem upstream of us, either deliberately or otherwise, will likely result in more out of date packages in the distros. 
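The "lowest common denominator" workflow Barry describes boils down to locating the sdist among a release's files. A minimal sketch, where the simplified two-key dicts stand in for the per-file metadata PyPI's JSON API reports (which does label sdists with packagetype "sdist"):

```python
def find_sdist(release_files):
    """Return the first sdist entry from a release's file listing,
    or None when the release ships wheels only."""
    for entry in release_files:
        if entry.get("packagetype") == "sdist":
            return entry
    return None

# Simplified stand-in for what the JSON API returns per file.
files = [
    {"packagetype": "bdist_wheel", "filename": "demo-1.0-py2.py3-none-any.whl"},
    {"packagetype": "sdist", "filename": "demo-1.0.tar.gz"},
]
print(find_sdist(files)["filename"])  # demo-1.0.tar.gz
```

When `find_sdist` returns None for a release, downstream tooling hits exactly the friction Barry is warning about.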
Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Wed Feb 10 22:48:43 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Feb 2016 13:48:43 +1000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <20160210171202.2508a6cf@anarchist.wooz.org> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <20160210171202.2508a6cf@anarchist.wooz.org> Message-ID: On 11 February 2016 at 08:12, Barry Warsaw wrote: > It's not impossible to migrate to something else, but it's impractical to > migrate to dozens of something elses. Right now, if we can count on PyPI > having the source in an easily consumable lowest common denominator format, > the friction of providing those packages to *our* end users, and updating them > in a timely manner, is often minimal. Changing that ecosystem upstream of us, > either deliberately or otherwise, will likely result in more out of date > packages in the distros. One of my own overarching goals in all this is to help facilitate utilities like pyp2rpm and py2dsc being able to produce policy compliant distro packages from upstream Python projects *without* any manual fiddling (at least in the case of pure Python modules and packages, and hopefully eventually for extension modules as well), so I'm definitely keeping an eye on the "easy, reliable and automatable access to source code" aspect. Maven (and Maven Central) don't actively encourage machine readable access to source code for published binary artifacts, which turns out to be one of the key problems that makes integrating the JVM ecosystem into Linux distributions a bit of a horror show. 
Improvements in Linux container tech help with that by putting up a clear "Somebody Else's Problem" field at the container boundary, but it's still a workaround for the historical lack of effective collaboration between the two communities, rather than a real solution. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Feb 10 22:58:24 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Feb 2016 13:58:24 +1000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <20160210171202.2508a6cf@anarchist.wooz.org> Message-ID: On 11 February 2016 at 13:48, Nick Coghlan wrote: > On 11 February 2016 at 08:12, Barry Warsaw wrote: >> It's not impossible to migrate to something else, but it's impractical to >> migrate to dozens of something elses. Right now, if we can count on PyPI >> having the source in an easily consumable lowest common denominator format, >> the friction of providing those packages to *our* end users, and updating them >> in a timely manner, is often minimal. Changing that ecosystem upstream of us, >> either deliberately or otherwise, will likely result in more out of date >> packages in the distros. > > One of my own overarching goals in all this is to help facilitate > utilities like pyp2rpm and py2dsc Hmm, I got the py2dsc reference from https://wiki.debian.org/Python/Packaging but the newer https://wiki.debian.org/Python/LibraryStyleGuide doesn't appear to mention any particular way of generating the initial packaging skeleton from the upstream project. 
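For readers unfamiliar with the two converters Nick names: both are driven from a single command. The invocations sketched below are approximate and from memory (check each tool's --help before relying on them):

```python
def skeleton_command(distro, source):
    """Build the (approximate) command line that turns an upstream
    Python project into a distro packaging skeleton."""
    if distro == "fedora":
        return ["pyp2rpm", source]   # source: a PyPI project name
    if distro == "debian":
        return ["py2dsc", source]    # source: a path to an sdist tarball
    raise ValueError("no converter known for %r" % distro)

print(skeleton_command("fedora", "requests"))
print(skeleton_command("debian", "requests-2.9.1.tar.gz"))
```

Both converters depend on the sdist carrying trustworthy metadata, which is why the source-availability discussion above matters to them.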
Anyway, the core point is wanting to ensure we can automate not only "direct to binary" installation with Python specific tools, but also the "convert to alternate source archive format and build from there" workflows needed by redistributor ecosystems like Linux distros, conda, Canopy, PyPM, Nix, etc. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ralf.gommers at gmail.com Thu Feb 11 02:43:51 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 11 Feb 2016 08:43:51 +0100 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On Wed, Feb 10, 2016 at 3:30 PM, David Cournapeau wrote: > > > > On Wed, Feb 10, 2016 at 1:52 PM, Paul Moore wrote: > >> We should probably also check with the flit people that the proposed >> approach works for them. (Are there any other alternative build >> systems apart from flit that exist at present?) >> > > I am not working on it ATM, but bento was fairly complete and could > interoperate w/ pip (a few years ago at least): > https://cournape.github.io/Bento/ > I plan to test with Bento (I'm still using it almost daily to work on Scipy) when an implementation is proposed for pip. The interface in the PEP is straightforward though, I don't see any fundamental reason why it wouldn't work for Bento if it works for flit. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Thu Feb 11 02:50:29 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 11 Feb 2016 08:50:29 +0100 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On Wed, Feb 10, 2016 at 2:43 PM, Paul Moore wrote: > On 10 February 2016 at 13:23, Nick Coghlan wrote: > > On 10 February 2016 at 20:53, Paul Moore wrote: > >> We don't have to solve the whole "sdist 2.0" issue right now. Simply > >> saying that in order to publish pypa.json-based source trees you need > >> to zip up the source directory, name the file "project-version.zip" > >> and upload to PyPI, would be sufficient as a short-term answer > >> (assuming that this *would* be a viable "source file" that pip could > >> use - and I must be clear that I *haven't checked this*!!!) > This is exactly what pip itself does right now for "pip install .", so clearly it is viable. until > >> something like Nathaniel's source distribution proposal, or a > >> full-blown sdist-2.0 spec, is available. We'd need to support whatever > >> stopgap proposal we recommend for backward compatibility in those new > >> proposals, but that's a necessary cost of not wanting to delay the > >> current PEP on those other ones. > > > > One of the reasons I went ahead and created the specifications page at > > https://packaging.python.org/en/latest/specifications/ was to let us > > tweak interoperability requirements as needed, without wasting > > people's time with excessive PEP wrangling by requiring a separate PEP > > for each interface affected by a proposal. > > > > In this case, the build system abstraction PEP should propose some > > additional text for > > > https://packaging.python.org/en/latest/specifications/#source-distribution-format > > defining how to publish source archives containing a pypa.json file > > and the setup.py shim. > The setup.py shim should be optional right? 
If a package author decides to not care about older pip versions, then the shim isn't needed. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Feb 11 05:16:57 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Feb 2016 20:16:57 +1000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 11 February 2016 at 17:50, Ralf Gommers wrote: > On Wed, Feb 10, 2016 at 2:43 PM, Paul Moore wrote: >> >> On 10 February 2016 at 13:23, Nick Coghlan wrote: >> > On 10 February 2016 at 20:53, Paul Moore wrote: >> >> We don't have to solve the whole "sdist 2.0" issue right now. Simply >> >> saying that in order to publish pypa.json-based source trees you need >> >> to zip up the source directory, name the file "project-version.zip" >> >> and upload to PyPI, would be sufficient as a short-term answer >> >> (assuming that this *would* be a viable "source file" that pip could >> >> use - and I must be clear that I *haven't checked this*!!!) > > > This is exactly what pip itself does right now for "pip install .", so > clearly it is viable. > >> until >> >> something like Nathaniel's source distribution proposal, or a >> >> full-blown sdist-2.0 spec, is available. We'd need to support whatever >> >> stopgap proposal we recommend for backward compatibility in those new >> >> proposals, but that's a necessary cost of not wanting to delay the >> >> current PEP on those other ones. >> > >> > One of the reasons I went ahead and created the specifications page at >> > https://packaging.python.org/en/latest/specifications/ was to let us >> > tweak interoperability requirements as needed, without wasting >> > people's time with excessive PEP wrangling by requiring a separate PEP >> > for each interface affected by a proposal. 
>> > >> > In this case, the build system abstraction PEP should propose some >> > additional text for >> > >> > https://packaging.python.org/en/latest/specifications/#source-distribution-format >> > defining how to publish source archives containing a pypa.json file >> > and the setup.py shim. > > > The setup.py shim should be optional right? If a package author decides to > not care about older pip versions, then the shim isn't needed. Given how long it takes for new versions of pip to filter out through the ecosystem, the shim's going to be needed for quite a while. Since we have the power to make things "just work" even for folks on older pip versions that assume use of the setuptools/distutils CLI, it makes sense to nudge sdist creation tools in that direction. The real pay-off here is getting setup.py out of most source repos and replacing it with a declarative format - keeping it out of sdists is a non-goal from my perspective. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From cournape at gmail.com Thu Feb 11 09:02:41 2016 From: cournape at gmail.com (David Cournapeau) Date: Thu, 11 Feb 2016 14:02:41 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On Thu, Feb 11, 2016 at 7:43 AM, Ralf Gommers wrote: > > > On Wed, Feb 10, 2016 at 3:30 PM, David Cournapeau > wrote: > >> >> >> >> On Wed, Feb 10, 2016 at 1:52 PM, Paul Moore wrote: >> > >>> We should probably also check with the flit people that the proposed >>> approach works for them. (Are there any other alternative build >>> systems apart from flit that exist at present?) >>> >> >> I am not working on it ATM, but bento was fairly complete and could >> interoperate w/ pip (a few years ago at least): >> https://cournape.github.io/Bento/ >> > > I plan to test with Bento (I'm still using it almost daily to work on > Scipy) when an implementation is proposed for pip. 
The interface in the PEP > is straightforward though, I don't see any fundamental reason why it > wouldn't work for Bento if it works for flit. > It should indeed work, I was just pointing at an alternative build system ;) I am a bit worried about making a PEP for interfacing before we have a few decent alternative implementations. Having the official interface is not necessary to actually interoperate, even if it is ugly. David > Ralf > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Feb 11 09:32:26 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 11 Feb 2016 14:32:26 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 11 February 2016 at 14:02, David Cournapeau wrote: >>> On Wed, Feb 10, 2016 at 1:52 PM, Paul Moore wrote: >>>> >>>> We should probably also check with the flit people that the proposed >>>> approach works for them. (Are there any other alternative build >>>> systems apart from flit that exist at present?) >>> >>> I am not working on it ATM, but bento was fairly complete and could >>> interoperate w/ pip (a few years ago at least): >>> https://cournape.github.io/Bento/ >> >> I plan to test with Bento (I'm still using it almost daily to work on >> Scipy) when an implementation is proposed for pip. The interface in the PEP >> is straightforward though, I don't see any fundamental reason why it >> wouldn't work for Bento if it works for flit. > > It should indeed work, I was just pointing at an alternative build system ;) Yes, I knew there was something other than flit, thanks for the reminder. > I am a bit worried about making a PEP for interfacing before we have a few > decent alternative implementations. 
Having the official interface is not > necessary to actually interoperate, even if it is ugly. Well, a lot of people have complained that setuptools is a problem. We've only really seen bento and now flit appear as alternatives, the only conclusion we've been able to draw is that the barrier to creating alternative build systems is the need to emulate setuptools. This PEP (hopefully!) removes that barrier, but I agree we need some validation that people who want to create alternative build systems (or have done so) can work with the interface in the PEP. There is some value to the PEP even if it doesn't enable new build tools (we can fix the problem of install_requires triggering easy_install) but the key has to be the evolution of (one or more) replacements for setuptools. You suggest getting more alternative build systems before implementing the PEP. That would be nice, but how would we get that to happen? People have been wanting alternatives to setuptools for years, but no-one has delivered anything (as far as I know) except for you and the flit guys. So my preference is to implement the PEP (and remove the "behave like setuptools" pain point), and then wait to see what level of adoption flit/bento achieve with the simpler interface and automated use of the declared build tool. At that stage, all of flit, bento and setuptools, as well as any newly developed tools, will be competing on an equal footing, and either one will emerge as the victor, or people will be free to choose their preferred tool without concern about whether it is going to work with pip. I'm not sure what we gain by waiting (incremental improvements to the spec can me made over time - as long as the basic form of the PEP is acceptable to *current* tool developers, I think that's sufficient). 
Paul From cournape at gmail.com Thu Feb 11 10:01:56 2016 From: cournape at gmail.com (David Cournapeau) Date: Thu, 11 Feb 2016 15:01:56 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On Thu, Feb 11, 2016 at 2:32 PM, Paul Moore wrote: > On 11 February 2016 at 14:02, David Cournapeau wrote: > >>> On Wed, Feb 10, 2016 at 1:52 PM, Paul Moore > wrote: > >>>> > >>>> We should probably also check with the flit people that the proposed > >>>> approach works for them. (Are there any other alternative build > >>>> systems apart from flit that exist at present?) > >>> > >>> I am not working on it ATM, but bento was fairly complete and could > >>> interoperate w/ pip (a few years ago at least): > >>> https://cournape.github.io/Bento/ > >> > >> I plan to test with Bento (I'm still using it almost daily to work on > >> Scipy) when an implementation is proposed for pip. The interface in the > PEP > >> is straightforward though, I don't see any fundamental reason why it > >> wouldn't work for Bento if it works for flit. > > > > It should indeed work, I was just pointing at an alternative build > system ;) > > Yes, I knew there was something other than flit, thanks for the reminder. > > > I am a bit worried about making a PEP for interfacing before we have a > few > > decent alternative implementations. Having the official interface is not > > necessary to actually interoperate, even if it is ugly. > > Well, a lot of people have complained that setuptools is a problem. > You won't hear me defending setuptools/distutils, but the main pain points of setuptools are not related to interoperability with `setup.py`. > We've only really seen bento and now flit appear as alternatives, the > only conclusion we've been able to draw is that the barrier to > creating alternative build systems is the need to emulate setuptools. > This PEP (hopefully!) 
removes that barrier, but I agree we need some > validation that people who want to create alternative build systems > (or have done so) can work with the interface in the PEP. > If this is indeed the main argument for the PEP, then it is IMO misguided. Making new buildsystems for python is hard work (once you go beyond the trivial packages), but interoperability w/ setup.py is not difficult. There is some value to the PEP even if it doesn't enable new build > tools (we can fix the problem of install_requires triggering > easy_install) but the key has to be the evolution of (one or more) > replacements for setuptools. > > You suggest getting more alternative build systems before implementing > the PEP. That would be nice, but how would we get that to happen? > People have been wanting alternatives to setuptools for years, but > no-one has delivered anything (as far as I know) except for you and > the flit guys. So my preference is to implement the PEP (and remove > the "behave like setuptools" pain point), and then wait to see what > level of adoption flit/bento achieve with the simpler interface and > automated use of the declared build tool. At that stage, all of flit, > bento and setuptools, as well as any newly developed tools, will be > competing on an equal footing, and either one will emerge as the > victor, or people will be free to choose their preferred tool without > concern about whether it is going to work with pip. > My main worry is about designing an interface before seeing multiple implementations. This often causes trouble. If we think the issue is allowing people to start working on the more "interesting" parts while keeping compat w/ pip, then I would suggest working on an example buildsystem with interoperability that people could steal for their own. I played w/ that idea there: https://github.com/cournape/toydist. The main point of that project was to progressively bootstrap itself by re-implementing the basic features required by pip. E.g.
right now, only `python setup.py egg_info` is "distutils clean", but then you just need to implement develop to get `pip install -e .` working. To get `pip install .` to work, you only need to add install. Etc. David > I'm not sure what we gain by waiting (incremental improvements to the > spec can be made over time - as long as the basic form of the PEP is > acceptable to *current* tool developers, I think that's sufficient). > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu Feb 11 10:19:08 2016 From: barry at python.org (Barry Warsaw) Date: Thu, 11 Feb 2016 10:19:08 -0500 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <20160210171202.2508a6cf@anarchist.wooz.org> Message-ID: <20160211101908.786fba28@subdivisions.wooz.org> On Feb 11, 2016, at 01:58 PM, Nick Coghlan wrote: >Hmm, I got the py2dsc reference from https://wiki.debian.org/Python/Packaging >but the newer https://wiki.debian.org/Python/LibraryStyleGuide doesn't appear >to mention any particular way of generating the initial packaging skeleton >from the upstream project. I'm not sure what the state of py2dsc is, and I personally don't use it. Mostly I cargo-cult other packages I know to be good ;). I do think other people still use py2dsc and python-stdeb though, and if Piotr is maintaining it, it'll be good (he's also the maintainer of pybuild and dh_python{2,3}, the recommended and most popular build tools for Python packages in Debian). And while initial packaging is important, it's a comparatively rare event in contrast to ongoing maintenance (e.g. updating to new upstreams). py2dsc won't help you there, but there are a stack of tools that do, some of which are tied to various team workflows. E.g.
the LibraryStyleGuide you mention (and the current git-based workflow[1]) are standards for the Debian Python Modules Team (DPMT) which maintains the majority, but definitely not all, Python packages in the archive. Lots (even pure-Python ones) are maintained elsewhere such as the OpenStack team, or just by individual maintainers using whatever workflows they want. Some maintainers want to do new upstream releases from (signed?) git tags but the consensus for DPMT, and I think most Python package maintainers in Debian, is to use a tarball-based workflow. >Anyway, the core point is wanting to ensure we can automate not only >"direct to binary" installation with Python specific tools, but also >the "convert to alternate source archive format and build from there" >workflows needed by redistributor ecosystems like Linux distros, >conda, Canopy, PyPM, Nix, etc. Cool. Note that not even all Debian-based distros are equal here. For example, in Debian, especially for architecture independent (i.e. pure-Python) packages, the source package to binary package step happens on the maintainer's local system, while in Ubuntu we upload the source package and let the centrally maintained build daemons produce the resulting binary packages. Anyway... -Barry [1] https://wiki.debian.org/Python/GitPackaging -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From mal at egenix.com Thu Feb 11 11:08:10 2016 From: mal at egenix.com (M.-A. 
Lemburg) Date: Thu, 11 Feb 2016 17:08:10 +0100 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> Message-ID: <56BCB1EA.7050208@egenix.com> On 10.02.2016 19:46, Robert Collins wrote: > On 11 February 2016 at 02:36, M.-A. Lemburg wrote: > >>> Currently what pip does is to >>> invoke >>> >>> $ python setup.py egg_info --egg-base $TEMPDIR >>> >>> to get the metadata. It is not possible to get the metadata without >>> executing the setup.py which is problematic for many applications. >>> Providing a static pypa.json file is much better: tools can read a >>> static file to get the metadata. >> >> Depends on which kind of meta data you're after. sdist packages >> do include the static PKG-INFO file which has the version 1.0 >> meta data. This doesn't include dependencies or namespace >> details, but it does have important data such as version, >> package name, description, etc. > > For pip to use it, it needs to include - reliably - version, name, and > dependencies; for it to be in any way better, we also need > setup-requires or a functional equivalent. > > Today, we can't rely on the PKG-INFO being complete, so we assume they > are all wrong and start over. One of the things we'll get by being > strict in subsequent iterations is the ability to rely on things. Then why not fix distutils' sdist command to add the needed information to PKG-INFO and rely on it ? Or perhaps add a new distutils command which outputs the needed information as JSON file and fix the sdist command to call this by default ? 
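Reading metadata from a static PKG-INFO rather than by executing setup.py is indeed straightforward, since metadata 1.0 is just an RFC 822-style header block — a minimal sketch (the field values below are invented for illustration):

```python
# PKG-INFO (metadata version 1.0) is an RFC 822-style header block, so it
# can be read without executing any packaging code. The values here are
# invented for illustration, not taken from a real sdist.
from email.parser import Parser

PKG_INFO = """\
Metadata-Version: 1.0
Name: example-pkg
Version: 1.2.3
Summary: An example distribution
"""

meta = Parser().parsestr(PKG_INFO)
print(meta["Name"], meta["Version"])  # example-pkg 1.2.3
```

As the thread notes, the catch is not reading the file but trusting it: dependency fields are absent from metadata 1.0, and even where present in later versions they may not match what executing setup.py would actually have produced.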
There are many ways to address such issues, but defining a new standard for every issue we have instead of fixing the existing distutils implementation is not the best way to approach this. >> In the end, you'll just be defining a different standard >> to express the same thing in different ways. >> >> The setup.py interface was never designed with integration in mind >> (only with the idea to provide an extensible interface; and I'm not >> going get into the merits of setuptools additions such as >> --single-version-externally-managed :-)) but it's still >> quite usable for the intended purpose. > > However we *are defining an integration point*, which is yet another > reason not to use the setup.py interface. https://xkcd.com/927/ :-) setup.py is the current standard and even though it's not necessarily nice, it works well and it does support adding different build systems (among many other things). mxSetup.py, for example, includes a build system for C libraries completely outside the distutils system, based on the standard Unix configure/make dance. It simply hooks into distutils, takes a few parameters and goes off to build things, feeding the results back into the distutils machinery. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 11 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-01-19: Released eGenix pyOpenSSL 0.13.13 ... http://egenix.com/go86 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From donald at stufft.io Thu Feb 11 11:48:18 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 11 Feb 2016 11:48:18 -0500 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BCB1EA.7050208@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> <56BCB1EA.7050208@egenix.com> Message-ID: > On Feb 11, 2016, at 11:08 AM, M.-A. Lemburg wrote: > > Then why not fix distutils' sdist command to add the needed > information to PKG-INFO and rely on it ? > > Or perhaps add a new distutils command which outputs the needed > information as JSON file and fix the sdist command to call this > by default ? > > There are many ways to address such issues, but defining a new > standard for every issue we have instead of fixing the existing > distutils implementation is not the best way to approach this. The very nature of distutils (later inherited by setuptools) is the problem to be honest. The reason we're adding new standards and moving away from these systems is that fixing them is essentially fundamentally altering them. For instance, adding some new information to PKG-INFO or turning it into a json file doesn't address the fundamental problem with why we can't trust the metadata. The reason we can't trust the metadata is because setup.py means that the metadata can (and does!) change based on what system you're executing the setup.py on. Here's a common pattern: import sys from setuptools import setup install_requires = [] if sys.version_info[:2] < (2,7): install_requires.append("argparse") setup(..., install_requires=install_requires, ...) 
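For contrast with the pattern above, the same conditional dependency can be written declaratively as an environment marker (the `; python_version < "2.7"` syntax later standardized in PEP 508), which an installer can evaluate without running any project code. A deliberately simplified evaluation sketch — the toy parser below handles only this one comparison and is not a real marker implementation:

```python
# Hedged sketch: the conditional dependency expressed declaratively.
# An installer evaluates the marker itself instead of executing setup.py.
# The evaluator is intentionally minimal and only supports the single
# form used here: python_version < "X.Y".
import sys

requirement = 'argparse; python_version < "2.7"'

def needed(req):
    name, _, marker = (part.strip() for part in req.partition(";"))
    if not marker:
        return name, True
    # illustrative only -- a real implementation parses the full grammar
    assert marker.startswith("python_version <")
    bound = tuple(int(x) for x in marker.split('"')[1].split("."))
    return name, sys.version_info[:2] < bound

print(needed(requirement))  # False on any interpreter >= 2.7
```

The point is that the marker moves the branch out of arbitrary code and into data that every tool evaluates the same way on the target interpreter.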
Any static metadata that is generated by that setup.py is going to change based on what version of Python you're executing it under. This isn't something you can just sort of replace, the setup.py *is* the "source of truth" for this information and as long as it is, we can't trust a byproduct of executing that file. In addition, the idea that a singular build system is really the best fit for every situation is, I think, fairly naive. Even today we have multiple build systems (such as numpy.distutils) even though utilizing them is actually fairly painful. This speaks to me that the need is fairly strong since people are willing to put up with that pain in order to swap out distutils/setuptools for something else. As far as whether setup.py or something else should be the integration point I don't think that either choice would be a bad one. However I prefer using something else for a few reasons: * The setup.py interface is completely entangled with the implementation of distutils and setuptools. We can't change anything about it because of it being baked into the Python standard library. * A script that is executed as part of the packaging of the project is notoriously hard to test. The best layout if we make setup.py the integration point is that the setup.py will be a small shim that will call into some other bit of code to do its work. At that point though, why bother with the shim instead of just calling that code directly? * Having the script be part of the project encourages small, project specific one off hacks. These hacks have a tendency to be fragile and they regularly break down. Having the build tool be packaged externally and pulled in tends to encourage reusable code. That's on top of the *other* problem, that we can't fundamentally change distutils in anything but Python 3.6 at this point.
That's not a terribly great place to be in though, because packaging has network effects and nobody is going to drop support for 2.7 (209,453,609 downloads from PyPI since Jan 14) just to make packaging their library slightly nicer. Even 2.6 (154,27,345) has more of the marketshare of what is downloading from PyPI than any version of Python 3.x. Moving things out of the standard library is how we've been able to progress packaging as quickly and as universally as we have been in the past few years. I am very opposed to any plan of improvement that involves waiting around on 3.6 to be the minimum version of Python. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu Feb 11 11:50:00 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 11 Feb 2016 16:50:00 +0000 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On 11 February 2016 at 15:01, David Cournapeau wrote: >> We've only really seen bento and now flit appear as alternatives, the >> only conclusion we've been able to draw is that the barrier to >> creating alternative build systems is the need to emulate setuptools. >> This PEP (hopefully!) removes that barrier, but I agree we need some >> validation that people who want to create alternative build systems >> (or have done so) can work with the interface in the PEP. > > If this is indeed the main argument for the PEP, that it is IMO misguided. 
> Making new buildsystems for python is hard work (once you go beyond the > trivial packages), but interoperability w/ setup.py is not difficult I'll let the PEP authors comment on that one - what I said above is really only my interpretation and I could easily be wrong. > My main worry is about designing an interface before seeing multiple > implementations. This often causes trouble. If we think the issue is > allowing people to start working on the more "interesting" parts while > keeping compat w/ pip, then I would suggest working on an example > buildsystem with interoperability that people could steal for their own. I understand your point, but I'm not clear what's stopping people doing this right now? For all the complaints about how pip is tightly coupled to setuptools, very few people seem to be providing alternatives. The pip developers in particular aren't looking to develop a build system, so any work in that area will have to come from the community. We hear the message that people wish they could use other build systems - maybe that's actually just people saying "I wish someone else would write a new build system" rather than anyone actually wanting to write one themselves? > I played w/ that idea there: https://github.com/cournape/toydist. The main > point of that project was to progressively bootstrap itself by > re-implementing the basic features required by pip. E.g. right now, only > `python setup.py egg_info` is "distutils clean", but then you just need to > implement develop to get `pip install -e .` working. To get `pip install .` > to work, you only need to add install. Etc. So someone could integrate flit with pip using this approach, is that what you're saying? Or someone could write a new build system and use that code as a basis for the "pip integration layer"? If so, then yes, I'd like to see someone do that and provide feedback to this PEP based on their experience. 
But I'm not sure I want the PEP to stall (along with all work on decoupling pip from setuptools) until someone gets round to it. Paul From mal at egenix.com Thu Feb 11 13:00:58 2016 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 11 Feb 2016 19:00:58 +0100 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> <56BCB1EA.7050208@egenix.com> Message-ID: <56BCCC5A.90403@egenix.com> On 11.02.2016 17:48, Donald Stufft wrote: > >> On Feb 11, 2016, at 11:08 AM, M.-A. Lemburg wrote: >> >> Then why not fix distutils' sdist command to add the needed >> information to PKG-INFO and rely on it ? >> >> Or perhaps add a new distutils command which outputs the needed >> information as JSON file and fix the sdist command to call this >> by default ? >> >> There are many ways to address such issues, but defining a new >> standard for every issue we have instead of fixing the existing >> distutils implementation is not the best way to approach this. > > > The very nature of distutils (later inherited by setuptools) is the problem to > be honest. The reason we're adding new standards and moving away from these > systems is that fixing them is essentially fundamentally altering them. Of course. We're doing that constantly in Python, so why not in distutils too ? > For instance, adding some new information to PKG-INFO or turning it into a json > file doesn't address the fundamental problem with why we can't trust the > metadata. pip is running on the target platform, so why would that be an issue ? Right now it's using the egg_info command to generate the meta data, so it's well possible to add a better command which then outputs JSON for pip and other installers to use. 
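A command along those lines could be quite small. A hedged sketch follows — the command name `metadata_json` is invented here, it is not an existing distutils or setuptools command, and only a couple of core fields are shown (the hard part, per the rest of this thread, is dependencies):

```python
# Hypothetical distutils-style command that writes core metadata as JSON.
# "metadata_json" is an invented name for illustration; real metadata has
# many more fields than the two dumped here.
import json
from setuptools import Command, Distribution

class metadata_json(Command):
    description = "write the distribution's core metadata as JSON"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        meta = self.distribution.metadata
        with open("metadata.json", "w") as f:
            json.dump({"name": meta.get_name(),
                       "version": meta.get_version()}, f)

# Minimal demonstration without a full setup.py:
dist = Distribution({"name": "example-pkg", "version": "1.2.3"})
metadata_json(dist).run()
```

Wired in via `cmdclass={"metadata_json": metadata_json}`, an installer could run `python setup.py metadata_json` instead of scraping egg-info — though Donald's objection still applies: the output is only as trustworthy as the setup.py that produced it.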
> The reason we can't trust the metadata is because setup.py means that > the metadata can (and does!) change based on what system you're executing the > setup.py on. Here's a common pattern: > > > import sys > > from setuptools import setup > > install_requires = [] > > if sys.version_info[:2] < (2,7): > install_requires.append("argparse") > > setup(..., install_requires=install_requires, ...) > > > Any static metadata that is generated by that setup.py is going to change based > on what version of Python you're executing it under. This isn't something you > can just sort of replace, the setup.py *is* the "source of truth" for this > information and as long as it is, we can't trust a byproduct of executing that > file. Again, there's nothing stopping us from adding a new command which then allows defining meta data in a platform independent way. The reason for the above code is that it's convenient to write. If there were an interface to provide such requirements in a platform dependent way, which is then also understood by the setup() command, we could get people to use the new interface. > In addition, the idea that a singular build system is really the best fit for > every situation is I think, fairly nieve. Even today we have multiple build > systems (such as numpy.distutils) even though utilizing them is actually fairly > painful. This speaks to me that the need is fairly strong since people are > willing to put up with that pain in order to swap out distutils/setuptools for > something else. AFAIK, numpy.distutils is just a customized version of distutils, not a completely new system. We have customized distutils too, since it allows us to do things stock distutils doesn't support. That's a great freedom to have. distutils allows using different builds system already (and has ever since it became part of the stdlib). You don't have to use the stock distutils build_* command implementations. Each of those can be overridden or replaced. 
It's also possible to add completely new ones. Same for the binary distribution format bdist_* commands. The complete PEP could be implemented straight in distutils as new build command, or we could make things easier for package authors and simply provide dedicated build commands for the different build tools, so that authors only have to configure build system to use in the setup.cfg file. > As far as whether setup.py or something else should be the integration point > I don't think that either choice would be a bad one. However I prefer using > something else for a few reasons: > > * The setup.py interface is completely entangled with the implementation of > distutils and setuptools. We can't change anything about it because of it > being baked into the Python standard library. > > * A script that is executed as part of the packaging of the project is > notoriously hard to test. The best layout if we make setup.py the integration > point is that the setup.py will be a small shim that will call into some > other bit of code to do it's work. At that point though, why bother with the > shim instead of just calling that code directly? > > * Having the script be part of the project encourages small, project specific > one off hacks. These hacks have a tendency to be fragile and they regularly > break down. Having the build tool be packaged externally and pulled in tends > to encourage reusable code. > > That's on top of the *other* problem, that we can't fundamentally change > distutils in anything but Python 3.6 at this point. That's not a terribly > great place to be in though, because packaging has network effects and nobody > is going to drop support for 2.7 (209,453,609 downloads from PyPI since Jan 14) > just to make packaging their library slightly nicer. Even 2.6 (154,27,345) has > more of the marketshare of what is downloading from PyPI than any version of > Python 3.x. 
Moving things out of the standard library is how we've been able to > progress packaging as quickly and as universally as we have been in the past > few years. I am very opposed to any plan of improvement that involves waiting > around on 3.6 to be the minimum version of Python. But isn't that the reason why setuptools is kept on PyPI instead of being added to the stdlib ? With setuptools providing a way to support several different Python versions, we don't have the backwards compatibility issues you are describing. New commands and support can be added to new releases of setuptools and are then available for packages and installers to use and rely on. Yes, adding new features will have to be done with some care, but that's no different to what we're doing with Python itself as well. It's certainly possible. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 11 2016) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ 2016-01-19: Released eGenix pyOpenSSL 0.13.13 ... http://egenix.com/go86 ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From brett at python.org Thu Feb 11 13:23:30 2016 From: brett at python.org (Brett Cannon) Date: Thu, 11 Feb 2016 18:23:30 +0000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BCCC5A.90403@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> <56BCB1EA.7050208@egenix.com> <56BCCC5A.90403@egenix.com> Message-ID: On Thu, 11 Feb 2016 at 10:01 M.-A. Lemburg wrote: > On 11.02.2016 17:48, Donald Stufft wrote: > > > >> On Feb 11, 2016, at 11:08 AM, M.-A. Lemburg wrote: > >> > >> Then why not fix distutils' sdist command to add the needed > >> information to PKG-INFO and rely on it ? > >> > >> Or perhaps add a new distutils command which outputs the needed > >> information as JSON file and fix the sdist command to call this > >> by default ? > >> > >> There are many ways to address such issues, but defining a new > >> standard for every issue we have instead of fixing the existing > >> distutils implementation is not the best way to approach this. > > > > > > The very nature of distutils (later inherited by setuptools) is the > problem to > > be honest. The reason we're adding new standards and moving away from > these > > systems is that fixing them is essentially fundamentally altering them. > > Of course. We're doing that constantly in Python, so why not > in distutils too ? > IMO, I think we should work towards a goal where we strip distutils down to only the parts that are required to be provided by Python to make it easier to maintain. 
Distutils served its purpose, but now I think we should push what distutils does out to the community to allow for more rapid updates and easier maintenance and evolution which will better align with things like compiler release schedules and support alternative compilers and such more easily. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Thu Feb 11 14:07:24 2016 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 11 Feb 2016 13:07:24 -0600 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> <56BCB1EA.7050208@egenix.com> <56BCCC5A.90403@egenix.com> Message-ID: On Thu, Feb 11, 2016 at 12:23 PM, Brett Cannon wrote: > > > On Thu, 11 Feb 2016 at 10:01 M.-A. Lemburg wrote: > >> On 11.02.2016 17:48, Donald Stufft wrote: >> > >> >> On Feb 11, 2016, at 11:08 AM, M.-A. Lemburg wrote: >> >> >> >> Then why not fix distutils' sdist command to add the needed >> >> information to PKG-INFO and rely on it ? >> >> >> >> Or perhaps add a new distutils command which outputs the needed >> >> information as JSON file and fix the sdist command to call this >> >> by default ? >> > #PEP426JSONLD needs a new hashtag. - https://www.google.com/webhp?#q=pep426jsonld - https://github.com/pypa/interoperability-peps/issues/31 PEP 426: Define a JSON-LD context as part of the proposal. - [ ] egg-info > metadata.jsonld / pydist.jsonld - These would then need to be [re]built before the new jsonld metadata would be present e.g. for PyPi / Warehouse to read the package metadata JSONLD at /pkg/jsonld. 
- The CSVW.jsonld: https://www.w3.org/ns/csvw.jsonld - schema.org 2.2 release: https://github.com/schemaorg/schemaorg/tree/sdo-phobos/data/releases/2.2 - http://schema.org/SoftwareApplication - http://schema.org/Code - http://schema.org/Thing + schema:name (rdfs:label) + schema:url (rdf:Subject) + schema:description DOAP (Description of a Project) | Standard: https://github.com/edumbill/doap/blob/master/schema/doap.rdf | RDFS vocabulary (doap:Project, doap:GitBranch) * the debian RDF record for the python3 package as turtle: https://packages.qa.debian.org/p/python3-defaults.ttl * https://wiki.debian.org/RDF { ... https://github.com/pypa/interoperability-peps/issues/31 ... -> { ... } } > >> >> >> There are many ways to address such issues, but defining a new >> >> standard for every issue we have instead of fixing the existing >> >> distutils implementation is not the best way to approach this. >> > >> > >> > The very nature of distutils (later inherited by setuptools) is the >> problem to >> > be honest. The reason we're adding new standards and moving away from >> these >> > systems is that fixing them is essentially fundamentally altering them. >> >> Of course. We're doing that constantly in Python, so why not >> in distutils too ? >> > > IMO, I think we should work towards a goal where we strip distutils down > to only the parts that are required to be provided by Python to make it > easier to maintain. Distutils served its purpose, but now I think we should > push what distutils does out to the community to allow for more rapid > updates and easier maintenance and evolution which will better align with > things like compiler release schedules and support alternative compilers > and such more easily.
> Minimally, what do we need here: - Archive and/or compress the source code - Sometimes, ideally, for zipimport (to avoid all of these ENOENT in an strace) - Determine sys.path (``python -m site``) - Run named functions with edges and a topological sort - distutils.cmd.Command, setuptools.cmd.Command - python setup.py install' can be overloaded, IIUC Distutils/setuptools: - distutils.cmd.Command https://docs.python.org/2/distutils/apiref.html#creating-a-new-distutils-command - setuptools Command reference https://pythonhosted.org/setuptools/setuptools.html#command-reference - pbr http://docs.openstack.org/developer/pbr/ https://github.com/openstack-dev/pbr/blob/master/setup.py setuptools.setup(**pbr.util.cfg_to_args() Tool feature sets - https://pypaio.readthedocs.org/en/latest/roadmap/ - https://python-packaging-user-guide.readthedocs.org/en/latest/specifications/ - https://python-packaging-user-guide.readthedocs.org/en/latest/distributing/ - "Tool Recommendations" - https://python-packaging-user-guide.readthedocs.org/en/latest/current/ cookiecutter project templating - https://wrdrd.com/docs/tools/index#cookiecutter PEX w/ pants - https://wrdrd.com/docs/tools/index#pants - https://pantsbuild.github.io/python-readme.html > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Feb 11 14:07:35 2016 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 12 Feb 2016 08:07:35 +1300 Subject: [Distutils] deprecating pip install --target Message-ID: This is fairly broken - it doesn't handle platlib vs purelib (see pip PR 3450), doesn't handle data files, or any other layout. Donald says pip uses it to maintain the _vendor subtrees only, which doesn't seem like a particularly strong use case. 
Certainly the help description for it is misleading - since what we're actually doing is copying only a subset of what the package installed - so at a minimum we need to be much clearer about what it does. But, I think it would be better to deprecate it and remove it... so I'm pinging here to see if anyone can explain a sensible use case for it in the context of pip :) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Thu Feb 11 14:11:46 2016 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 12 Feb 2016 08:11:46 +1300 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <20160210171202.2508a6cf@anarchist.wooz.org> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <20160210171202.2508a6cf@anarchist.wooz.org> Message-ID: On 11 February 2016 at 11:12, Barry Warsaw wrote: > On Feb 10, 2016, at 10:08 AM, Paul Moore wrote: > >>But those people will then find that distributing their sources isn't >>something that flit covers, so they'll make up their own approach (if it were >>me, I'd probably just point people at the project's github account). >> >>Once people get set up with a workflow that goes like this (build >>wheels and point people to github for source) it'll be harder to >>encourage them later to switch *back* to a process of uploading >>sources to PyPI. >> >>And that I do think is bad - that we end up pushing people who would >>otherwise happily use PyPI for source and binary hosting, to end up >>with a solution where they host binaries only on PyPI and make the >>source available via another (non-standardised) means. > > That worries me a lot. Think of the downstream consumers who aren't end > users, e.g. Linux distro developers. 
Some distros have strict requirements on > the availability of the source, reproducibility of builds, and so on, along > with stacks of tooling that are built on downloading tarballs from PyPI. > > It's not impossible to migrate to something else, but it's impractical to > migrate to dozens of something elses. Right now, if we can count on PyPI > having the source in an easily consumable lowest common denominator format, > the friction of providing those packages to *our* end users, and updating them > in a timely manner, is often minimal. Changing that ecosystem upstream of us, > either deliberately or otherwise, will likely result in more out of date > packages in the distros. But again, as we already covered, none of the draft peps encourage sourceless uploads to PyPI: a big chunk of it is in fact largely about enabling source uploads without requiring implementing an undocumented changed-at-will interface. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Thu Feb 11 14:22:29 2016 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 12 Feb 2016 08:22:29 +1300 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <56BCB1EA.7050208@egenix.com> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> <56BCB1EA.7050208@egenix.com> Message-ID: I'm a little over this particular subthread of the topic - we did it to death late last year. So apologies in advance if I get a little terse. On 12 February 2016 at 05:08, M.-A. Lemburg wrote: > On 10.02.2016 19:46, Robert Collins wrote: >> On 11 February 2016 at 02:36, M.-A. Lemburg wrote: >> >>>> Currently what pip does is to >>>> invoke >>>> >>>> $ python setup.py egg_info --egg-base $TEMPDIR >>>> >>>> to get the metadata. 
It is not possible to get the metadata without >>>> executing the setup.py which is problematic for many applications. >>>> Providing a static pypa.json file is much better: tools can read a >>>> static file to get the metadata. >>> >>> Depends on which kind of meta data you're after. sdist packages >>> do include the static PKG-INFO file which has the version 1.0 >>> meta data. This doesn't include dependencies or namespace >>> details, but it does have important data such as version, >>> package name, description, etc. >> >> For pip to use it, it needs to include - reliably - version, name, and >> dependencies; for it to be in any way better, we also need >> setup-requires or a functional equivalent. >> >> Today, we can't rely on the PKG-INFO being complete, so we assume they >> are all wrong and start over. One of the things we'll get by being >> strict in subsequent iterations is the ability to rely on things. > > Then why not fix distutils' sdist command to add the needed > information to PKG-INFO and rely on it ? How can we tell it is reliable? There's some 100K's of sdists that aren't reliable on PyPI already. We don't want to break installing those. > Or perhaps add a new distutils command which outputs the needed > information as JSON file and fix the sdist command to call this > by default ? We already have a command which outputs the needed info (as egg info) - and my draft PEP has a similar one, using PEP427 wheel METADATA format. > There are many ways to address such issues, but defining a new > standard for every issue we have instead of fixing the existing > distutils implementation is not the best way to approach this. I think you need to make a much stronger case for this, given how consistent the disagreement in this forum with that position has been. I understand you consider it to be not-the-best way, but so far, no one else seems to agree.
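[Editorial note: the contrast being argued in this subthread — executing setup.py to harvest metadata versus reading a static file — can be sketched side by side. This is an illustrative sketch only: the `pypa.json` field names used below (`name`, `version`, `dependencies`) are assumptions, since the draft PEP's schema was still under discussion at this point.]

```python
import json
import os
import subprocess
import sys
import tempfile

def metadata_via_setup_py(source_dir):
    """Status quo: run the project's setup.py and harvest its egg-info.

    This is roughly the command pip issues today; note that it must
    execute arbitrary project code just to learn name/version/deps.
    Returns the directory the egg-info was written into.
    """
    egg_base = tempfile.mkdtemp()
    subprocess.check_call(
        [sys.executable, "setup.py", "egg_info", "--egg-base", egg_base],
        cwd=source_dir,
    )
    # ...the caller then parses the PKG-INFO / requires.txt written under
    # egg_base, which is exactly the unreliable part discussed above.
    return egg_base

def metadata_via_static_file(source_dir):
    """Proposed alternative: read a static file; nothing gets executed.

    The field names here are hypothetical -- they illustrate the idea,
    not the final pypa.json schema.
    """
    with open(os.path.join(source_dir, "pypa.json")) as f:
        meta = json.load(f)
    return meta["name"], meta["version"], meta.get("dependencies", [])
```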
Specifics that Donald raised with any 'fix distutils' approach: - how do we fix people running pip install on Python 2.7, 3.4, 3.5, for the next 5 years? - how do we address the needs of folk who are using bento or flit - and how do we address the needs of the *authors* of bento or flit? Plus the one I've already mentioned - how do we fix the bootstrap issue setup.py has ? >>> In the end, you'll just be defining a different standard >>> to express the same thing in different ways. >>> >>> The setup.py interface was never designed with integration in mind >>> (only with the idea to provide an extensible interface; and I'm not >>> going get into the merits of setuptools additions such as >>> --single-version-externally-managed :-)) but it's still >>> quite usable for the intended purpose. >> >> However we *are defining an integration point*, which is yet another >> reason not to use the setup.py interface. > > https://xkcd.com/927/ :-) > > setup.py is the current standard and even though it's not > necessarily nice, it works well and it does support adding > different build systems (among many other things). I don't think 'well' is a useful adjective here. setup.py has some very sharp limits as an interface: - bootstrap dependencies are broken in non-standard networking environments - bootstrap dependencies are very slow due to multiple builds - setup.py is very complex for both implementors and package authors Folk that want/need/like setup.py can keep using it! The draft we're discussing can obviously thunk back through to setup.py for setuptools projects, and no harm occurs to them. For other projects, they gain choice and the ecosystem can expand. > mxSetup.py, for example, includes a build system for C libraries > completely outside the distutils system, based on the standard > Unix configure/make dance. It simply hooks into distutils, takes > a few parameters and goes off to build things, feeding the > results back into the distutils machinery. 
That's great; and mxSetup.py can keep doing that, while adding compat for pypa.json very easily. Can I ask - why the push-back on a new, clean, crisp, separate interface here? All the drafts had this in common, perhaps we're just too close to it! What harm is it going to cause? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Thu Feb 11 14:43:05 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 11 Feb 2016 19:43:05 +0000 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: On 11 February 2016 at 19:07, Robert Collins wrote: > But, I think it would be better to deprecate it and remove it... so > I'm pinging here to see if anyone can explain a sensible use case for > it in the context of pip :) I have used it in the past to bundle libraries in an application I'm building using zipapp. So I'd write my __main__.py, create a "libs" subdirectory, and "pip install --target libs" any dependencies I have. Because it's a zipped application, I'm only including pure-Python libraries so as long as they don't have data files they are fine. That's basically the same use case as vendoring. It's not something I do a lot, but I'd like there to be at a minimum, a documented replacement approach (that's roughly as convenient - "install your dependencies somewhere and copy them" or "download a wheel and unzip it manually" aren't really what I have in mind). I do think that creating zipped applications *should* be a reasonable use case - I think it's a woefully under-used deployment option, and I'd hate to see more obstacles put in the way of people using it. Alternatively, it should be possible to *detect* the problem cases, so why not do that, and reject them? Effectively, reduce the scope of --target to pure-python wheels only.
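[Editorial note: the zipapp workflow described here can be sketched with the stdlib `zipapp` module (Python 3.5+). The vendoring step via `pip install --target` appears only as a comment since it needs network access, and the directory names and interpreter path are placeholders.]

```python
import zipapp

def build_app(app_dir, target_pyz):
    """Package an application directory into a single executable archive.

    ``app_dir`` must contain a ``__main__.py``; in the workflow described
    above, dependencies would already have been vendored into it, e.g.:

        pip install --target <app_dir> some-pure-python-dep

    Only pure-Python dependencies without data-file tricks work, since
    extension modules cannot be imported from inside a zip archive.
    """
    # create_archive prepends a shebang line (from ``interpreter``) to the
    # zip data and marks the result executable on POSIX systems.
    zipapp.create_archive(app_dir, target=target_pyz,
                          interpreter="/usr/bin/env python3")
    return target_pyz
```

The resulting archive can then be run directly, either as `python app.pyz` or as `./app.pyz` on POSIX.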
Paul From wes.turner at gmail.com Thu Feb 11 14:45:55 2016 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 11 Feb 2016 13:45:55 -0600 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> <56BCB1EA.7050208@egenix.com> Message-ID: On Feb 11, 2016 1:23 PM, "Robert Collins" wrote: > > I'm a little over this particular subthread of the topic - we did it > to death late last year. So apologies in advance if I get a little > terse. > > On 12 February 2016 at 05:08, M.-A. Lemburg wrote: > > On 10.02.2016 19:46, Robert Collins wrote: > >> On 11 February 2016 at 02:36, M.-A. Lemburg wrote: > >> > >>>> Currently what pip does is to > >>>> invoke > >>>> > >>>> $ python setup.py egg_info --egg-base $TEMPDIR > >>>> > >>>> to get the metadata. It is not possible to get the metadata without > >>>> executing the setup.py which is problematic for many applications. > >>>> Providing a static pypa.json file is much better: tools can read a > >>>> static file to get the metadata. > >>> > >>> Depends on which kind of meta data you're after. sdist packages > >>> do include the static PKG-INFO file which has the version 1.0 > >>> meta data. This doesn't include dependencies or namespace > >>> details, but it does have important data such as version, > >>> package name, description, etc. > >> > >> For pip to use it, it needs to include - reliably - version, name, and > >> dependencies; for it to be in any way better, we also need > >> setup-requires or a functional equivalent. > >> > >> Today, we can't rely on the PKG-INFO being complete, so we assume they > >> are all wrong and start over. 
One of the things we'll get by being > >> strict in subsequent iterations is the ability to rely on things. > > > > Then why not fix distutils' sdist command to add the needed > > information to PKG-INFO and rely on it ? > > How can we tell it is reliable? There's some 100K's of sdists that > aren't reliable on PyPI already. We don't want to break installing > those. > > > Or perhaps add a new distutils command which outputs the needed > > information as JSON file and fix the sdist command to call this > > by default ? > > We already have a command which outputs the needed info (as egg info) > - and my draft PEP has a similar one, using PEP427 wheel METADATA > format. with JSON-LD.org, warehouse can be built with RDFJS.org for the UI; and anything could index the catalog metadata (e.g. pip, as it does things) > > > There are many ways to address such issues, but defining a new > > standard for every issue we have instead of fixing the existing > > distutils implementation is not the best way to approach this. > > I think you need to make a much stronger case for this, given how > consistent the disagreement in this forum with that position has been. > I understand you consider it to be not-the-best way, but so far, noone > else seems to agree. > > Specifics that Donald raised with any 'fix distutils' approach: > - how do we fix people running pip install on Python 2.7, 3.4, 3.5, > for the next 5 years? > - how do we address the needs of folk who are using bento or flit > - and how do we address the needs of the *authors* of bento or flit? > > Plus the one I've already mentioned > > - how do we fix the bootstrap issue setup.py has ? https://bootstrap.pypa.io/get-pip.py > > >>> In the end, you'll just be defining a different standard > >>> to express the same thing in different ways. 
is this a tagged build of a source revision for a given platform (or manylinux) > >>> > >>> The setup.py interface was never designed with integration in mind > >>> (only with the idea to provide an extensible interface; and I'm not > >>> going get into the merits of setuptools additions such as > >>> --single-version-externally-managed :-)) but it's still > >>> quite usable for the intended purpose. > >> > >> However we *are defining an integration point*, which is yet another > >> reason not to use the setup.py interface. .register_callback and name.spaced event points with a context object would probably just about do it. > > > > https://xkcd.com/927/ :-) > > > > setup.py is the current standard and even though it's not > > necessarily nice, it works well and it does support adding > > different build systems (among many other things). > > I don't think 'well' is a useful adjective here. setup.py has some > very sharp limits as an interface: > - bootstrap dependencies are broken in non-standard networking environments > - bootstrap dependencies are very slow due to multiple builds > - setup.py is very complex for both implementors and package authors - a way to specify distro system package dependencies by pkgname and extraname (platform consts, logic, YAML-LD, URNs, URIs) - a way to specify the source URI - a way to specify the source URI of a given tagged version of a package ('git', 'https://bitbucket.org/pypa/.') doap:GitRepository - because I want to see what it does from source > > Folk that want/need/like setup.py can keep using it! The draft we're > discussing can obviously thunk back through to setup.py for setuptools > projects, and no harm occurs to them. For other projects, they gain > choice and the ecosystem can expand. > > > mxSetup.py, for example, includes a build system for C libraries > > completely outside the distutils system, based on the standard > > Unix configure/make dance. 
It simply hooks into distutils, takes > > a few parameters and goes off to build things, feeding the > > results back into the distutils machinery. > > Thats great; and mxSetup.py can keep doing that, while adding compat > for pypa.json very easily. > > Can I ask - why the push-back on a new, clean, crisp, separate > interface here? All the drafts had this in common, perhaps we're just > too close to it! > > What harm is it going to cause? > > -Rob > > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Feb 11 15:53:35 2016 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 12 Feb 2016 09:53:35 +1300 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <56BB0FBB.3080105@egenix.com> <56BB2B4B.1040806@egenix.com> <56BB3CE4.2040906@egenix.com> <56BCB1EA.7050208@egenix.com> Message-ID: On 12 February 2016 at 08:45, Wes Turner wrote: > > On Feb 11, 2016 1:23 PM, "Robert Collins" wrote: >> We already have a command which outputs the needed info (as egg info) >> - and my draft PEP has a similar one, using PEP427 wheel METADATA >> format. > > with JSON-LD.org, warehouse can be built with RDFJS.org for the UI; and > anything could index the catalog metadata (e.g. pip, as it does things) This is a non sequitur and entirely unrelated to the discussion with Marc-Andre about reusing setup.py as the interface vs a new interface. >> Plus the one I've already mentioned >> >> - how do we fix the bootstrap issue setup.py has ?
> > https://bootstrap.pypa.io/get-pip.py How does that address the issue? I think perhaps you are thinking of 'how do you install pip'. The thing I'm referring to is 'how do you install flit', or 'numpy', or other build time requirements. It must be automated, such that pip can do it. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From ralf.gommers at gmail.com Thu Feb 11 16:33:55 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 11 Feb 2016 22:33:55 +0100 Subject: [Distutils] PEP: Build system abstraction for pip/conda etc In-Reply-To: References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> Message-ID: On Thu, Feb 11, 2016 at 11:16 AM, Nick Coghlan wrote: > On 11 February 2016 at 17:50, Ralf Gommers wrote: > > On Wed, Feb 10, 2016 at 2:43 PM, Paul Moore wrote: > >> > >> On 10 February 2016 at 13:23, Nick Coghlan wrote: > >> > On 10 February 2016 at 20:53, Paul Moore wrote: > >> >> We don't have to solve the whole "sdist 2.0" issue right now. Simply > >> >> saying that in order to publish pypa.json-based source trees you need > >> >> to zip up the source directory, name the file "project-version.zip" > >> >> and upload to PyPI, would be sufficient as a short-term answer > >> >> (assuming that this *would* be a viable "source file" that pip could > >> >> use - and I must be clear that I *haven't checked this*!!!) > > > > > > This is exactly what pip itself does right now for "pip install .", so > > clearly it is viable. > > > >> until > >> >> something like Nathaniel's source distribution proposal, or a > >> >> full-blown sdist-2.0 spec, is available. We'd need to support > whatever > >> >> stopgap proposal we recommend for backward compatibility in those new > >> >> proposals, but that's a necessary cost of not wanting to delay the > >> >> current PEP on those other ones. 
> >> > One of the reasons I went ahead and created the specifications page at > >> > https://packaging.python.org/en/latest/specifications/ was to let us > >> > tweak interoperability requirements as needed, without wasting > >> > people's time with excessive PEP wrangling by requiring a separate PEP > >> > for each interface affected by a proposal. > >> > > >> > In this case, the build system abstraction PEP should propose some > >> > additional text for > >> > > https://packaging.python.org/en/latest/specifications/#source-distribution-format > >> > defining how to publish source archives containing a pypa.json file > >> > and the setup.py shim. > > > > > > The setup.py shim should be optional right? If a package author decides > to > > not care about older pip versions, then the shim isn't needed. > > Given how long it takes for new versions of pip to filter out through > the ecosystem, the shim's going to be needed for quite a while. Since > we have the power to make things "just work" even for folks on older > pip versions that assume use of the setuptools/distutils CLI, it makes > sense to nudge sdist creation tools in that direction. > > The real pay-off here is getting setup.py out of most source repos and > replacing it with a declarative format - keeping it out of sdists is a > non-goal from my perspective. > I don't feel too strongly about this, but: - there's also a usability argument for no setup.py in sdists (people will still unzip an sdist and run python setup.py install on it) - it makes implementing something like 'flit sdist' more complicated; without the shim it can be as simple as just zipping the non-hidden files in the source tree. - if flit decides not to implement sdist (good chance of that), then people *will* still need to add the shim to their own source repos to comply with this 'spec'. Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vinay_sajip at yahoo.co.uk Thu Feb 11 18:17:49 2016 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 11 Feb 2016 23:17:49 +0000 (UTC) Subject: [Distutils] deprecating pip install --target References: Message-ID: Robert Collins robertcollins.net> writes: > > This is fairly broken - it doesn't handle platlib vs purelib (see pip > PR 3450), doesn't handle data files, or any other layout. Donald says > pip uses it to maintain the _vendor subtrees only, which doesn't seem > like a particularly strong use case. > > Certainly the help description for it is misleading - since what we're > actually doing is copying only a subset of what the package installed > - so at a minimum we need to be much clearer about what it does. > > But, I think it would be better to deprecate it and remove it... so > I'm pinging here to see if anyone can explain a sensible use case for > it in the context of pip :) I use it in pretty much the same way as Paul mentioned - I wouldn't like it to go unless something equivalent is available. Updating the help / documentation for it to better reflect what it does would be uncontroversial for me, but I see no strong reason for deprecation and removal. As Paul suggests, it can get stricter about what it'll handle. Regards, Vinay Sajip From holger at merlinux.eu Fri Feb 12 06:13:36 2016 From: holger at merlinux.eu (holger krekel) Date: Fri, 12 Feb 2016 11:13:36 +0000 Subject: [Distutils] devpi-server/web-3.0: generalized mirroring, speed, new backends Message-ID: <20160212111336.GE15751@merlinux.eu> The 3.0 releases of devpi-server and devpi-web, the python packaging and work flow system for handling release files, documentation, testing and staging, bring several major improvements: - Due to popular demand we now support generalized mirroring, i.e. you can create mirror indexes which proxy and cache release files from other pypi servers. 
Even if the mirror goes down, pip-installing will continue to work with your devpi-server instance. Previously we only supported mirroring of pypi.python.org. Using it is simple: http://doc.devpi.net/3.0/userman/devpi_indices.html#mirror-index - For our enterprise clients we majorly worked on improving the speed of serving simple pages which is now several times faster with private indexes. We now also support multiple worker processes both on master and replica sites. http://doc.devpi.net/3.0/adminman/server.html#multiple-server-instances - For our enterprise clients we also introduced a new backend architecture which allows to store server state in sqlite or postgres (which is supported through a separately released plugin). The default remains to use the "sqlite" backend and store files on the filesystem. See http://doc.devpi.net/3.0/adminman/server.html#storage-backend-selection - we started a new "admin" manual for devpi-server which describes features relating to server configuration, replication and security aspects. It's a bit work-in-progress but should already be helpful. http://doc.devpi.net/3.0/adminman/ - A few option names changed and we also released devpi-client-2.5 where we took great care to keep it forward and backward compatible so it should run against devpi-server-2.1 and upwards all the way to 3.0. - The "3.0" major release number increase means that you will need to run through an export/import cycle to upgrade your devpi-2.X installation. For more details, see the changelog and the referenced documentation with the main entry point here: http://doc.devpi.net Many thanks to my partner Florian Schulze and to the several companies who funded parts of the work on 3.0. We are especially grateful for their support to not only cover their own direct issues but also support community driven demands. 
I'd also like to express my gratitude to Rackspace and Jesse Noller who provide VMs for our open source work and which help a lot with the testing of our releases. We are open towards entering more support contracts to make sure you get what you need out of devpi, tox and pytest which together provide a mature tool chain for professional python development. And speaking of showing support, if you or your company is interested to donate to or attend the largest python testing sprint in history with a particular focus to pytest or tox, please see https://www.indiegogo.com/projects/python-testing-sprint-mid-2016/ have fun, holger krekel, http://merlinux.eu server-3.0.0 (2016-02-12) ------------------------- - dropped support for python2.6 - block most ascii symbols for user and index names except ``-.@_``. unicode characters are fine. - add ``--no-root-pypi`` option which prevents the creation of the ``root/pypi`` mirror instance on first startup. - added optional ``title`` and ``description`` options to users and indexes. - new indexes have no bases by default anymore. If you want to be able to install pypi packages, then you have to explicitly add ``root/pypi`` to the ``bases`` option of your index. - added optional ``custom_data`` option to users. - generalized mirroring to allow adding mirror indexes other than only PyPI - renamed ``pypi_whitelist`` to ``mirror_whitelist`` - speed up simple-page serving for private indexes. A private index with 200 release files should now be some 5 times faster. - internally use normalized project names everywhere, simplifying code and slightly speeding up some operations. - change {name} in route_urls to {project} to disambiguate. This is potentially incompatible for plugins which have registered on existing route_urls.
- use "project" variable naming consistently in APIs - drop calling of devpi_pypi_initial hook in favor of the new "devpi_mirror_initialnames(stage, projectnames)" hook which is called when a mirror is initialized. - introduce new "devpiserver_stage_created(stage)" hook which is called for each index which is created. - simplify and unify internal mirroring code some more with "normal" stage handling. - don't persist the list of mirrored project names anymore but rely on a per-process RAM cache and the fact that neither the UI nor pip/easy_install typically need the projectnames list, anyway. - introduce new "devpiserver_storage_backend" hook which allows plugins to provide custom storage backends. When there is more than one backend available, the "--storage" option becomes required for startup. - introduce new "--requests-only" option to start devpi-server in "worker" mode. It can be used both for master and replica sites. It starts devpi-server without event processing and replication threads and thus depends on respective "main" instances (those not using "--request-only") to perform event and hook processing. Each worker instance needs to share the filesystem with a main instance. Worker instances can not serve the "/+status" URL which must always be routed to the main instance. - add more info when importing data. Thanks Marc Abramowitz for the PR. web-3.0.0 (2016-02-12) ---------------------- - dropped support for python2.6 - index.pt, root.pt, style.css: added title and description to users and indexes. - root.pt, style.css: more compact styling of user/index overview using flexbox, resulting in three columns at most sizes - cleanup previously unpacked documentation to remove obsolete files. - store hash of doczip with the unpacked data to avoid unpacking if the data already exists. - project.pt, version.pt: renamed ``pypi_whitelist`` related things to ``mirror_whitelist``. 
- require and adapt to devpi-server-3.0.0 which always uses normalized project names internally and offers new hooks. devpi-web-3.0.0 is incompatible with devpi-server-2.X. - doc.pt, macros.pt, style.css, docview.js: use scrollbar of documentation iframe, so documentation that contains dynamically resizing elements works correctly. For that to work, the search form and navigation were moved into a wrapping div with class ``header``, so it can overlap the top of the iframe. 2.5.0 (2016-02-08) ------------------ - the ``user`` command now behaves slightly more like ``index`` to show current user settings and modify them. - fix issue309: print server versions with ``devpi --version`` if available. This is only supported on Python 3.x because of shortcomings in older argparse versions for Python 2.x. - fix issue310: with --set-cfg the ``index`` setting in the ``[search]`` section would be set multiple times. - fix getjson to work when no index but a server is selected - allow full urls for getjson - "devpi quickstart" is not documented anymore and will be removed in a later release.
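[Editorial note: as a usage sketch for the mirror indexes described in this announcement — once a devpi-server instance is serving an index, pip can be pointed at it through its standard configuration file. The host and port below are placeholders for a local default installation (3141 is devpi's default port); `root/pypi` is the default mirror index mentioned in the changelog above, and `/+simple/` is devpi's simple-index endpoint.]

```ini
# ~/.pip/pip.conf (pip.ini on Windows) -- values are placeholders
[global]
index-url = http://localhost:3141/root/pypi/+simple/
```

With this in place, `pip install` keeps working against the devpi cache even when the upstream mirror is unreachable, as described above.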
-- about me: http://holgerkrekel.net/about-me/ contracting: http://merlinux.eu From ncoghlan at gmail.com Fri Feb 12 07:09:05 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Feb 2016 22:09:05 +1000 Subject: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc In-Reply-To: <20160211101908.786fba28@subdivisions.wooz.org> References: <816A15ED-814C-4587-8A56-BED1C96365DD@stufft.io> <85vb5x763w.fsf_-_@benfinney.id.au> <56BB042A.1010100@egenix.com> <20160210171202.2508a6cf@anarchist.wooz.org> <20160211101908.786fba28@subdivisions.wooz.org> Message-ID: On 12 February 2016 at 01:19, Barry Warsaw wrote: > On Feb 11, 2016, at 01:58 PM, Nick Coghlan wrote: >>Anyway, the core point is wanting to ensure we can automate not only >>"direct to binary" installation with Python specific tools, but also >>the "convert to alternate source archive format and build from there" >>workflows needed by redistributor ecosystems like Linux distros, >>conda, Canopy, PyPM, Nix, etc. > > Cool. > > Note that not even all Debian-based distros are equal here. For example, in > Debian, especially for architecture independent (i.e. pure-Python) packages, > the source package to binary package step happens on the maintainer's local > system, while in Ubuntu we upload the source package and let the centrally > maintained build daemons produce the resulting binary packages. 
Yeah, Fedora is closer to the Ubuntu set up, with a centralised git backed build service (Koji), and tools for submitting build requests to that: * https://fedoraproject.org/wiki/Package_maintenance_guide * https://fedoraproject.org/wiki/Package_update_HOWTO In my ideal world, for previously reviewed packages, the path from PyPI release, through the Anitya release monitoring service and the Taskotron CI service, to updating Fedora Rawhide would be, if not fully automated, at least not much more work than clicking an "Approve" button after getting a green light from Taskotron and reading the upstream release notes. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From python at stevedower.id.au Fri Feb 12 09:54:34 2016 From: python at stevedower.id.au (Steve Dower) Date: Fri, 12 Feb 2016 06:54:34 -0800 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: I was also planning to use it in an upcoming project that has to "do its own" package management. The aim was to install different versions of packages in different directories and use sys.path modifications to resolve them at runtime (kind of like what setuptools did in the older days). An alternative would be great, though I can probably fake things somehow for my purposes. Cheers, Steve Top-posted from my Windows Phone -----Original Message----- From: "Vinay Sajip" Sent: ?2/?11/?2016 15:18 To: "Distutils-Sig at Python.Org" Subject: Re: [Distutils] deprecating pip install --target Robert Collins robertcollins.net> writes: > > This is fairly broken - it doesn't handle platlib vs purelib (see pip > PR 3450), doesn't handle data files, or any other layout. Donald says > pip uses it to maintain the _vendor subtrees only, which doesn't seem > like a particularly strong use case. 
> > Certainly the help description for it is misleading - since what we're > actually doing is copying only a subset of what the package installed > - so at a minimum we need to be much clearer about what it does. > > But, I think it would be better to deprecate it and remove it... so > I'm pinging here to see if anyone can explain a sensible use case for > it in the context of pip :) I use it in pretty much the same way as Paul mentioned - I wouldn't like it to go unless something equivalent is available. Updating the help / documentation for it to better reflect what it does would be uncontroversial for me, but I see no strong reason for deprecation and removal. As Paul suggests, it can get stricter about what it'll handle. Regards, Vinay Sajip _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Feb 12 10:12:49 2016 From: dholth at gmail.com (Daniel Holth) Date: Fri, 12 Feb 2016 15:12:49 +0000 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: My setup-requires wrapper, adding pre-setup.py dependency installation to setup.py, relies on this feature. It needs to install something in a directory that is only added to PYTHONPATH during the installation and does not interfere with the normal environment. On Fri, Feb 12, 2016 at 9:55 AM Steve Dower wrote: > I was also planning to use it in an upcoming project that has to "do its > own" package management. The aim was to install different versions of > packages in different directories and use sys.path modifications to resolve > them at runtime (kind of like what setuptools did in the older days). > > An alternative would be great, though I can probably fake things somehow > for my purposes. 
> > Cheers, > Steve > > Top-posted from my Windows Phone > ------------------------------ > From: Vinay Sajip > Sent: ?2/?11/?2016 15:18 > To: Distutils-Sig at Python.Org > Subject: Re: [Distutils] deprecating pip install --target > > Robert Collins robertcollins.net> writes: > > > > > This is fairly broken - it doesn't handle platlib vs purelib (see pip > > PR 3450), doesn't handle data files, or any other layout. Donald says > > pip uses it to maintain the _vendor subtrees only, which doesn't seem > > like a particularly strong use case. > > > > Certainly the help description for it is misleading - since what we're > > actually doing is copying only a subset of what the package installed > > - so at a minimum we need to be much clearer about what it does. > > > > But, I think it would be better to deprecate it and remove it... so > > I'm pinging here to see if anyone can explain a sensible use case for > > it in the context of pip :) > > I use it in pretty much the same way as Paul mentioned - I wouldn't like > it > to go unless something equivalent is available. Updating the help / > documentation for it to better reflect what it does would be > uncontroversial > for me, but I see no strong reason for deprecation and removal. As Paul > suggests, it can get stricter about what it'll handle. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jamiel.almeida at gmail.com Fri Feb 12 11:48:20 2016 From: jamiel.almeida at gmail.com (Jamiel Almeida) Date: Fri, 12 Feb 2016 08:48:20 -0800 Subject: [Distutils] devpi-server/web-3.0: generalized mirroring, speed, new backends In-Reply-To: <20160212111336.GE15751@merlinux.eu> References: <20160212111336.GE15751@merlinux.eu> Message-ID: Holger, great work! I wanted to send kudos your way and did so at the indiegogo campaign; take pictures and have fun! I wanted to ask if there was an xmlrpc api like pypi's, where we have package_releases along with other similar calls? I use those sometimes for helper scripts and dashboards and would like to have that in the devpi instance too, though I have not done my homework on checking the code yet, I have not found it in the docs. Anyway, awesome work, and keep rocking! On Fri, Feb 12, 2016 at 3:13 AM, holger krekel wrote: > > The 3.0 releases of devpi-server and devpi-web, the python packaging and > work flow system for handling release files, documentation, testing and staging, > bring several major improvements: > > - Due to popular demand we now support generalized mirroring, i.e. you can > create mirror indexes which proxy and cache release files from other pypi > servers. Even if the mirror goes down, pip-installing will continue to work > with your devpi-server instance. Previously we only supported mirroring > of pypi.python.org. Using it is simple: > http://doc.devpi.net/3.0/userman/devpi_indices.html#mirror-index > > - For our enterprise clients we majorly worked on improving the speed > of serving simple pages which is now several times faster with > private indexes. We now also support multiple worker processes > both on master and replica sites. > http://doc.devpi.net/3.0/adminman/server.html#multiple-server-instances > > - For our enterprise clients we also introduced a new backend > architecture which allows storing server state in sqlite or > postgres (which is supported through a separately released plugin). 
> The default remains to use the "sqlite" backend and store files > on the filesystem. See > http://doc.devpi.net/3.0/adminman/server.html#storage-backend-selection > > - we started a new "admin" manual for devpi-server which describes > features relating to server configuration, replication and security > aspects. It's a bit work-in-progress but should already be helpful. > http://doc.devpi.net/3.0/adminman/ > > - A few option names changed and we also released devpi-client-2.5 > where we took great care to keep it forward and backward compatible > so it should run against devpi-server-2.1 and upwards all the way > to 3.0. > > - The "3.0" major release number increase means that you will need to run > through an export/import cycle to upgrade your devpi-2.X installation. > > For more details, see the changelog and the referenced documentation > with the main entry point here: > > http://doc.devpi.net > > Many thanks to my partner Florian Schulze and to the several companies > who funded parts of the work on 3.0. We are especially grateful for > their support to not only cover their own direct issues but also support > community driven demands. I'd also like to express my gratitude to > Rackspace and Jesse Noller who provide VMs for our open source work and > which help a lot with the testing of our releases. > > We are open towards entering more support contracts to make sure you get > what you need out of devpi, tox and pytest which together provide a > mature tool chain for professional python development. 
And speaking of > showing support, if you or your company is interested in donating to or > attending the largest python testing sprint in history with a particular > focus on pytest or tox, please see > > https://www.indiegogo.com/projects/python-testing-sprint-mid-2016/ > > have fun, > > holger krekel, http://merlinux.eu > > > > server-3.0.0 (2016-02-12) > ------------------------- > > - dropped support for python2.6 > > - block most ascii symbols for user and index names except ``-.@_``. > unicode characters are fine. > > - add ``--no-root-pypi`` option which prevents the creation of the > ``root/pypi`` mirror instance on first startup. > > - added optional ``title`` and ``description`` options to users and indexes. > > - new indexes have no bases by default anymore. If you want to be able to > install pypi packages, then you have to explicitly add ``root/pypi`` to > the ``bases`` option of your index. > > - added optional ``custom_data`` option to users. > > - generalized mirroring to allow adding mirror indexes other than only PyPI > > - renamed ``pypi_whitelist`` to ``mirror_whitelist`` > > - speed up simple-page serving for private indexes. A private index > with 200 release files should now be some 5 times faster. > > - internally use normalized project names everywhere, simplifying > code and slightly speeding up some operations. > > - change {name} in route_urls to {project} to disambiguate. > This is potentially incompatible for plugins which have registered > on existing route_urls. > > - use "project" variable naming consistently in APIs > > - drop calling of devpi_pypi_initial hook in favor of > the new "devpi_mirror_initialnames(stage, projectnames)" hook > which is called when a mirror is initialized. > > - introduce new "devpiserver_stage_created(stage)" hook which is > called for each index which is created. > > - simplify and unify internal mirroring code some more > with "normal" stage handling. 
> > - don't persist the list of mirrored project names anymore > but rely on a per-process RAM cache and the fact > that neither the UI nor pip/easy_install typically > need the projectnames list, anyway. > > - introduce new "devpiserver_storage_backend" hook which allows plugins to > provide custom storage backends. When there is more than one backend > available, the "--storage" option becomes required for startup. > > - introduce new "--requests-only" option to start devpi-server in > "worker" mode. It can be used both for master and replica sites. It > starts devpi-server without event processing and replication threads and > thus depends on respective "main" instances (those not using > "--requests-only") to perform event and hook processing. Each > worker instance needs to share the filesystem with a main instance. > Worker instances cannot serve the "/+status" URL which must > always be routed to the main instance. > > - add more info when importing data. Thanks Marc Abramowitz for the PR. > > > web-3.0.0 (2016-02-12) > ---------------------- > > - dropped support for python2.6 > > - index.pt, root.pt, style.css: added title and description to > users and indexes. > > - root.pt, style.css: more compact styling of user/index overview using > flexbox, resulting in three columns at most sizes > > - cleanup previously unpacked documentation to remove obsolete files. > > - store hash of doczip with the unpacked data to avoid unpacking if the data > already exists. > > - project.pt, version.pt: renamed ``pypi_whitelist`` related things to > ``mirror_whitelist``. > > - require and adapt to devpi-server-3.0.0 which always uses > normalized project names internally and offers new hooks. > devpi-web-3.0.0 is incompatible with devpi-server-2.X. > > - doc.pt, macros.pt, style.css, docview.js: use scrollbar of documentation > iframe, so documentation that contains dynamically resizing elements works > correctly. 
For that to work, the search form and navigation were moved into a > wrapping div with class ``header``, so it can overlap the top of the iframe. > > > 2.5.0 (2016-02-08) > ------------------ > > - the ``user`` command now behaves slightly more like ``index`` to show > current user settings and modify them. > > - fix issue309: print server versions with ``devpi --version`` if available. > This is only supported on Python 3.x because of shortcomings in older > argparse versions for Python 2.x. > > - fix issue310: with --set-cfg the ``index`` setting in the ``[search]`` > section would be set multiple times. > > - fix getjson to work when no index but a server is selected > > - allow full urls for getjson > > - "devpi quickstart" is not documented anymore and will be removed > in a later release. > > -- > about me: http://holgerkrekel.net/about-me/ > contracting: http://merlinux.eu > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -- Jamiel Almeida From wes.turner at gmail.com Fri Feb 12 13:09:56 2016 From: wes.turner at gmail.com (Wes Turner) Date: Fri, 12 Feb 2016 12:09:56 -0600 Subject: [Distutils] devpi-server/web-3.0: generalized mirroring, speed, new backends In-Reply-To: <20160212111336.GE15751@merlinux.eu> References: <20160212111336.GE15751@merlinux.eu> Message-ID: On Feb 12, 2016 5:14 AM, "holger krekel" wrote: > > > The 3.0 releases of devpi-server and devpi-web, the python packaging and > work flow system for handling release files, documentation, testing and staging, > bring several major improvements: > > - Due to popular demand we now support generalized mirroring, i.e. you can > create mirror indexes which proxy and cache release files from other pypi > servers. Even if the mirror goes down, pip-installing will continue to work > with your devpi-server instance. Previously we only supported mirroring > of pypi.python.org. 
Using it is simple: > http://doc.devpi.net/3.0/userman/devpi_indices.html#mirror-index Thanks! > > [rest of the 3.0 announcement snipped] > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From robertc at robertcollins.net Mon Feb 15 22:10:43 2016 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 16 Feb 2016 16:10:43 +1300 Subject: [Distutils] abstract build system PEP update Message-ID: I've just pushed this up. - provide space for config of other things in pypa.json via convention - document PATH as a reliable variable - capture the sdist discussion - remove --root from develop: it isn't needed after discussion on IRC. 
-Rob diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst index a6e4712..56464f1 100644 --- a/build-system-abstraction.rst +++ b/build-system-abstraction.rst @@ -68,12 +68,15 @@ modelled on pip's existing use of the setuptools setup.py interface. pypa.json --------- -The file ``pypa.json`` acts as neutron configuration file for pip and other +The file ``pypa.json`` acts as neutral configuration file for pip and other tools that want to build source trees to consult for configuration. The absence of a ``pypa.json`` file in a Python source tree implies a setuptools or setuptools compatible build system. -The JSON has the following schema. Extra keys are ignored. +The JSON has the following schema. Extra keys are ignored, which permits the +use of ``pypa.json`` as a configuration file for other related tools. If doing +that the chosen keys must be namespaced - e.g. ``flit`` with keys under that +rather than (say) ``build`` or other generic keys. schema The version of the schema. This PEP defines version "1". Defaults to "1" @@ -130,6 +133,9 @@ Available environment variables These variables are set by the caller of the build system and will always be available. +PATH + The standard system path. + PYTHON As for format variables. @@ -176,7 +182,7 @@ wheel -d OUTPUT_DIR flit wheel -d /tmp/pip-build_1234 -develop [--prefix PREFIX] [--root ROOT] +develop [--prefix PREFIX] Command to do an in-place 'development' installation of the project. Stdout and stderr have no semantic meaning. @@ -185,8 +191,11 @@ develop [--prefix PREFIX] [--root ROOT] that doing so will cause use operations like ``pip install -e foo`` to fail. - The prefix option is used for defining an alternative prefix within the - installation root. + The prefix option is used for defining an alternative prefix for the + installation. 
While setuptools has ``--root`` and ``--user`` options, + they can be done equivalently using ``--prefix``, and pip or other + tools that accept ``--root`` or ``--user`` options should translate + appropriately. The root option is used to define an alternative root within which the command should operate. @@ -403,6 +412,13 @@ the setuptools shim is in use (with older pip versions), or an environment marker ready pip is in use. The setuptools shim can take care of exploiting the difference older pip versions require. +We discussed having an sdist verb. The main driver for this was to make sure +that build systems were able to produce sdists that pip can build - but this is +circular: the whole point of this PEP is to let pip consume such sdists +reliably and without requiring an implementation of setuptools. Further, while +most everyone agrees that encouraging sdists to be uploaded to PyPI, there +wasn't complete consensus on that. + References ========== -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Tue Feb 16 04:40:36 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Feb 2016 09:40:36 +0000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 16 February 2016 at 03:10, Robert Collins wrote: > -The file ``pypa.json`` acts as neutron configuration file for pip and other > +The file ``pypa.json`` acts as neutral configuration file for pip and other Aw, I was looking forward to controlling my nuclear power plant with pip :-( Oh, and "acts as a" rather than just "acts as". > +We discussed having an sdist verb. The main driver for this was to make sure > +that build systems were able to produce sdists that pip can build - but this is > +circular: the whole point of this PEP is to let pip consume such sdists > +reliably and without requiring an implementation of setuptools. 
Further, while > +most everyone agrees that encouraging sdists to be uploaded to PyPI, there s/most/almost/ "to be uploaded to PyPI"... what? The phrase isn't complete. Presumably "is a good thing". > +wasn't complete consensus on that. And I didn't think there was any dispute over this. There were people who didn't want to disallow binary-only projects, but that's hardly the same as not encouraging people who *are* making sources public to put them in the same place as the binaries. I thought the key point was that we'd agreed to Nick's suggestion that we add some comments to the existing specifications to note that you could bundle up a source tree with a pypa.json and get something sufficient for new pips to install, so this provided a sufficiently well-defined "source upload format" to work until discussions on a new source format came to fruition? Specifically, my expectation is that this PEP require that the specification changes proposed by Nick be implemented. Sure, it's an informational change to a document, but it's important that this PEP acknowledge that the action was part of the consensus. Paul From doko at ubuntu.com Tue Feb 16 06:05:09 2016 From: doko at ubuntu.com (Matthias Klose) Date: Tue, 16 Feb 2016 12:05:09 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> Message-ID: <56C30265.6020008@ubuntu.com> On 02.02.2016 02:35, Glyph Lefkowitz wrote: > >> On Feb 1, 2016, at 3:37 PM, Matthias Klose wrote: >> >> On 30.01.2016 00:29, Nathaniel Smith wrote: >>> Hi all, >>> >>> I think this is ready for pronouncement now -- thanks to everyone for >>> all their feedback over the last few weeks! >> >> I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. 
So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). > > Possibly, but it would be very helpful for such maintainers to limit their critique to "in what scenarios will this fail for users" and not have the whole peanut gallery chiming in with "well on _my_ platform we would have done it _this_ way". > > I respect what you've done for Debian and Ubuntu, Matthias, and I use the heck out of that work, but honestly this whole message just comes across as sour grapes that someone didn't pick a super-old Debian instead of a super-old Red Hat. I don't think it's promoting any progress. You may call this sour grapes, but in the light of people installing these wheels to replace/upgrade system installed eggs, it becomes an issue. It's fine to use such wheels in a virtual environment, however people tell users to use these wheels to replace system installed packages, distros will have a problem identifying issues. There is a substantial number of extensions built using C++; I didn't check how many of these are in c++0x/c++11 mode. Until GCC 5, the c++11 ABI wasn't stable, and upstream never promised forward compatibility, something that even distros have to care about (usually by rebuilding packages before a release). So if you want a lowest common denominator, then maybe limit or recommend the use of c++98 only. >> The proposal just takes some environment and declares that as a standard. So everybody wanting to supply these wheels basically has to use this environment. > > There's already been lots of discussion about how this environment is a lowest common denominator. Many other similar environments could _also_ be lowest common denominator. sure, but then please call it what it is. centos5 or somelinux1. > In the future, more specific and featureful distro tags sound like a good idea. 
But could we please stop making the default position on distutils-sig "this doesn't cater to my one specific environment in the most optimal possible way, so let's give up on progress entirely"? This is a good proposal that addresses environment portability and gives Python a substantially better build-artifact story than it currently has, in the environment most desperately needing one (server-side linux). Could it be better? Of course. It could be lots better. There are lots of use-cases for dynamically linked wheels and fancy new platform library features in newer linuxes. But that can all come later, and none of it needs to have an impact on this specific proposal, right now. I'm unsure how more specific and featureful distro tags will help, unless you start building more than one binary version of a wheel. From a distro point of view I only can discourage using such wheels outside a virtual environment, and I would like to see a possibility to easily identify such wheels, something like loading a binary kernel module is tainting the kernel. This way distros can point users to the provider of such wheels. This is not a "this doesn't cater to my one specific environment" position. Of course you probably get less push back from other distributions closer to the specific environment specified in the pep. But please acknowledge that there might be issues that you and me don't yet even see, and provide a possibility to identify such issues. At least Debian/Ubuntu took a long ride to avoid "accidental" interaction with local Python installations and local installations into the system installed Python. For me this PEP feels like a step back, promising too much (manylinux), not pointing out possible issues, and giving no help in identifying possible issues with these wheels. Matthias From p.f.moore at gmail.com Tue Feb 16 07:26:39 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Feb 2016 12:26:39 +0000 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56C30265.6020008@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: On 16 February 2016 at 11:05, Matthias Klose wrote: > You may call this sour grapes, but in the light of people installing > these wheels to replace/upgrade system installed eggs, it becomes an issue. > It's fine to use such wheels in a virtual environment, however people tell > users to use these wheels to replace system installed packages, distros will > have a problem identifying issues. OK, so are you not simply saying that people shouldn't be using (sudo) pip to install packages into the system environment instead of using the system packages? As a non-Linux user my opinion isn't that relevant, but I don't see an issue with a statement like this. I gather that people typically do this when the distro packages aren't available, or don't provide a sufficiently up to date version, but that's a separate issue. I don't know to what extent using "sudo pip install" to install packages into the system Python is considered "supported" by the pip developers. We're not going to get into a situation of trying to enforce distro policies, but there's equally an implicit "but your distro may not support this" around *any* use of "sudo pip", so it's not as if we're *encouraging* people to do it. IMO, manylinux wheels should be viewed as a solution for people using virtualenvs, for people with their own Python builds, or using pyenv, or similar. 
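The distinction drawn here (manylinux wheels being primarily for virtualenvs and self-managed Pythons) is something a tool could act on, since detecting whether it is running inside a virtual environment is straightforward. A rough sketch covering both the third-party ``virtualenv`` tool and the stdlib ``venv`` module:

```python
import sys

def in_virtualenv():
    # virtualenv (the third-party tool) sets sys.real_prefix;
    # the stdlib venv module (3.3+) instead makes sys.base_prefix
    # differ from sys.prefix.
    if hasattr(sys, "real_prefix"):
        return True
    return getattr(sys, "base_prefix", sys.prefix) != sys.prefix

print(in_virtualenv())
```

A tool wanting to warn before installing binary wheels system-wide could use a check like this to stay quiet inside virtualenvs.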
As far as interaction with the distro-packaged Python is concerned, "you should use the system package manager to manage the system Python" is the basic message - but if (for whatever reason) people can't or won't do that, then I'm OK with pip supporting people who choose to install such wheels into the system Python - if they have issues, then we'll likely fall back on "you should take it up with your distro", and we'll be perfectly OK with the distro's response being "get those manylinux wheels out of our managed environment". Paul From waynejwerner at gmail.com Tue Feb 16 09:14:14 2016 From: waynejwerner at gmail.com (Wayne Werner) Date: Tue, 16 Feb 2016 08:14:14 -0600 (CST) Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: On Tue, 16 Feb 2016, Paul Moore wrote: > On 16 February 2016 at 11:05, Matthias Klose wrote: >> You may call this sour grapes, but in the light of people installing >> these wheels to replace/upgrade system installed eggs, it becomes an issue. >> It's fine to use such wheels in a virtual environment, however people tell >> users to use these wheels to replace system installed packages, distros will >> have a problem identifying issues. > > OK, so are you not simply saying that people shouldn't be using (sudo) > pip to install packages into the system environment instead of using > the system packages? > > As a non-Linux user my opinion isn't that relevant, but I don't see an > issue with a statement like this. I gather that people typically do > this when the distro packages aren't available, or don't provide a > sufficiently up to date version, but that's a separate issue. I've learned that *usually* linux distro repos lag way behind in updating their Python packages, so unless I *can't* install the package via pip, that's what I do. 
Of course, to my knowledge I've never replaced a system installed version of anything. Though, considering I've been using Python3 since it was available and most distros use Python 2, that may not really be saying much :) -W From p.f.moore at gmail.com Tue Feb 16 09:20:53 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Feb 2016 14:20:53 +0000 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: On 16 February 2016 at 14:14, Wayne Werner wrote: > I've learned that *usually* linux distro repos lag way behind in updating > their Python packages, so unless I *can't* install the package via pip, > that's what I do. Yeah, and that's what I'd count as an issue between you and your distro. If they don't provide sufficiently up to date versions for you, and you choose to deal with that in whatever way you prefer, that's fine by me. I don't see why the Python community shouldn't provide a solution that you can use in such a situation, simply because it's not the solution your distro would prefer you to use. > Of course, to my knowledge I've never replaced a system installed version > of anything. Though, considering I've been using Python3 since it was available > and most distros use Python 2, that may not really be saying much :) I thought the distro "hands off" rules applied even to adding things to system-managed directories, not just to overwriting files? Anyway, I've already made more inflammatory comments than an outsider should, so I'll leave the debate to the Unix users at this point. Paul From cournape at gmail.com Tue Feb 16 09:43:12 2016 From: cournape at gmail.com (David Cournapeau) Date: Tue, 16 Feb 2016 14:43:12 +0000 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56C30265.6020008@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: On Tue, Feb 16, 2016 at 11:05 AM, Matthias Klose wrote: > On 02.02.2016 02:35, Glyph Lefkowitz wrote: > >> >> On Feb 1, 2016, at 3:37 PM, Matthias Klose wrote: >>> >>> On 30.01.2016 00:29, Nathaniel Smith wrote: >>> >>>> Hi all, >>>> >>>> I think this is ready for pronouncement now -- thanks to everyone for >>>> all their feedback over the last few weeks! >>>> >>> >>> I don't think so. I am biased because I'm the maintainer for Python in >>> Debian/Ubuntu. So I would like to have some feedback from maintainers of >>> Python in other Linux distributions (Nick, no, you're not one of these). >>> >> >> Possibly, but it would be very helpful for such maintainers to limit >> their critique to "in what scenarios will this fail for users" and not have >> the whole peanut gallery chiming in with "well on _my_ platform we would >> have done it _this_ way". >> >> I respect what you've done for Debian and Ubuntu, Matthias, and I use the >> heck out of that work, but honestly this whole message just comes across as >> sour grapes that someone didn't pick a super-old Debian instead of a >> super-old Red Hat. I don't think it's promoting any progress. >> > > You may call this sour grapes, but in the light of people installing > these wheels to replace/upgrade system installed eggs, it becomes an > issue. It's fine to use such wheels in a virtual environment, however > people tell users to use these wheels to replace system installed packages, > distros will have a problem identifying issues. > FWIW, I often point out when people put "sudo" and "pip" in the same sentence. What about adding some language around this to the PEP ? In the future, more specific and featureful distro tags sound like a good >> idea. 
But could we please stop making the default position on >> distutils-sig "this doesn't cater to my one specific environment in the >> most optimal possible way, so let's give up on progress entirely"? This is >> a good proposal that addresses environment portability and gives Python a >> substantially better build-artifact story than it currently has, in the >> environment most desperately needing one (server-side linux). Could it be >> better? Of course. It could be lots better. There are lots of use-cases >> for dynamically linked wheels and fancy new platform library features in >> newer linuxes. But that can all come later, and none of it needs to have >> an impact on this specific proposal, right now. >> > > I'm unsure how more specific and featureful distro tags will help, unless > you start building more than one binary version of a wheel. From a distro > point of view I only can discourage using such wheels outside a virtual > environment, and I would like to see a possibility to easily identify such > wheels, something like loading a binary kernel module is tainting the > kernel. This way distros can point users to the provider of such wheels. > This sounds like a good idea to me. Do you have a specific idea on how you would like to see the feature work ? David > > This is not a "this doesn't cater to my one specific environment" > position. Of course you probably get less push back from other > distributions closer to the specific environment specified in the pep. But > please acknowledge that there might be issues that you and me don't yet > even see, and provide a possibility to identify such issues. > > At least Debian/Ubuntu took a long ride to avoid "accidental" interaction > with local Python installations and local installations into the system > installed Python. For me this PEP feels like a step back, promising too > much (manylinux), not pointing out possible issues, and giving no help in > identifying possible issues with these wheels. 
> > Matthias > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doko at ubuntu.com Tue Feb 16 11:48:24 2016 From: doko at ubuntu.com (Matthias Klose) Date: Tue, 16 Feb 2016 17:48:24 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: <56C352D8.6020907@ubuntu.com> On 16.02.2016 15:20, Paul Moore wrote: > On 16 February 2016 at 14:14, Wayne Werner wrote: >> I've learned that *usually* linux distro repos lag way behind in updating >> their Python packages, so unless I *can't* install the package via pip, >> that's what I do. that's how distros work. They provide a stable set of packages which is known to work together. Updates are usually limited to bug fixes and security updates. Distros do provide updated packages using e.g. backports or e.g. software collections. Distros themselves rely on a stable Python, as a lot of system software already is written in Python. > Yeah, and that's what I'd count as an issue between you and your > distro. If they don't provide sufficiently up to date versions for > you, and you choose to deal with that in whatever way you prefer, > that's fine by me. It depends what you are interested in. A Java developer probably will never update python modules and just use what is available, while you would not update Java libraries. Substitute Java with whatever you are not that interested in. > I don't see why the Python community shouldn't provide a solution that > you can use in such a situation, simply because it's not the solution > your distro would prefer you to use.
Sure, however I gave examples what can go wrong, and best thing, it would be good to see these addressed, or at least given better diagnostics if something goes wrong. >> Of course, to my knowledge I've never replaced a system installed version >> of anything. Though, considering I've been using Python3 since it was available >> and most distros use Python 2, that may not really be saying much :) > > I thought the distro "hands off" rules applied even to adding things > to system-managed directories, not just to overwriting files? yes, you can do this, and usually each distro at least includes some locations to install such things, usually in /usr/local, some in /opt. For Debian/Ubuntu you can install additional modules into /usr/local/lib/pythonX.Y/dist-packages Yes, this *can* break system installed modules. But it allows you to distinguish what was installed by the system, and what by yourself. And it doesn't get in the way with a python configure with just ./configure. Matthias From doko at ubuntu.com Tue Feb 16 12:01:51 2016 From: doko at ubuntu.com (Matthias Klose) Date: Tue, 16 Feb 2016 18:01:51 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: <56C355FF.4020904@ubuntu.com> On 16.02.2016 15:43, David Cournapeau wrote: > On Tue, Feb 16, 2016 at 11:05 AM, Matthias Klose wrote: > >> On 02.02.2016 02:35, Glyph Lefkowitz wrote: >> >>> >>> On Feb 1, 2016, at 3:37 PM, Matthias Klose wrote: >>>> >>>> On 30.01.2016 00:29, Nathaniel Smith wrote: >>>> >>>>> Hi all, >>>>> >>>>> I think this is ready for pronouncement now -- thanks to everyone for >>>>> all their feedback over the last few weeks! >>>>> >>>> >>>> I don't think so. I am biased because I'm the maintainer for Python in >>>> Debian/Ubuntu. 
So I would like to have some feedback from maintainers of >>>> Python in other Linux distributions (Nick, no, you're not one of these). >>>> >>> >>> Possibly, but it would be very helpful for such maintainers to limit >>> their critique to "in what scenarios will this fail for users" and not have >>> the whole peanut gallery chiming in with "well on _my_ platform we would >>> have done it _this_ way". >>> >>> I respect what you've done for Debian and Ubuntu, Matthias, and I use the >>> heck out of that work, but honestly this whole message just comes across as >>> sour grapes that someone didn't pick a super-old Debian instead of a >>> super-old Red Hat. I don't think it's promoting any progress. >>> >> >> You may call this sour grapes, but in the light of people installing >> these wheels to replace/upgrade system installed eggs, it becomes an >> issue. It's fine to use such wheels in a virtual environment, however >> people tell users to use these wheels to replace system installed packages, >> distros will have a problem identifying issues. >> > > FWIW, I often point out when people put "sudo" and "pip" in the same > sentence. > > What about adding some language around this to the PEP ? that's one thing. But maybe pip itself could error out on such a situation, and maybe overriden with a non-default flag. I know we had such an issue where pip accidentally modified system installed files, maybe Barry still remembers the details ... > In the future, more specific and featureful distro tags sound like a good >>> idea. But could we please stop making the default position on >>> distutils-sig "this doesn't cater to my one specific environment in the >>> most optimal possible way, so let's give up on progress entirely"? This is >>> a good proposal that addresses environment portability and gives Python a >>> substantially better build-artifact story than it currently has, in the >>> environment most desperately needing one (server-side linux). Could it be >>> better? 
Of course. It could be lots better. There are lots of use-cases >>> for dynamically linked wheels and fancy new platform library features in >>> newer linuxes. But that can all come later, and none of it needs to have >>> an impact on this specific proposal, right now. >>> >> >> I'm unsure how more specific and featureful distro tags will help, unless >> you start building more than one binary version of a wheel. From a distro >> point of view I only can discourage using such wheels outside a virtual >> environment, and I would like to see a possibility to easily identify such >> wheels, something like loading a binary kernel module is tainting the >> kernel. This way distros can point users to the provider of such wheels. >> > > This sounds like a good idea to me. Do you have a specific idea on how you > would like to see the feature work ? Not really yet. lsmod shows you this information on demand for the running kernel. So maybe you want to add such vendor information into the egg, or in the compiled extension, and in the interpreter? Then give a warning when the interpreter is called with a new option and the vendor informations don't match, or even error out. Of course interested distros would have to backport such a patch to older releases, because that's the target market for the pep. Matthias From barry at python.org Tue Feb 16 13:33:13 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 16 Feb 2016 13:33:13 -0500 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56C355FF.4020904@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <56C355FF.4020904@ubuntu.com> Message-ID: <20160216133313.1c172810@anarchist.wooz.org> On Feb 16, 2016, at 06:01 PM, Matthias Klose wrote: >I know we had such an issue where pip accidentally modified system installed >files, maybe Barry still remembers the details ... 
I don't, but I hope that should not be a problem these days, with modern pip on modern Debian/Ubuntu systems. We prevent pip from installing in system locations in various ways, including making --user the default. I don't think we're doing anything that isn't at least within the pip roadmap. E.g. --user will probably someday be the default in stock pip, but because of reasons it hasn't happened yet. IIRC, Donald was also talking about better ways to detect system owned files, so pip would also refuse to overwrite them. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From chris at simplistix.co.uk Tue Feb 16 13:37:49 2016 From: chris at simplistix.co.uk (Chris Withers) Date: Tue, 16 Feb 2016 18:37:49 +0000 Subject: [Distutils] multiple backports of ipaddress and a world of pain Message-ID: <56C36C7D.1000307@simplistix.co.uk> Hi All, (Apologies for copying in the maintainers of the two backports and django-netfields directly, I'm not sure you're on this distutils list...) This is painful and horrible, and I wish pip would prevent modules/packages with the same name being installed by different distributions at the same time, but even if it did, that would just force something to happen rather than this: So, RHEL7, for worse or worse, ships with Python 2.7.5. That means to keep pip happy, you need to do these dances in all the virtualenvs you create: http://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning http://urllib3.readthedocs.org/en/latest/security.html#pyopenssl One of those extra packages drags in this backport: https://pypi.python.org/pypi/ipaddress Yay! Now we have a happy pip talking to both PyPI and our internal DevPI server! Right, so in a Django project I need to use https://pypi.python.org/pypi/django-netfields. 
This, however, chooses this backport instead: https://pypi.python.org/pypi/py2-ipaddress So, now we have two packages installing ipaddress.py, except they're two very different versions and make different assumptions about what to do with Python 2 strings. What should happen here? (other than me crying a lot...) Chris From njs at pobox.com Tue Feb 16 15:45:41 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 16 Feb 2016 12:45:41 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56C30265.6020008@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: On Feb 16, 2016 3:05 AM, "Matthias Klose" wrote: [...] > > You may call this sour grapes, but in the light of people installing > these wheels to replace/upgrade system installed eggs, it becomes an issue. It's fine to use such wheels in a virtual environment, however people tell users to use these wheels to replace system installed packages, distros will have a problem identifying issues. Like Paul, I don't understand how wheels change any of this -- everything in the above paragraph is just as true if you s/wheel/sdist/, isn't it? "sudo pip install" was a bad idea before and is a bad idea now, for sure. > There is a substantial amount of extensions built using C++; I didn't check how many of these in c++0x/c++11 mode. Until GCC 5, the c++11 ABI wasn't stable, and upstream never promised forward compatibility, something that even distros have to care about (usually by rebuilding packages before a release). So if you want a lowest common denominator, then maybe limit or recommend the use of c++98 only. I'm not sure what the situation with the C++11 abi is in detail (do you mean backwards compatibility?), but in any case your wish is preemptively granted :-).
The libstdc++ in centos5 doesn't contain c++11 support, so in manylinux1 wheels you are allowed to use C++11 if and only if you can find some way to provide your own self-contained runtime support that doesn't rely on the system libstdc++. (The devtoolset compilers are handy in that they are configured to do this automatically.) [...] > I'm unsure how more specific and featureful distro tags will help, unless you start building more than one binary version of a wheel. From a distro point of view I only can discourage using such wheels outside a virtual environment, and I would like to see a possibility to easily identify such wheels, something like loading a binary kernel module is tainting the kernel. This way distros can point users to the provider of such wheels. > > This is not a "this doesn't cater to my one specific environment" position. Of course you probably get less push back from other distributions closer to the specific environment specified in the pep. But please acknowledge that there might be issues that you and me don't yet even see, and provide a possibility to identify such issues. I guess it should be possible to implement such a tool by scanning the .dist-info directories of installed packages? It's certainly true that installing random packages off pypi into system directories (and running those packages' setup.py code as root!) can create arbitrarily terrible breakage. (I guess wheels at least remove the root code execution. Why does pip even allow itself to be run as root? It's certainly not written in a paranoid security-first style.) And I agree that unexpected issues may arise in particular packages. But again it's not clear to me how wheels create a special/unique problem that calls for a special/unique countermeasure. Surely the first line of defense for detecting such things is just "does the traceback mention /usr/local", which is the same with or without wheels? 
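The dist-info scan suggested above could be prototyped in a few lines (a hypothetical sketch: the function name is invented, nothing like this ships with pip today):

```python
# Hypothetical sketch: flag installed distributions whose WHEEL metadata
# carries a manylinux platform tag, by scanning .dist-info directories.
import sysconfig
from pathlib import Path

def find_manylinux_installs(site_packages=None):
    root = Path(site_packages or sysconfig.get_paths()["purelib"])
    flagged = []
    for wheel_meta in sorted(root.glob("*.dist-info/WHEEL")):
        # A wheel's WHEEL file records one "Tag:" line per compatibility
        # tag, e.g. "Tag: cp27-cp27mu-manylinux1_x86_64".
        for line in wheel_meta.read_text().splitlines():
            if line.startswith("Tag:") and "manylinux" in line:
                flagged.append(wheel_meta.parent.name)
                break
    return flagged
```

Run against a distro's site-packages/dist-packages directory, this would give something like the "lsmod for wheels" listing Matthias asks for, without needing interpreter patches.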
I assume you are already used to detecting "sudo pip" users in bug reports and pointing them to the provider of these wheels or sdists? What about your workflow are you hoping to change? -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Feb 16 17:49:08 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 17 Feb 2016 11:49:08 +1300 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 16 February 2016 at 22:40, Paul Moore wrote: > On 16 February 2016 at 03:10, Robert Collins wrote: >> -The file ``pypa.json`` acts as neutron configuration file for pip and other >> +The file ``pypa.json`` acts as neutral configuration file for pip and other > > Aw, I was looking forward to controlling my nuclear power plant with pip :-( > > Oh, and "acts as a" rather than just "acts as". Fixed. >> +We discussed having an sdist verb. The main driver for this was to make sure >> +that build systems were able to produce sdists that pip can build - but this is >> +circular: the whole point of this PEP is to let pip consume such sdists >> +reliably and without requiring an implementation of setuptools. Further, while >> +most everyone agrees that encouraging sdists to be uploaded to PyPI, there > > s/most/almost/ > > "to be uploaded to PyPI"... what? The phrase isn't complete. > Presumably "is a good thing". > >> +wasn't complete consensus on that. > > And I didn't think there was any dispute over this. There were people > who didn't want to disallow binary-only projects, but that's hardly > the same as not encouraging people who *are* making sources public to > put them in the same place as the binaries. Yes, badly phrased. I've removed that whole bit, and just kept it to the core: the PEP describes how to consume source trees, it doesn't change pip's relationship to source trees - sdist or vcs or other. 
> I thought the key point was that we'd agreed to Nick's suggestion that > we add some comments to the existing specifications to note that you > could bundle up a source tree with a pypa.json and get something > sufficient for new pips to install, so this provided a sufficiently > well-defined "source upload format" to work until discussions on a new > source format came to fruition? > > Specifically, my expectation is that this PEP require that the > specification changes proposed by Nick be implemented. Sure, it's an > informational change to a document, but it's important that this PEP > acknowledge that the action was part of the consensus. So, I don't know what note you're looking for. The PEP /already/ documents that pip will be able to consume this, and the context is from sdists / source trees. Yes, we're going to document in the other relevant specs that we expect folk to upload sdists - thats also my understanding. My note in the PEP was specifically about the inclusion or not of an sdist verb in the interface. -Rob diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst index 56464f1..a69c150 100644 --- a/build-system-abstraction.rst +++ b/build-system-abstraction.rst @@ -68,7 +68,7 @@ modelled on pip's existing use of the setuptools setup.py interface. pypa.json --------- -The file ``pypa.json`` acts as neutral configuration file for pip and other +The file ``pypa.json`` acts as a neutral configuration file for pip and other tools that want to build source trees to consult for configuration. The absence of a ``pypa.json`` file in a Python source tree implies a setuptools or setuptools compatible build system. @@ -414,10 +414,13 @@ the difference older pip versions require. We discussed having an sdist verb. The main driver for this was to make sure that build systems were able to produce sdists that pip can build - but this is -circular: the whole point of this PEP is to let pip consume such sdists -reliably and without requiring an implementation of setuptools. Further, while -most everyone agrees that encouraging sdists to be uploaded to PyPI, there -wasn't complete consensus on that. +circular: the whole point of this PEP is to let pip consume such sdists or VCS +source trees reliably and without requiring an implementation of setuptools. +Being able to create new sdists from existing source trees isn't a thing pip +does today, and while there is a PR to do that as part of building from +source, it is contentious and lacks consensus. Rather than impose a +requirement on all build systems, we are treating it as a YAGNI, and will add +such a verb in a future version of the interface if required. References ========== -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Tue Feb 16 17:51:09 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 17 Feb 2016 11:51:09 +1300 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: On 13 February 2016 at 04:12, Daniel Holth wrote: > My setup-requires wrapper, adding pre-setup.py dependency installation to > setup.py, relies on this feature. It needs to install something in a > directory that is only added to PYTHONPATH during the installation and does > not interfere with the normal environment. Python packages that create (and need) scripts, or use data_files, won't work with --target. Using a --prefix and setting PYTHONPATH *and* PATH appropriately would be better. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Tue Feb 16 17:52:11 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 17 Feb 2016 11:52:11 +1300 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: On 13 February 2016 at 03:54, Steve Dower wrote: > I was also planning to use it in an upcoming project that has to "do its > own" package management. The aim was to install different versions of > packages in different directories and use sys.path modifications to resolve > them at runtime (kind of like what setuptools did in the older days). > > An alternative would be great, though I can probably fake things somehow for > my purposes.
Sounds similar to Daniel's need - and again, --prefix + setting PATH and PYTHONPATH would be better. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From encukou at gmail.com Tue Feb 16 09:55:07 2016 From: encukou at gmail.com (Petr Viktorin) Date: Tue, 16 Feb 2016 15:55:07 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: <56C3384B.8010700@gmail.com> On 02/16/2016 03:20 PM, Paul Moore wrote: > On 16 February 2016 at 14:14, Wayne Werner wrote: >> I've learned that *usually* linux distro repos lag way behind in updating >> their Python packages, so unless I *can't* install the package via pip, >> that's what I do. > > Yeah, and that's what I'd count as an issue between you and your > distro. If they don't provide sufficiently up to date versions for > you, and you choose to deal with that in whatever way you prefer, > that's fine by me. As a fedora packager, I'd definitely prefer if you used "pip install --user" instead of "sudo pip install". > I don't see why the Python community shouldn't provide a solution that > you can use in such a situation, simply because it's not the solution > your distro would prefer you to use. So, what is the argument against "pip install --user"? Does that not leave everyone happy? Of course it's your system and you're free to do whatever you want, and I'm sure you can debug any resulting issues successfully. But there's a bunch of people spending time to get all kinds of packages working well together for everyone (even people who don't care about latest versions of Python packages), and recommending "sudo pip" is making that job harder. >> Of course, to my knowledge I've never replaced a system installed version >> of anything.
Though, considering I've been using Python3 since it was available >> and most distros use Python 2, that may not really be saying much :) > > I thought the distro "hands off" rules applied even to adding things > to system-managed directories, not just to overwriting files? Definitely. You can't know what the distro will add in the future; some unrelated package might bring in a (possibly newly packaged) dependency that replaces whatever you installed. > Anyway, I've already made more inflammatory comments than an outsider > should, so I'll leave the debate to the Unix users at this point. From phihag at phihag.de Tue Feb 16 17:31:22 2016 From: phihag at phihag.de (Philipp Hagemeister) Date: Tue, 16 Feb 2016 23:31:22 +0100 Subject: [Distutils] multiple backports of ipaddress and a world of pain In-Reply-To: <56C36C7D.1000307@simplistix.co.uk> References: <56C36C7D.1000307@simplistix.co.uk> Message-ID: <56C3A33A.50907@phihag.de> Code that uses py2-ipaddress will break upon migrating to Python 3, and potentially in really subtle ways. For instance,

    import py2_ipaddress as ipaddress
    ipaddress.ip_address(b'\x3a\x3a\x31\x32')
    ipaddress.ip_address(open('file', 'rb').read(4))

has different semantics in Python 2 and Python 3. Also note that if you actually want to generate an ipaddress object from a binary representation, py2-ipaddress' "solution"

    ipaddress.ip_address(bytearray(b'\xff\x00\x00\x01'))

will break as well under Python 3, but at least it will throw an exception and not silently do something different. Therefore, code that uses py2-ipaddress needs to be fixed anyways in order to work correctly under Python 3 - might as well do it now. py2-ipaddress' API is incompatible with the ipaddress from the stdlib, so I don't think it should claim the module name ipaddress in the first place. Why one would actively introduce incompatibilities between Python 2 and Python 3 *after Python 3 has long been released* is beyond my understanding anyways.
Specifically for django-netfields, a workaround is to always use character strings (unicode type in Python 2, str in Python 3). Greetings from Düsseldorf, Philipp On 16.02.2016 19:37, Chris Withers wrote: > Hi All, > > (Apologies for copying in the maintainers of the two backports and > django-netfields directly, I'm not sure you're on this distutils list...) > > This is painful and horrible, and I wish pip would prevent > modules/packages with the same name being installed by different > distributions at the same time, but even if it did, that would just > force something to happen rather than this: > > So, RHEL7, for worse or worse, ships with Python 2.7.5. That means to > keep pip happy, you need to do these dances in all the virtualenvs you > create: > > http://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning > > http://urllib3.readthedocs.org/en/latest/security.html#pyopenssl > > One of those extra packages drags in this backport: > > https://pypi.python.org/pypi/ipaddress > > Yay! Now we have a happy pip talking to both PyPI and our internal DevPI > server! > > Right, so in a Django project I need to use > https://pypi.python.org/pypi/django-netfields. This, however, chooses > this backport instead: > > https://pypi.python.org/pypi/py2-ipaddress > > So, now we have two packages installing ipaddress.py, except they're two > very different versions and make different assumptions about what to do > with Python 2 strings. > > What should happen here? (other than me crying a lot...) > > Chris > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From glyph at twistedmatrix.com Tue Feb 16 19:10:34 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Tue, 16 Feb 2016 16:10:34 -0800 Subject: [Distutils] [final version?]
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56C30265.6020008@ubuntu.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> Message-ID: <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> > On Feb 16, 2016, at 3:05 AM, Matthias Klose wrote: > > On 02.02.2016 02:35, Glyph Lefkowitz wrote: >> >>> On Feb 1, 2016, at 3:37 PM, Matthias Klose wrote: >>> >>> On 30.01.2016 00:29, Nathaniel Smith wrote: >>>> Hi all, >>>> >>>> I think this is ready for pronouncement now -- thanks to everyone for >>>> all their feedback over the last few weeks! >>> >>> I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). >> >> Possibly, but it would be very helpful for such maintainers to limit their critique to "in what scenarios will this fail for users" and not have the whole peanut gallery chiming in with "well on _my_ platform we would have done it _this_ way". >> >> I respect what you've done for Debian and Ubuntu, Matthias, and I use the heck out of that work, but honestly this whole message just comes across as sour grapes that someone didn't pick a super-old Debian instead of a super-old Red Hat. I don't think it's promoting any progress. > > You may call this sour grapes, but in the light of people installing > these wheels to replace/upgrade system installed eggs, it becomes an issue. It's fine to use such wheels in a virtual environment, however people tell users to use these wheels to replace system installed packages, distros will have a problem identifying issues. I am 100% on board with telling people "don't use `sudo pip install?". Frankly I have been telling the pip developers to just break this for years (see https://pip2014.com, which, much to my chagrin, still exists); `sudo pip install? 
should just exit immediately with an error; to the extent that packagers need it, the only invocation that should work should be `sudo pip install --i-am-building-an-operating-system´. But `sudo pip install´ of arbitrary packages is now, and always has been, basically broken; this PEP doesn't change that in any way I can see. Specifically, since there are tools in place to ensure that the extension modules will load just fine, this won't be any more broken than `sudo pip install´-ing random C extension modules is today. If anything it will be more reliable, since a lot of people already build and ship wheels to their production linux environments, and don't always understand the nuances around having to build on a system with a native package set that exactly matches their target environment. > There is a substantial amount of extensions built using C++; I didn't check how many of these in c++0x/c++11 mode. Until GCC 5, the c++11 ABI wasn't stable, and upstream never promised forward compatibility, something that even distros have to care about (usually by rebuilding packages before a release). So if you want a lowest common denominator, then maybe limit or recommend the use of c++98 only. Isn't this irrelevant as long as your entry-points are all 'extern "C"' and your C++ code statically links libstdc++? The build toolchain in question doesn't include a dynamic libstdc++, does it? If so, that's a pretty concrete problem with this proposal and it should be addressed. >>> The proposal just takes some environment and declares that as a standard. So everybody wanting to supply these wheels basically has to use this environment. >> >> There's already been lots of discussion about how this environment is a lowest common denominator. Many other similar environments could _also_ be lowest common denominator. > > sure, but then please call it what it is. centos5 or somelinux1. The point of the wheel tag is that its output should work on many linuxes. 
A 'centos5' tag would imply that you can use arbitrary dynamic libraries (and perhaps even arbitrary packages!) from centos5, of which there are many; you can't, because auditwheel will yell at you. It's the build environment plus restrictions around what you can depend on from that environment. >> In the future, more specific and featureful distro tags sound like a good idea. But could we please stop making the default position on distutils-sig "this doesn't cater to my one specific environment in the most optimal possible way, so let's give up on progress entirely"? This is a good proposal that addresses environment portability and gives Python a substantially better build-artifact story than it currently has, in the environment most desperately needing one (server-side linux). Could it be better? Of course. It could be lots better. There are lots of use-cases for dynamically linked wheels and fancy new platform library features in newer linuxes. But that can all come later, and none of it needs to have an impact on this specific proposal, right now. > > I'm unsure how more specific and featureful distro tags will help, unless you start building more than one binary version of a wheel. Yes, that would be the point of the tags. When an ISV is targeting multiple platforms, they build multiple artifacts. Distros are different platforms (with a common small subset platform, which we are calling 'manylinux' here) and so eventually being able to build things which target more than one of them would be a good thing. But this is starting to get a bit wide of the real point... > From a distro point of view I only can discourage using such wheels outside a virtual environment, and I would like to see a possibility to easily identify such wheels, something like loading a binary kernel module is tainting the kernel. This way distros can point users to the provider of such wheels. Something like https://github.com/pypa/auditwheel you mean? 
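[The check auditwheel automates is conceptually simple: list the external shared libraries an extension links against and compare them with the whitelist in PEP 513. A sketch of that idea, not auditwheel's actual code, with only a subset of the whitelist shown:]

```python
# Sketch of the kind of check auditwheel performs: a wheel is only
# manylinux1-eligible if its external shared-library dependencies all
# appear on the PEP 513 whitelist. This is NOT auditwheel's real
# implementation, and the set below is only a subset of the PEP's list.
MANYLINUX1_WHITELIST = {
    "libc.so.6", "libm.so.6", "libdl.so.2", "libpthread.so.0",
    "librt.so.1", "libstdc++.so.6", "libgcc_s.so.1",
    "libglib-2.0.so.0", "libX11.so.6",
}

def offending_libs(needed):
    """Return the DT_NEEDED entries that make a wheel non-manylinux1."""
    return sorted(set(needed) - MANYLINUX1_WHITELIST)

print(offending_libs(["libc.so.6", "libstdc++.so.6", "libssl.so.1.0.0"]))
# -> ['libssl.so.1.0.0']: depends on a library the PEP does not
#    guarantee to exist on every manylinux1-compatible system.
```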
> This is not a "this doesn't cater to my one specific environment" position. Of course you probably get less push back from other distributions closer to the specific environment specified in the pep. But please acknowledge that there might be issues that you and me don't yet even see, and provide a possibility to identify such issues. The possibility to identify such issues would be to report bugs on https://github.com/pypa/auditwheel/issues and https://github.com/pypa/manylinux/issues and fix them. Fundamentally I don't see anything wrong here; it is of course possible that it misses something, but if it misses something, it would be something *specific* (like the potential C++ issue you pointed out, which I'm not 100% sure about). > At least Debian/Ubuntu took a long ride to avoid "accidental" interaction with local Python installations and local installations into the system installed Python. For me this PEP feels like a step back, promising too much (manylinux), not pointing out possible issues, and giving no help in identifying possible issues with these wheels. This whole section is about a tool to automatically identify possible issues with these wheels - https://www.python.org/dev/peps/pep-0513/#auditwheel - so I don't even really know what you mean by this comment. I thought that the existence of this tool is one of the best parts of this PEP! -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From noah at coderanger.net Tue Feb 16 19:13:35 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 16 Feb 2016 16:13:35 -0800 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> Message-ID: <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> > On Feb 16, 2016, at 4:10 PM, Glyph Lefkowitz wrote: > >> >> On Feb 16, 2016, at 3:05 AM, Matthias Klose wrote: >> >> On 02.02.2016 02:35, Glyph Lefkowitz wrote: >>> >>>> On Feb 1, 2016, at 3:37 PM, Matthias Klose wrote: >>>> >>>> On 30.01.2016 00:29, Nathaniel Smith wrote: >>>>> Hi all, >>>>> >>>>> I think this is ready for pronouncement now -- thanks to everyone for >>>>> all their feedback over the last few weeks! >>>> >>>> I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). >>> >>> Possibly, but it would be very helpful for such maintainers to limit their critique to "in what scenarios will this fail for users" and not have the whole peanut gallery chiming in with "well on _my_ platform we would have done it _this_ way". >>> >>> I respect what you've done for Debian and Ubuntu, Matthias, and I use the heck out of that work, but honestly this whole message just comes across as sour grapes that someone didn't pick a super-old Debian instead of a super-old Red Hat. I don't think it's promoting any progress. >> >> You may call this sour grapes, but in the light of people installing >> these wheels to replace/upgrade system installed eggs, it becomes an issue. It's fine to use such wheels in a virtual environment, however people tell users to use these wheels to replace system installed packages, distros will have a problem identifying issues. > > I am 100% on board with telling people "don't use `sudo pip install´". 
Frankly I have been telling the pip developers to just break this for years (see https://pip2014.com, which, much to my chagrin, still exists); `sudo pip install´ should just exit immediately with an error; to the extent that packagers need it, the only invocation that should work should be `sudo pip install --i-am-building-an-operating-system´. As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From glyph at twistedmatrix.com Tue Feb 16 19:27:22 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Tue, 16 Feb 2016 16:27:22 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28wa?= =?utf-8?b?cyBSZTogIFtmaW5hbCB2ZXJzaW9uP10gUEVQIDUxM+KApik=?= In-Reply-To: <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> Message-ID: > On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz wrote: > > As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. I don't know if this is the right venue for this discussion, but I do think it would be super valuable to hash this out for good. Why does supervisord need to be installed in the global Python environment? -glyph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noah at coderanger.net Tue Feb 16 19:33:53 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 16 Feb 2016 16:33:53 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> Message-ID: > On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz wrote: > > >> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz wrote: >> >> As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. > > I don't know if this is the right venue for this discussion, but I do think it would be super valuable to hash this out for good. > > Why does supervisord need to be installed in the global Python environment? Where else would it go? I wouldn't want to assume virtualenv is installed unless absolutely needed. Virtualenv is a project-centric view of the world which breaks down for stuff that is actually global like system command line tools. Compare with `npm install -g grunt-cli`. --Noah -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From glyph at twistedmatrix.com Tue Feb 16 19:46:31 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Tue, 16 Feb 2016 16:46:31 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> Message-ID: <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> > On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz > wrote: > > >> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz > wrote: >> >> >>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz > wrote: >>> >>> As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. >> >> I don't know if this is the right venue for this discussion, but I do think it would be super valuable to hash this out for good. >> >> Why does supervisord need to be installed in the global Python environment? > > Where else would it go? I wouldn't want to assume virtualenv is installed unless absolutely needed. This I can understand, but: in this case, it is needed ;). > Virtualenv is a project-centric view of the world which breaks down for stuff that is actually global like system command line tools. [citation needed]. In what way does it "break down"? https://pypi.python.org/pypi/pipsi is a nice proof-of-concept that dedicated virtualenvs are a better model for tooling than a big-ball-of-mud integrated system environment that may have multiple conflicting requirements. 
Unfortunately it doesn't directly address this use-case because it assumes that it is doing per-user installations and not a system-global one, but the same principle holds: what version of `ipaddress´ that supervisord wants to use is irrelevant to the tools that came with your operating system, and similarly irrelevant to your application. To be clear, what I'm proposing here is not "shove supervisord into a venv with the rest of your application", but rather, "each application should have its own venv". In supervisord's case, "python" is an implementation detail, and therefore the public interface is /usr/bin/supervisord and /usr/bin/supervisorctl, not 'import supervisord'; those should just be symlinks into /usr/lib/supervisord/environment/bin/ In fact, given that it is security-sensitive code that runs as root, it is extra important to isolate supervisord from your system environment for defense in depth, so that, for example, if, due to a bug, it can be coerced into importing an arbitrarily-named module, it has a restricted set and won't just load anything off the system. > Compare with `npm install -g grunt-cli`. npm is different because npm doesn't create top-level script binaries unless you pass the -g option, so you need to install global tooling stuff with -g. virtualenv is different (and, at least in this case, better). -glyph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noah at coderanger.net Tue Feb 16 20:00:24 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 16 Feb 2016 17:00:24 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> Message-ID: > On Feb 16, 2016, at 4:46 PM, Glyph Lefkowitz wrote: > >> >> On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz wrote: >> >> >>> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz wrote: >>> >>> >>>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz wrote: >>>> >>>> As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. >>> >>> I don't know if this is the right venue for this discussion, but I do think it would be super valuable to hash this out for good. >>> >>> Why does supervisord need to be installed in the global Python environment? >> >> Where else would it go? I wouldn't want to assume virtualenv is installed unless absolutely needed. > > This I can understand, but: in this case, it is needed ;). > >> Virtualenv is a project-centric view of the world which breaks down for stuff that is actually global like system command line tools. > > [citation needed]. In what way does it "break down"? https://pypi.python.org/pypi/pipsi is a nice proof-of-concept that dedicated virtualenvs are a better model for tooling than a big-ball-of-mud integrated system environment that may have multiple conflicting requirements. 
Unfortunately it doesn't directly address this use-case because it assumes that it is doing per-user installations and not a system-global one, but the same principle holds: what version of `ipaddress´ that supervisord wants to use is irrelevant to the tools that came with your operating system, and similarly irrelevant to your application. > > To be clear, what I'm proposing here is not "shove supervisord into a venv with the rest of your application", but rather, "each application should have its own venv". In supervisord's case, "python" is an implementation detail, and therefore the public interface is /usr/bin/supervisord and /usr/bin/supervisorctl, not 'import supervisord'; those should just be symlinks into /usr/lib/supervisord/environment/bin/ That isn't a thing that exists currently, I would have to make it myself and I wouldn't expect users to assume that is how I made it work. Given the various flavors of user expectations and standards that exist for deploying Python code, global does the least harm right now. > In fact, given that it is security-sensitive code that runs as root, it is extra important to isolate supervisord from your system environment for defense in depth, so that, for example, if, due to a bug, it can be coerced into importing an arbitrarily-named module, it has a restricted set and won't just load anything off the system. Sounds cute but the threats that actually helps with seem really minor. If a user can install stuff as root, they can probably do whatever they want thanks to .pth files and other terrible things. >> Compare with `npm install -g grunt-cli`. > > npm is different because npm doesn't create top-level script binaries unless you pass the -g option, so you need to install global tooling stuff with -g. virtualenv is different (and, at least in this case, better). Pip also doesn't generate binstubs in /usr/bin unless you install globally so pretty much same difference. 
--Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From glyph at twistedmatrix.com Tue Feb 16 21:12:55 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Tue, 16 Feb 2016 18:12:55 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> Message-ID: <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> > On Feb 16, 2016, at 5:00 PM, Noah Kantrowitz > wrote: > > >> On Feb 16, 2016, at 4:46 PM, Glyph Lefkowitz > wrote: >> >>> >>> On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz > wrote: >>> >>> >>>> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz > wrote: >>>> >>>> >>>>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz > wrote: >>>>> >>>>> As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. >>>> >>>> I don't know if this is the right venue for this discussion, but I do think it would be super valuable to hash this out for good. >>>> >>>> Why does supervisord need to be installed in the global Python environment? >>> >>> Where else would it go? I wouldn't want to assume virtualenv is installed unless absolutely needed. >> >> This I can understand, but: in this case, it is needed ;). >> >>> Virtualenv is a project-centric view of the world which breaks down for stuff that is actually global like system command line tools. >> >> [citation needed]. 
In what way does it "break down"? https://pypi.python.org/pypi/pipsi is a nice proof-of-concept that dedicated virtualenvs are a better model for tooling than a big-ball-of-mud integrated system environment that may have multiple conflicting requirements. Unfortunately it doesn't directly address this use-case because it assumes that it is doing per-user installations and not a system-global one, but the same principle holds: what version of `ipaddress´ that supervisord wants to use is irrelevant to the tools that came with your operating system, and similarly irrelevant to your application. >> >> To be clear, what I'm proposing here is not "shove supervisord into a venv with the rest of your application", but rather, "each application should have its own venv". In supervisord's case, "python" is an implementation detail, and therefore the public interface is /usr/bin/supervisord and /usr/bin/supervisorctl, not 'import supervisord'; those should just be symlinks into /usr/lib/supervisord/environment/bin/ > > That isn't a thing that exists currently, I would have to make it myself and I wouldn't expect users to assume that is how I made it work. Given the various flavors of user expectations and standards that exist for deploying Python code, global does the least harm right now. I don't think users who install supervisord necessarily think they ought to be able to import supervisord. If they do expect that, they should probably revise their expectations. Here, I'll make it for you. Assuming virtualenv is installed:

python -m virtualenv /usr/lib/supervisord/environment
/usr/lib/supervisord/environment/bin/pip install supervisord
ln -vs /usr/lib/supervisord/environment/bin/supervisor* /usr/bin

More tooling around this idiom would of course be nifty, but this is really all it takes. 
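[The same "one virtualenv per application" idiom can be sketched with the stdlib `venv` module (Python 3) instead of virtualenv; the helper below and its dry-run mode are illustrative, not an existing tool, and the paths match the example in the message:]

```python
import os
import subprocess
import venv

def install_isolated(app, scripts, prefix="/usr/lib", bindir="/usr/bin",
                     dry_run=False):
    """Give *app* its own virtualenv and expose only its console scripts
    as symlinks, mirroring the three commands above. Illustrative only."""
    env = os.path.join(prefix, app, "environment")
    # Map public entry points to their venv-internal locations.
    links = {os.path.join(bindir, s): os.path.join(env, "bin", s)
             for s in scripts}
    if not dry_run:
        venv.EnvBuilder(with_pip=True).create(env)   # ~ python -m virtualenv ...
        subprocess.check_call(
            [os.path.join(env, "bin", "pip"), "install", app])  # ~ .../bin/pip install
        for public, private in links.items():
            os.symlink(private, public)              # ~ ln -vs ...
    return links

# Dry run: just show where the public entry points would point.
print(install_isolated("supervisord", ["supervisord", "supervisorctl"],
                       dry_run=True))
```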
>> In fact, given that it is security-sensitive code that runs as root, it is extra important to isolate supervisord from your system environment for defense in depth, so that, for example, if, due to a bug, it can be coerced into importing an arbitrarily-named module, it has a restricted set and won't just load anything off the system. > > Sounds cute but the threats that actually helps with seem really minor. If a user can install stuff as root, they can probably do whatever they want thanks to .pth files and other terrible things. Once malicious code is installed in a root-executable location it's game over; I didn't mean to imply otherwise. I'm saying that since supervisord might potentially import anything in its site-packages dir, this is just less code for you to worry about that might have security bugs in it. One specific example of how you might do this is by specifying a protocol-defined codec; if you ever do .decode(user_data) on a string you're doing an attacker-controlled dynamic import. This is a bug, of course, but a harmless one if you have a small set of modules with no surprises lurking in store. But, if the attacker can 'import qt' (whose default behavior was to abort() if it couldn't open $DISPLAY for many years, not sure if it still is) from the system, or anything like that, you have potential crashes on your hands. >>> Compare with `npm install -g grunt-cli`. >> >> npm is different because npm doesn't create top-level script binaries unless you pass the -g option, so you need to install global tooling stuff with -g. virtualenv is different (and, at least in this case, better). > > Pip also doesn't generate binstubs in /usr/bin unless you install globally so pretty much same difference. Pip always generates binstubs into whatever prefix you're installing into, whereas npm sometimes doesn't generate binstubs at all; when it does generate them, it puts them in a package-specific directory and not in a common location. 
(I don't fully understand the specifics; npm generates local binstubs for coffeescript but not for grunt, for example.) It's fine to symlink pip's stubs. Is making the symlink really the sticking point? So far, in the "use virtualenv" column, I've got:

- don't break tooling written in python in the host operating system by installing a conflicting dependency by accident
- don't break the host operating system's package database by potentially overwriting packages installed by the package manager
- don't break other pip-installed tools using the system or --user environments
- don't let installing or upgrading any of those things accidentally break the tool (supervisord in this case) later
- make provisioning possible by an unprivileged user, reducing the amount of code that needs to run as root (you can make /usr/lib/supervisord/environment writable by a dedicated user during the install process, to ensure that it doesn't provision anything outside of that directory). This is potentially useful because some setup.py scripts "helpfully" end up doing other weird stuff besides installing the package - there are a few who will remain nameless to protect the guilty which write a bunch of files into root's home directory, for example.
- reduce the potential attack surface of any application with plugins by reducing the number of things that can get imported.

and on the "use sudo pip install" side, I've got:

- don't have to make a symlink
- users expect applications to install importable modules

What am I missing? -glyph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noah at coderanger.net Tue Feb 16 21:22:29 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 16 Feb 2016 18:22:29 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> Message-ID: <87ECB934-BE24-42FF-9259-2336FCAA61F8@coderanger.net> > On Feb 16, 2016, at 6:12 PM, Glyph Lefkowitz wrote: > >> >> On Feb 16, 2016, at 5:00 PM, Noah Kantrowitz wrote: >> >> >>> On Feb 16, 2016, at 4:46 PM, Glyph Lefkowitz wrote: >>> >>>> >>>> On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz wrote: >>>> >>>> >>>>> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz wrote: >>>>> >>>>> >>>>>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz wrote: >>>>>> >>>>>> As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. >>>>> >>>>> I don't know if this is the right venue for this discussion, but I do think it would be super valuable to hash this out for good. >>>>> >>>>> Why does supervisord need to be installed in the global Python environment? >>>> >>>> Where else would it go? I wouldn't want to assume virtualenv is installed unless absolutely needed. >>> >>> This I can understand, but: in this case, it is needed ;). >>> >>>> Virtualenv is a project-centric view of the world which breaks down for stuff that is actually global like system command line tools. >>> >>> [citation needed]. In what way does it "break down"? 
https://pypi.python.org/pypi/pipsi is a nice proof-of-concept that dedicated virtualenvs are a better model for tooling than a big-ball-of-mud integrated system environment that may have multiple conflicting requirements. Unfortunately it doesn't directly address this use-case because it assumes that it is doing per-user installations and not a system-global one, but the same principle holds: what version of `ipaddress´ that supervisord wants to use is irrelevant to the tools that came with your operating system, and similarly irrelevant to your application. >>> >>> To be clear, what I'm proposing here is not "shove supervisord into a venv with the rest of your application", but rather, "each application should have its own venv". In supervisord's case, "python" is an implementation detail, and therefore the public interface is /usr/bin/supervisord and /usr/bin/supervisorctl, not 'import supervisord'; those should just be symlinks into /usr/lib/supervisord/environment/bin/ >> >> That isn't a thing that exists currently, I would have to make it myself and I wouldn't expect users to assume that is how I made it work. Given the various flavors of user expectations and standards that exist for deploying Python code, global does the least harm right now. > > I don't think users who install supervisord necessarily think they ought to be able to import supervisord. If they do expect that, they should probably revise their expectations. > > Here, I'll make it for you. Assuming virtualenv is installed: >
> python -m virtualenv /usr/lib/supervisord/environment
> /usr/lib/supervisord/environment/bin/pip install supervisord
> ln -vs /usr/lib/supervisord/environment/bin/supervisor* /usr/bin
>
> More tooling around this idiom would of course be nifty, but this is really all it takes. 
> >>> In fact, given that it is security-sensitive code that runs as root, it is extra important to isolate supervisord from your system environment for defense in depth, so that, for example, if, due to a bug, it can be coerced into importing an arbitrarily-named module, it has a restricted set and won't just load anything off the system. >> >> Sounds cute but the threats that actually helps with seem really minor. If a user can install stuff as root, they can probably do whatever they want thanks to .pth files and other terrible things. > > Once malicious code is installed in a root-executable location it's game over; I didn't mean to imply otherwise. I'm saying that since supervisord might potentially import anything in its site-packages dir, this is just less code for you to worry about that might have security bugs in it. > > One specific example of how you might do this is by specifying a protocol-defined codec; if you ever do .decode(user_data) on a string you're doing an attacker-controlled dynamic import. This is a bug, of course, but a harmless one if you have a small set of modules with no surprises lurking in store. But, if the attacker can 'import qt' (whose default behavior was to abort() if it couldn't open $DISPLAY for many years, not sure if it still is) from the system, or anything like that, you have potential crashes on your hands. > >>>> Compare with `npm install -g grunt-cli`. >>> >>> npm is different because npm doesn't create top-level script binaries unless you pass the -g option, so you need to install global tooling stuff with -g. virtualenv is different (and, at least in this case, better). >> >> Pip also doesn't generate binstubs in /usr/bin unless you install globally so pretty much same difference. > > Pip always generates binstubs into whatever prefix you're installing into, whereas npm sometimes doesn't generate binstubs at all; when it does generate them, it puts them in a package-specific directory and not in a common location. 
(I don't fully understand the specifics; npm generates local binstubs for coffeescript but not for grunt, for example.) It's fine to symlink pip's stubs. Is making the symlink really the sticking point? > > So far, in the "use virtualenv" column, I've got: >
> - don't break tooling written in python in the host operating system by installing a conflicting dependency by accident
> - don't break the host operating system's package database by potentially overwriting packages installed by the package manager
> - don't break other pip-installed tools using the system or --user environments
> - don't let installing or upgrading any of those things accidentally break the tool (supervisord in this case) later
> - make provisioning possible by an unprivileged user, reducing the amount of code that needs to run as root (you can make /usr/lib/supervisord/environment writable by a dedicated user during the install process, to ensure that it doesn't provision anything outside of that directory). This is potentially useful because some setup.py scripts "helpfully" end up doing other weird stuff besides installing the package - there are a few who will remain nameless to protect the guilty which write a bunch of files into root's home directory, for example.
> - reduce the potential attack surface of any application with plugins by reducing the number of things that can get imported.
>
> and on the "use sudo pip install" side, I've got:
>
> - don't have to make a symlink
> - users expect applications to install importable modules

I'm not concerned with if the module is importable specifically, but I am concerned with where the files will live overall. When building generic ops tooling, being unsurprising is almost always the right move and I would be surprised if supervisor installed to a custom virtualenv. It's a weird side effect of Python not having a great solution for "application packaging" I guess? 
We've got standards for web-ish applications, but not much for system services. I'm not saying I think creating an isolated "global-ish" environment would be worse, I'm saying nothing does that right now and I personally don't want to be the first because that brings a lot of pain with it :-) --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Tue Feb 16 20:32:46 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 16 Feb 2016 17:32:46 -0800 Subject: [Distutils] Alternative build system abstraction PEP Message-ID: Hi all, Finally found the time to sit down and take the various drafts I've sent of this to the list before, add a more detailed rationale section, and turn it into a pull request: https://github.com/pypa/interoperability-peps/pull/63 I've also included the text below for reference, in case people want to quote-and-comment here rather than in the PR. The main differences from Robert's proposal are described in the sections "Build backend interface" and "Comparison to competing proposals". -n --- PEP: ?? Title: A build-system independent format for source trees Version: $Revision$ Last-Modified: $Date$ Author: Nathaniel J.
Smith
Status: Draft
Type: Standards-Track
Content-Type: text/x-rst
Created: 30-Sep-2015
Post-History: 1 Oct 2015, 25 Oct 2015
Discussions-To:

==========
Abstract
==========

While ``distutils`` / ``setuptools`` have taken us a long way, they suffer from three serious problems: (a) they're missing important features like usable build-time dependency declaration, autoconfiguration, and even basic ergonomic niceties like `DRY `_-compliant version number management, and (b) extending them is difficult, so while there do exist various solutions to the above problems, they're often quirky, fragile, and expensive to maintain, and yet (c) it's very difficult to use anything else, because distutils/setuptools provide the standard interface for installing packages expected by both users and installation tools like ``pip``.

Previous efforts (e.g. distutils2 or setuptools itself) have attempted to solve problems (a) and/or (b). This proposal aims to solve (c).

The goal of this PEP is to get distutils-sig out of the business of being a gatekeeper for Python build systems. If you want to use distutils, great; if you want to use something else, then that should be easy to do using standardized methods. The difficulty of interfacing with distutils means that there aren't many such systems right now, but to give a sense of what we're thinking about see `flit `_ or `bento `_. Fortunately, wheels have now solved many of the hard problems here -- e.g. it's no longer necessary that a build system also know about every possible installation configuration -- so pretty much all we really need from a build system is that it have some way to spit out standard-compliant wheels and sdists.

We therefore propose a new, relatively minimal interface for installation tools like ``pip`` to interact with package source trees and source distributions.

=======================
Terminology and goals
=======================

A *source tree* is something like a VCS checkout.
We need a standard interface for installing from this format, to support usages like ``pip install some-directory/``.

A *source distribution* is a static snapshot representing a particular release of some source code, like ``lxml-3.4.4.zip``. Source distributions serve many purposes: they form an archival record of releases, they provide a stupid-simple de facto standard for tools that want to ingest and process large corpora of code, possibly written in many languages (e.g. code search), they act as the input to downstream packaging systems like Debian/Fedora/Conda/..., and so forth. In the Python ecosystem they additionally have a particularly important role to play, because packaging tools like ``pip`` are able to use source distributions to fulfill binary dependencies, e.g. if there is a distribution ``foo.whl`` which declares a dependency on ``bar``, then we need to support the case where ``pip install bar`` or ``pip install foo`` automatically locates the sdist for ``bar``, downloads it, builds it, and installs the resulting package. Source distributions are also known as *sdists* for short.

A *build frontend* is a tool that users might run that takes arbitrary source trees or source distributions and builds wheels from them. The actual building is done by each source tree's *build backend*. In a command like ``pip wheel some-directory/``, pip is acting as a build frontend.

An *integration frontend* is a tool that users might run that takes a set of package requirements (e.g. a requirements.txt file) and attempts to update a working environment to satisfy those requirements. This may require locating, building, and installing a combination of wheels and sdists. In a command like ``pip install lxml==2.4.0``, pip is acting as an integration frontend.

==============
Source trees
==============

We retroactively declare the legacy source tree format involving ``setup.py`` to be "version 0".
We don't try to specify it further; its de facto specification is encoded in the source code and documentation of ``distutils``, ``setuptools``, ``pip``, and other tools. A "version 1" (or greater) source tree is any directory which contains a file named ``pypackage.json``. (If a tree contains both ``pypackage.json`` and ``setup.py`` then it is a version 1+ source tree and the ``setup.py`` is ignored; this allows packages to include a ``setup.py`` for compatibility with old build frontends, while using the new system with new build frontends.) This file has the following schema. Extra keys are ignored.

schema
    The version of the schema. This PEP defines version "1". Defaults to "1" when absent. All tools reading the file MUST error on an unrecognised schema version.

bootstrap_requires
    Optional list of PEP 508 dependency specifications that the build frontend must ensure are available before invoking the build backend. For instance, if using flit, then the requirements might be::

        "bootstrap_requires": ["flit"]

build_backend
    A mandatory string naming a Python object that will be used to perform the build (see below for details). This is formatted following the same ``module:object`` syntax as a ``setuptools`` entry point. For instance, if using flit, then the build backend might be specified as::

        "build_backend": "flit.api:main"

    and this object would be looked up by executing the equivalent of::

        import flit.api
        backend = flit.api.main

    It's also legal to leave out the ``:object`` part, e.g. ::

        "build_backend": "flit.api"

    which acts like::

        import flit.api
        backend = flit.api

    Formally, the string should satisfy this grammar::

        identifier  = (letter | '_') (letter | '_' | digit)*
        module_path = identifier ('.' identifier)*
        object_path = identifier ('.' identifier)*
        entry_point = module_path (':' object_path)?

    And we import ``module_path`` and then lookup ``module_path.object_path`` (or just ``module_path`` if ``object_path`` is missing).
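The lookup rule just described can be sketched in a few lines; ``resolve_backend`` is a hypothetical helper name, not part of the spec:

```python
import importlib

def resolve_backend(spec):
    # Split "module.path:object.path"; the ":object.path" part is
    # optional, per the grammar above.
    module_path, _, object_path = spec.partition(":")
    obj = importlib.import_module(module_path)
    if object_path:
        for attr in object_path.split("."):
            obj = getattr(obj, attr)
    return obj

# resolve_backend("os.path:join") is the os.path.join function;
# resolve_backend("os.path") is the os.path module itself.
```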
=========================
Build backend interface
=========================

The build backend object is expected to have attributes which provide some or all of the following hooks. The common ``config_settings`` argument is described after the individual hooks::

    def get_build_requires(config_settings):
        ...

This hook MUST return an additional list of strings containing PEP 508 dependency specifications, above and beyond those specified in the ``pypackage.json`` file. Example::

    def get_build_requires(config_settings):
        return ["wheel >= 0.25", "setuptools"]

Optional. If not defined, the default implementation is equivalent to ``return []``.

::

    def get_wheel_metadata(metadata_directory, config_settings):
        ...

Must create a ``.dist-info`` directory containing wheel metadata inside the specified ``metadata_directory`` (i.e., creates a directory like ``{metadata_directory}/{package}-{version}.dist-info/``). This directory MUST be a valid ``.dist-info`` directory as defined in the wheel specification, except that it need not contain ``RECORD`` or signatures. The hook MAY also create other files inside this directory, and a build frontend MUST ignore such files; the intention here is that in cases where the metadata depends on build-time decisions, the build backend may need to record these decisions in some convenient format for re-use by the actual wheel-building step. Return value is ignored. Optional. If a build frontend needs this information and the method is not defined, it should call ``build_wheel`` and look at the resulting metadata directly.

::

    def build_wheel(wheel_directory, config_settings, metadata_directory=None):
        ...

Must build a ``.whl`` file, and place it in the specified ``wheel_directory``. If the build frontend has previously called ``get_wheel_metadata`` and depends on the wheel resulting from this call to have metadata matching this earlier call, then it should provide the path to the previous ``metadata_directory`` as an argument.
If this argument is provided, then ``build_wheel`` MUST produce a wheel with identical metadata. The directory passed in by the build frontend MUST be identical to the directory created by ``get_wheel_metadata``, including any unrecognized files it created. Mandatory.

::

    def install_editable(prefix, config_settings, metadata_directory=None):
        ...

Must perform whatever actions are necessary to install the current project into the Python installation at ``prefix`` in an "editable" fashion. This is intentionally underspecified, because it's included as a stopgap to avoid regressing compared to the current equally underspecified setuptools ``develop`` command; hopefully a future PEP will replace this hook with something that works better and is better specified. (Unfortunately, cleaning up editable installs to actually work well and be well-specified turns out to be a large and difficult job, so we prefer not to do a half-way job here.) For the meaning and requirements of the ``metadata_directory`` argument, see ``build_wheel`` above. Optional. If not defined, then this build backend does not support editable builds.

config_settings
    This argument, which is passed to all hooks, is an arbitrary dictionary provided as an "escape hatch" for users to pass ad-hoc configuration into individual package builds. Build backends MAY assign any semantics they like to this dictionary. Build frontends SHOULD provide some mechanism for users to specify arbitrary string-key/string-value pairs to be placed in this dictionary. For example, they might support some syntax like ``--package-config CC=gcc``. Build frontends MAY also provide arbitrary other mechanisms for users to place entries in this dictionary.
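A hedged sketch of how a frontend might fold repeated ``--package-config KEY=VALUE`` flags into such a dictionary; ``parse_package_config`` is a hypothetical helper, and collapsing a single occurrence to a plain string (versus a one-element list) is just one possible convention:

```python
def parse_package_config(pairs):
    # Fold "KEY=VALUE" strings into a config_settings-shaped dict:
    # a key seen once maps to a string, a repeated key collects its
    # values into a list of strings.
    settings = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        if key not in settings:
            settings[key] = value
        elif isinstance(settings[key], list):
            settings[key].append(value)
        else:
            settings[key] = [settings[key], value]
    return settings

print(parse_package_config(
    ["CC=gcc",
     "--build-option=--build-option1",
     "--build-option=--build-option2"]))
```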
For example, ``pip`` might choose to map a mix of modern and legacy command line arguments like::

    pip install \
        --package-config CC=gcc \
        --global-option="--some-global-option" \
        --build-option="--build-option1" \
        --build-option="--build-option2"

into a ``config_settings`` dictionary like::

    {
        "CC": "gcc",
        "--global-option": ["--some-global-option"],
        "--build-option": ["--build-option1", "--build-option2"],
    }

Of course, it's up to users to make sure that they pass options which make sense for the particular build backend and package that they are building.

All hooks are run with working directory set to the root of the source tree, and MAY print arbitrary informational text on stdout and stderr. They MUST NOT read from stdin, and the build frontend MAY close stdin before invoking the hooks. If a hook raises an exception, or causes the process to terminate, then this indicates an error.

Build environment
=================

One of the responsibilities of a build frontend is to set up the Python environment in which the build backend will run. We do not require that any particular "virtual environment" mechanism be used; a build frontend might use virtualenv, or venv, or no special mechanism at all. But whatever mechanism is used MUST meet the following criteria:

- All requirements specified by the project's build-requirements must be available for import from Python. In particular:

  - The ``get_build_requires`` hook is executed in an environment which contains the bootstrap requirements specified in the ``pypackage.json`` file.

  - All other hooks are executed in an environment which contains both the bootstrap requirements specified in the ``pypackage.json`` file and those specified by the ``get_build_requires`` hook.

  - This must remain true even for new Python subprocesses spawned by the build environment, e.g.
code like::

      import sys, subprocess
      subprocess.check_call([sys.executable, ...])

    must spawn a Python process which has access to all the project's build-requirements. This is necessary e.g. for build backends that want to run legacy ``setup.py`` scripts in a subprocess.

- All command-line scripts provided by the build-required packages must be present in the build environment's PATH. For example, if a project declares a build-requirement on `flit `_, then the following must work as a mechanism for running the flit command-line tool::

      import subprocess
      subprocess.check_call(["flit", ...])

A build backend MUST be prepared to function in any environment which meets the above criteria. In particular, it MUST NOT assume that it has access to any packages except those that are present in the stdlib, or that are explicitly declared as build-requirements.

Recommendations for build frontends (non-normative)
---------------------------------------------------

A build frontend MAY use any mechanism for setting up a build environment that meets the above criteria. For example, simply installing all build-requirements into the global environment would be sufficient to build any compliant package -- but this would be sub-optimal for a number of reasons. This section contains non-normative advice to frontend implementors.

A build frontend SHOULD, by default, create an isolated environment for each build, containing only the standard library and any explicitly requested build-dependencies. This has two benefits:

- It allows for a single installation run to build multiple packages that have contradictory build-requirements. E.g. if package1 build-requires pbr==1.8.1, and package2 build-requires pbr==1.7.2, then these cannot both be installed simultaneously into the global environment -- which is a problem when the user requests ``pip install package1 package2``.
Or if the user already has pbr==1.8.1 installed in their global environment, and a package build-requires pbr==1.7.2, then downgrading the user's version would be rather rude. - It acts as a kind of public health measure to maximize the number of packages that actually do declare accurate build-dependencies. We can write all the strongly worded admonitions to package authors we want, but if build frontends don't enforce isolation by default, then we'll inevitably end up with lots of packages on PyPI that build fine on the original author's machine and nowhere else, which is a headache that no-one needs. However, there will also be situations where build-requirements are problematic in various ways. For example, a package author might accidentally leave off some crucial requirement despite our best efforts; or, a package might declare a build-requirement on `foo >= 1.0` which worked great when 1.0 was the latest version, but now 1.1 is out and it has a showstopper bug; or, the user might decide to build a package against numpy==1.7 -- overriding the package's preferred numpy==1.8 -- to guarantee that the resulting build will be compatible at the C ABI level with an older version of numpy (even if this means the resulting build is unsupported upstream). Therefore, build frontends SHOULD provide some mechanism for users to override the above defaults. For example, a build frontend could have a ``--build-with-system-site-packages`` option that causes the ``--system-site-packages`` option to be passed to virtualenv-or-equivalent when creating build environments, or a ``--build-requirements-override=my-requirements.txt`` option that overrides the project's normal build-requirements. The general principle here is that we want to enforce hygiene on package *authors*, while still allowing *end-users* to open up the hood and apply duct tape when necessary. 
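One way a frontend might implement the isolated-environment recommendation above, sketched with the stdlib ``venv`` module; ``make_build_env`` is a hypothetical helper, and the choice of venv plus pip is an implementation detail, not something the spec mandates:

```python
import os
import subprocess
import tempfile
import venv

def make_build_env(build_requirements, env_dir=None):
    # Create an isolated environment containing only the stdlib plus
    # the requested build requirements. Returns the path to the
    # environment's Python interpreter, which the frontend would then
    # use to run the build backend's hooks in a subprocess.
    env_dir = env_dir or tempfile.mkdtemp(prefix="build-env-")
    venv.EnvBuilder(with_pip=bool(build_requirements)).create(env_dir)
    bindir = "Scripts" if os.name == "nt" else "bin"
    python = os.path.join(env_dir, bindir, "python")
    if build_requirements:
        subprocess.check_call(
            [python, "-m", "pip", "install"] + list(build_requirements))
    return python
```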
======================
Source distributions
======================

For now, we continue with the legacy sdist format which is mostly undefined, but basically comes down to: a file named ``{NAME}-{VERSION}.{EXT}``, which unpacks into a buildable source tree called ``{NAME}-{VERSION}/``. Traditionally these have always contained "version 0" source trees; we now allow them to also contain version 1+ source trees. Integration frontends require that an sdist named ``{NAME}-{VERSION}.{EXT}`` will generate a wheel named ``{NAME}-{VERSION}-{COMPAT-INFO}.whl``.

===================================
Comparison to competing proposals
===================================

The primary difference between this and competing proposals (`in particular `_) is that our build backend is defined via a Python hook-based interface rather than a command-line based interface. We do *not* expect that this will, by itself, intrinsically reduce the complexity of calling into the backend, because build frontends will in any case want to run hooks inside a child -- this is important to isolate the build frontend itself from the backend code and to better control the build backend's execution environment. So under both proposals, there will need to be some code in ``pip`` to spawn a subprocess and talk to some kind of command-line/IPC interface, and there will need to be some code in the subprocess that knows how to parse these command line arguments and call the actual build backend implementation. So this diagram applies to all proposals equally::

    +-----------+          +---------------+           +----------------+
    | frontend  | -spawn-> | child cmdline | -Python-> | backend        |
    |   (pip)   |          |   interface   |           | implementation |
    +-----------+          +---------------+           +----------------+

The key difference between the two approaches is how these interface boundaries map onto project structure::

    .-= This PEP =-.
    +-----------+          +---------------+  |  +----------------+
    | frontend  | -spawn-> | child cmdline | -Python-> | backend  |
    |   (pip)   |          |   interface   |  |  | implementation |
    +-----------+          +---------------+  |  +----------------+
                                              |
    |_____________________________________|   |
      Owned by pip, updated in lockstep       |
                                              |
                       PEP-defined interface boundary
                       Changes here require distutils-sig

    .-= Alternative =-.

    +-----------+  |  +---------------+           +----------------+
    | frontend  | -spawn-> | child cmdline | -Python-> | backend   |
    |   (pip)   |  |  |   interface   |           | implementation |
    +-----------+  |  +---------------+           +----------------+
                   |
                   |  |____________________________________________|
                   |    Owned by build backend, updated in lockstep
                   |
      PEP-defined interface boundary
      Changes here require distutils-sig

By moving the PEP-defined interface boundary into Python code, we gain three key advantages.

**First**, because there will likely be only a small number of build frontends (``pip``, and... maybe a few others?), while there will likely be a long tail of custom build backends (since these are chosen separately by each package to match their particular build requirements), the actual diagrams probably look more like::

    .-= This PEP =-.

    +-----------+          +---------------+            +----------------+
    | frontend  | -spawn-> | child cmdline | -Python-+> | backend        |
    |   (pip)   |          |   interface   |         |  | implementation |
    +-----------+          +---------------+         |  +----------------+
                                                     |
                                                     |  +----------------+
                                                     +> | backend        |
                                                     |  | implementation |
                                                     |  +----------------+
                                                     :
                                                     :

    .-= Alternative =-.

    +-----------+            +---------------+           +----------------+
    | frontend  | -spawn-+>  | child cmdline | -Python-> | backend        |
    |   (pip)   |         |  |   interface   |           | implementation |
    +-----------+         |  +---------------+           +----------------+
                          |
                          |  +---------------+           +----------------+
                          +> | child cmdline | -Python-> | backend        |
                          |  |   interface   |           | implementation |
                          |  +---------------+           +----------------+
                          :
                          :

That is, this PEP leads to less total code in the overall ecosystem.
And in particular, it reduces the barrier to entry of making a new build system. For example, this is a complete, working build backend::

    # mypackage_custom_build_backend.py
    import os.path

    def get_build_requires(config_settings):
        return ["wheel"]

    def build_wheel(wheel_directory, config_settings, metadata_directory=None):
        from wheel.archive import archive_wheelfile
        path = os.path.join(wheel_directory, "mypackage-0.1-py2.py3-none-any")
        archive_wheelfile(path, "src/")

Of course, this is a *terrible* build backend: it requires the user to have manually set up the wheel metadata in ``src/mypackage-0.1.dist-info/``; when the version number changes it must be manually updated in multiple places; it doesn't implement the metadata or develop hooks, ... but it works, and these features can be added incrementally. Much experience suggests that large successful projects often originate as quick hacks (e.g., Linux -- "just a hobby, won't be big and professional"; `IPython/Jupyter `_ -- `a grad student's ``$PYTHONSTARTUP`` file `_), so if our goal is to encourage the growth of a vibrant ecosystem of good build tools, it's important to minimize the barrier to entry.

**Second**, because Python provides a simpler yet richer structure for describing interfaces, we remove unnecessary complexity from the specification -- and specifications are the worst place for complexity, because changing specifications requires painful consensus-building across many stakeholders. In the command-line interface approach, we have to come up with ad hoc ways to map multiple different kinds of inputs into a single linear command line (e.g. how do we avoid collisions between user-specified configuration arguments and PEP-defined arguments? how do we specify optional arguments? when working with a Python interface these questions have simple, obvious answers).
When spawning and managing subprocesses, there are many fiddly details that must be gotten right, subtle cross-platform differences, and some of the most obvious approaches -- e.g., using stdout to return data for the ``build_requires`` operation -- can create unexpected pitfalls (e.g., what happens when computing the build requirements requires spawning some child processes, and these children occasionally print an error message to stdout? obviously a careful build backend author can avoid this problem, but the most obvious way of defining a Python interface removes this possibility entirely, because the hook return value is clearly demarcated). In general, the need to isolate build backends into their own process means that we can't remove IPC complexity entirely -- but by placing both sides of the IPC channel under the control of a single project, we make it much much cheaper to fix bugs in the IPC interface than if fixing bugs requires coordinated agreement and coordinated changes across the ecosystem. **Third**, and most crucially, the Python hook approach gives us much more powerful options for evolving this specification in the future. For concreteness, imagine that next year we add a new ``install_editable2`` hook, which replaces the current ``install_editable`` hook with something better specified. In order to manage the transition, we want it to be possible for build frontends to transparently use ``install_editable2`` when available and fall back onto ``install_editable`` otherwise; and we want it to be possible for build backends to define both methods, for compatibility with both old and new build frontends. Furthermore, our mechanism should also fulfill two more goals: (a) If new versions of e.g. ``pip`` and ``flit`` are both updated to support the new interface, then this should be sufficient for it to be used; in particular, it should *not* be necessary for every project that *uses* ``flit`` to update its individual ``pypackage.json`` file. 
(b) We do not want to have to spawn extra processes just to perform this negotiation, because process spawns can easily become a bottleneck when deploying large multi-package stacks on some platforms (Windows). In the interface described here, all of these goals are easy to achieve. Because ``pip`` controls the code that runs inside the child process, it can easily write it to do something like::

    command, backend, args = parse_command_line_args(...)
    if command == "do_editable_install":
        if hasattr(backend, "install_editable2"):
            backend.install_editable2(...)
        elif hasattr(backend, "install_editable"):
            backend.install_editable(...)
        else:
            # error handling
            ...

In the alternative where the public interface boundary is placed at the subprocess call, this is not possible -- either we need to spawn an extra process just to query what interfaces are supported (as was included in an earlier version of `this alternative PEP `_), or else we give up on autonegotiation entirely (as in the current version of that PEP), meaning that any changes in the interface will require N individual packages to update their ``pypackage.json`` files before any change can go live, and that any changes will necessarily be restricted to new releases. One specific consequence of this is that in this PEP, we're able to make the ``get_wheel_metadata`` command optional.
In our design, this can easily be worked around by a tool like ``pip``, which can put code in its subprocess runner like::

    def get_wheel_metadata(output_dir, config_settings):
        if hasattr(backend, "get_wheel_metadata"):
            backend.get_wheel_metadata(output_dir, config_settings)
        else:
            backend.build_wheel(output_dir, config_settings)
            touch(output_dir / "PIP_ALREADY_BUILT_WHEELS")
            unzip_metadata(output_dir/*.whl)

    def build_wheel(output_dir, config_settings, metadata_dir):
        if os.path.exists(metadata_dir / "PIP_ALREADY_BUILT_WHEELS"):
            copy(metadata_dir / *.whl, output_dir)
        else:
            backend.build_wheel(output_dir, config_settings, metadata_dir)

and thus expose a totally uniform interface to the rest of ``pip``, with no extra subprocess calls, no duplicated builds, etc. But obviously this is the kind of code that you only want to write as part of a private, within-project interface. (And, of course, making the ``metadata`` command optional is one piece of lowering the barrier to entry, as discussed above.)

Other differences
-----------------

Besides the key command line versus Python hook difference described above, there are a few other differences in this proposal:

* Metadata command is optional (as described above).

* We return metadata as a directory, rather than a single METADATA file. This aligns better with the way that in practice wheel metadata is distributed across multiple files (e.g. entry points), and gives us more options in the future. (For example, instead of following the PEP 426 proposal of switching the format of METADATA to JSON, we might decide to keep the existing METADATA the way it is for backcompat, while adding new extensions as JSON "sidecar" files inside the same directory. Or maybe not; the point is it keeps our options more open.)

* We provide a mechanism for passing information between the metadata step and the wheel building step. I guess everyone probably will agree this is a good idea?
* We call our config file ``pypackage.json`` instead of ``pypa.json``. This is because it describes a package, rather than describing a packaging authority. But really, who cares. * We provide more detailed recommendations about the build environment, but these aren't normative anyway. ==================== Evolutionary notes ==================== A goal here is to make it as simple as possible to convert old-style sdists to new-style sdists. (E.g., this is one motivation for supporting dynamic build requirements.) The ideal would be that there would be a single static pypackage.cfg that could be dropped into any "version 0" VCS checkout to convert it to the new shiny. This is probably not 100% possible, but we can get close, and it's important to keep track of how close we are... hence this section. A rough plan would be: Create a build system package (``setuptools_pypackage`` or whatever) that knows how to speak whatever hook language we come up with, and convert them into calls to ``setup.py``. This will probably require some sort of hooking or monkeypatching to setuptools to provide a way to extract the ``setup_requires=`` argument when needed, and to provide a new version of the sdist command that generates the new-style format. This all seems doable and sufficient for a large proportion of packages (though obviously we'll want to prototype such a system before we finalize anything here). (Alternatively, these changes could be made to setuptools itself rather than going into a separate package.) But there remain two obstacles that mean we probably won't be able to automatically upgrade packages to the new format: 1) There currently exist packages which insist on particular packages being available in their environment before setup.py is executed. 
This means that if we decide to execute build scripts in an isolated virtualenv-like environment, then projects will need to check whether they do this, and if so then when upgrading to the new system they will have to start explicitly declaring these dependencies (either via ``setup_requires=`` or via static declaration in ``pypackage.cfg``). 2) There currently exist packages which do not declare consistent metadata (e.g. ``egg_info`` and ``bdist_wheel`` might get different ``install_requires=``). When upgrading to the new system, projects will have to evaluate whether this applies to them, and if so they will need to stop doing that. =========== Copyright =========== This document has been placed in the public domain. -- Nathaniel J. Smith -- https://vorpus.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Feb 16 23:43:10 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 17 Feb 2016 17:43:10 +1300 Subject: [Distutils] Alternative build system abstraction PEP In-Reply-To: References: Message-ID: Cool. I've replied on the PR as well, but my understanding from Donald was that from pip's perspective, the spawn interface being the contract was preferred; the other two differences are I think mostly cosmetic - except that I'm very worried about the injection possibilities of the calling code being responsible for preserving metadata between 'metadata' and 'build-wheel'. That seems like an unnecessary risk to me. -Rob On 17 February 2016 at 14:32, Nathaniel Smith wrote: > Hi all, ... 
From marius at gedmin.as Wed Feb 17 02:13:53 2016 From: marius at gedmin.as (Marius Gedminas) Date: Wed, 17 Feb 2016 09:13:53 +0200 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: <20160217071353.GA20295@platonas> On Tue, Feb 16, 2016 at 04:10:43PM +1300, Robert Collins wrote: > diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst > index a6e4712..56464f1 100644 > --- a/build-system-abstraction.rst > +++ b/build-system-abstraction.rst > @@ -68,12 +68,15 @@ modelled on pip's existing use of the setuptools > setup.py interface. > pypa.json > --------- > > -The file ``pypa.json`` acts as neutron configuration file for pip and other > +The file ``pypa.json`` acts as neutral configuration file for pip and other > tools that want to build source trees to consult for configuration. The > absence of a ``pypa.json`` file in a Python source tree implies a setuptools > or setuptools compatible build system. > > -The JSON has the following schema. Extra keys are ignored. > +The JSON has the following schema. Extra keys are ignored, which permits the > +use of ``pypa.json`` as a configuration file for other related tools. If doing > +that the chosen keys must be namespaced - e.g. ``flit`` with keys under that > +rather than (say) ``build`` or other generic keys. Is this going to be a file that human beings are expected to edit by hand? If so, can we please not use JSON? JSON is rather hostile to humans: no trailing commas, no possibility to add comments. Marius Gedminas -- Debugging a computer program is such an interesting activity because it's not really a matter of fixing a program. It's a matter of fixing your own understanding to the point that the cause of the bug becomes obvious. So debugging means constantly challenging your assumptions, constantly looking for the overlooked insignificant thing that turns out to be crucial. 
-- Joey Hess -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From rmcgibbo at gmail.com Wed Feb 17 02:47:28 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Tue, 16 Feb 2016 23:47:28 -0800 Subject: [Distutils] Alternative build system abstraction PEP In-Reply-To: References: Message-ID: > ... The ideal would be that there would be a single static pypackage.cfg that could be dropped into any "version 0" VCS checkout to convert I think you mean "pypackage.json" here, not .cfg? -Robert On Tue, Feb 16, 2016 at 8:43 PM, Robert Collins wrote: > Cool. I've replied on the PR as well, but my understanding from Donald > was that from pip's perspective, the spawn interface being the > contract was preferred; the other two differences are I think mostly > cosmetic - except that I'm very worried about the injection > possibilities of the calling code being responsible for preserving > metadata between 'metadata' and 'build-wheel'. That seems like an > unnecessary risk to me. > > -Rob > > On 17 February 2016 at 14:32, Nathaniel Smith wrote: > > Hi all, > ... > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmcgibbo at gmail.com Wed Feb 17 02:55:30 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Tue, 16 Feb 2016 23:55:30 -0800 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> Message-ID: On Tue, Feb 16, 2016 at 4:10 PM, Glyph Lefkowitz wrote: > > This whole section is about a tool to automatically identify possible > issues with these wheels - > https://www.python.org/dev/peps/pep-0513/#auditwheel - so I don't even > really know what you mean by this comment. I thought that the existence of > this tool is one of the best parts of this PEP! > Oh cool! Thanks, Glyph! I had a lot of fun writing it. -- -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at simplistix.co.uk Wed Feb 17 02:51:03 2016 From: chris at simplistix.co.uk (Chris Withers) Date: Wed, 17 Feb 2016 07:51:03 +0000 Subject: [Distutils] multiple backports of ipaddress and a world of pain In-Reply-To: <56C3A33A.50907@phihag.de> References: <56C36C7D.1000307@simplistix.co.uk> <56C3A33A.50907@phihag.de> Message-ID: <56C42667.4010503@simplistix.co.uk> On 16/02/2016 22:31, Philipp Hagemeister wrote: > Code that uses py2-ipaddress will break upon migrating to Python 3, and > potentially in really subtle ways. For instance, > > import py2_ipaddress as ipaddress > ipaddress.ip_address(b'\x3a\x3a\x31\x32') > ipaddress.ip_address(open('file', 'rb').read(4)) > > has different semantics in Python 2 and Python 3. Also note that if you > actually want to generate an ipaddress object from a binary > representation, py2-ipaddress' "solution" > ipaddress.ip_address(bytearray(b'\xff\x00\x00\x01')) > will break as well under Python 3, but at least it will throw an > exception and not silently do something different. 
> > Therefore, code that uses py2-ipaddress needs to be fixed anyways in > order to work correctly under Python 3 - might as well do it now. Indeed, James has thankfully switched django-netfields to ipaddress... > py2-ipaddress' API is incompatible with the ipaddress from the stdlib, > so I don't think it should claim the module name ipaddress in the first > place. Why one would actively introduce incompatibilities between Python > 2 and Python 3 *after Python 3 has long been released* is beyond my > understanding anyways. > > Specifically for django-netfields, a workaround is to always use > character strings (unicode type in Python 2, str in Python 3). The code James uses as a result isn't that pretty, multiple occurrences of:

    try:
        value = unicode(value)
    except NameError:
        pass

What would you recommend instead? When I monkeypatched this yesterday, I went with:

    if isinstance(value, bytes):
        value = value.decode('ascii')

...but I wonder if there's something better? (The context here is data coming back from Postgres, which is always str in Python 2, and only contains ip or netmask, so really shouldn't have anything non-ascii in it!) cheers, Chris From solipsis at pitrou.net Wed Feb 17 03:55:40 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 17 Feb 2016 09:55:40 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> Message-ID: <20160217095540.7c3f9483@fsol> On Tue, 16 Feb 2016 16:10:34 -0800 Glyph Lefkowitz wrote: > > I am 100% on board with telling people "don't use `sudo pip install'". Frankly I have been telling the pip developers to just break this for years (see https://pip2014.com, which, much to my chagrin, still exists); `sudo pip install'
should just exit immediately with an error; to the extent that packagers need it, the only invocation that should work should be `sudo pip install --i-am-building-an-operating-system'. This is frankly ridiculous. The problem is not the use of "sudo" or the invocation under root, it's to install into a system Python. So the solution should be to flag the system Python as not suitable for using pip into, not to forbid using pip under root. Regards Antoine. From p.f.moore at gmail.com Wed Feb 17 05:09:49 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Feb 2016 10:09:49 +0000 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: On 16 February 2016 at 22:52, Robert Collins wrote: >> An alternative would be great, though I can probably fake things somehow for >> my purposes. > > Sounds similar to Daniel's need - and again, --prefix + setting PATH > and PYTHONPATH would be better. Note that if I read the help for --prefix correctly, "pip install --target x foo" puts foo in x, whereas "pip install --prefix x foo" puts foo in x/lib. So how would setting prefix allow me to put foo in x, and not in a subdirectory? That is specifically my requirement (and the vendoring requirement in general). I *know* that means there's no obvious place to put data files or extensions or whatever, and that's fine by me. It seems that if we want to go down this route, we need to include the full set of --install-purelib, --install-platlib, --install-scripts etc arguments to pip. But that's probably the wrong solution - if we want to start playing with the various install location parameters to pip install (--target, --prefix, --root) we should probably do a "proper" job and just find a way to allow user-defined schemes. Paul From eric at trueblade.com Wed Feb 17 05:12:48 2016 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 17 Feb 2016 05:12:48 -0500 Subject: [Distutils] [final version?]
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <20160217095540.7c3f9483@fsol> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <20160217095540.7c3f9483@fsol> Message-ID: <56C447A0.8010408@trueblade.com> On 2/17/2016 3:55 AM, Antoine Pitrou wrote: > On Tue, 16 Feb 2016 16:10:34 -0800 > Glyph Lefkowitz wrote: >> >> I am 100% on board with telling people "don't use `sudo pip install?". Frankly I have been telling the pip developers to just break this for years (see https://pip2014.com, which, much to my chagrin, still exists); `sudo pip install? should just exit immediately with an error; to the extent that packagers need it, the only invocation that should work should be `sudo pip install --i-am-building-an-operating-system?. > > This is frankly ridiculous. The problem is not the use of "sudo" or the > invocation under root, it's to install into a system Python. So the > solution should be to flag the system Python as not suitable for using > pip into, not to forbid using pip under root. I agree that there are uses for running pip as root (I do so myself). It's installing into the system Python that needs to be strongly discouraged, if not outright prevented. I'm not sure we have a good way of identifying the system Python, but that's another issue. Eric. From cournape at gmail.com Wed Feb 17 05:17:16 2016 From: cournape at gmail.com (David Cournapeau) Date: Wed, 17 Feb 2016 10:17:16 +0000 Subject: [Distutils] [final version?] 
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <20160217095540.7c3f9483@fsol> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <20160217095540.7c3f9483@fsol> Message-ID: On Wed, Feb 17, 2016 at 8:55 AM, Antoine Pitrou wrote: > On Tue, 16 Feb 2016 16:10:34 -0800 > Glyph Lefkowitz wrote: > > > > I am 100% on board with telling people "don't use `sudo pip install'". > Frankly I have been telling the pip developers to just break this for years > (see https://pip2014.com, which, much to my chagrin, still exists); `sudo > pip install' should just exit immediately with an error; to the extent that > packagers need it, the only invocation that should work should be `sudo pip > install --i-am-building-an-operating-system'. > > This is frankly ridiculous. The problem is not the use of "sudo" or the > invocation under root, it's to install into a system Python. > Sure, but the people I tend to see using `sudo pip` are not the kind of users where that distinction is very useful. If there were a different simple, reliable way to avoid installing in system python, I would be happy to change my own recommendations during sprints, talks, etc... David -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Wed Feb 17 05:17:49 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 17 Feb 2016 11:17:49 +0100 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <20160217095540.7c3f9483@fsol> <56C447A0.8010408@trueblade.com> Message-ID: <20160217111749.7ccefc3d@fsol> On Wed, 17 Feb 2016 05:12:48 -0500 "Eric V.
Smith" wrote: > On 2/17/2016 3:55 AM, Antoine Pitrou wrote: > > On Tue, 16 Feb 2016 16:10:34 -0800 > > Glyph Lefkowitz wrote: > >> > >> I am 100% on board with telling people "don't use `sudo pip install'". Frankly I have been telling the pip developers to just break this for years (see https://pip2014.com, which, much to my chagrin, still exists); `sudo pip install' should just exit immediately with an error; to the extent that packagers need it, the only invocation that should work should be `sudo pip install --i-am-building-an-operating-system'. > > > > This is frankly ridiculous. The problem is not the use of "sudo" or the > > invocation under root, it's to install into a system Python. So the > > solution should be to flag the system Python as not suitable for using > > pip into, not to forbid using pip under root. > > I agree that there are uses for running pip as root (I do so myself). > It's installing into the system Python that needs to be strongly > discouraged, if not outright prevented. I'm not sure we have a good way > of identifying the system Python, but that's another issue. I think it would be reasonable to let vendors do so by placing a specific file into the "site-packages" (or perhaps even a pip config file if we want to put more information in it). Then upstream Python doesn't need to have a say in it. Regards Antoine. From p.f.moore at gmail.com Wed Feb 17 06:20:04 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Feb 2016 11:20:04 +0000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 16 February 2016 at 22:49, Robert Collins wrote: > +Being able to create new sdists from existing source trees isn't a thing pip > +does today, and while there is a PR to do that as part of building from > +source, it is contentious and lacks consensus.
Rather than impose a > +requirement on all build systems, we are treating it as a YAGNI, and will add > +such a verb in a future version of the interface if required. Could we add the following clarification to that? """ Currently, however, people using distutils can create sdists using "setup.py sdist". For other tools, this PEP does not specify how a sdist should be created, but it does imply that it is sufficient to make an archive of a source directory, including a "pypa.json" file, and pip will be able to consume that as a sdist. Whether tools provide a command to do this is out of scope for this PEP. """ Paul. From glyph at twistedmatrix.com Wed Feb 17 07:12:10 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 17 Feb 2016 04:12:10 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> Message-ID: <6949940E-D8B7-40C1-848F-71E3B5BFF141@twistedmatrix.com> > On Feb 16, 2016, at 11:55 PM, Robert T. McGibbon wrote: > > On Tue, Feb 16, 2016 at 4:10 PM, Glyph Lefkowitz > wrote: > This whole section is about a tool to automatically identify possible issues with these wheels - https://www.python.org/dev/peps/pep-0513/#auditwheel - so I don't even really know what you mean by this comment. I thought that the existence of this tool is one of the best parts of this PEP! > > Oh cool! Thanks, Glyph! I had a lot of fun writing it. It really cuts to the heart of the problem with python builds: you can accidentally depend on some aspect of the platform in a way which requires nuanced understanding of the native build toolchain to understand. 
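[Editorial aside: the kind of check being discussed here can be sketched without the real auditwheel: parse `ldd`-style output for an extension module and flag any shared library that is not on the policy whitelist. The whitelist below is only a small subset of the PEP 513 list, and the `ldd` output format (`name => path (addr)`) is an assumption about the usual GNU ldd layout, not auditwheel's actual implementation, which inspects ELF headers directly.]

```python
# Toy version of the external-dependency check auditwheel automates:
# given ldd-style output, flag libraries outside an allowed set.
# ALLOWED is only a subset of the real manylinux1 whitelist.
ALLOWED = {
    "libc.so.6", "libm.so.6", "libpthread.so.0", "libdl.so.2",
    "librt.so.1", "libgcc_s.so.1", "libstdc++.so.6",
}

def external_deps(ldd_output):
    """Return sorted library names that are not on the whitelist."""
    flagged = set()
    for line in ldd_output.splitlines():
        if "=>" not in line:
            continue  # skip the interpreter line, which has no "=>"
        name, target = (part.strip() for part in line.split("=>", 1))
        if not target.startswith("/"):
            continue  # skip linux-vdso and unresolved entries
        if name not in ALLOWED:
            flagged.add(name)
    return sorted(flagged)

SAMPLE = """\
linux-vdso.so.1 =>  (0x00007ffd6af9d000)
libblas.so.3 => /usr/lib/libblas.so.3 (0x00007f2e4c0d1000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2e4bdc8000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2e4b9ff000)
"""
```

Here ``external_deps(SAMPLE)`` would flag ``libblas.so.3`` as a dependency that a manylinux1 wheel would need to vendor or drop.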
For what it's worth this is definitely a problem on OS X and Windows as well (accidentally depending on homebrew or chocolatey for example); any chance you'll be extending it to deal with 'dumpbin' and 'otool' as well as 'ldd'? -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Wed Feb 17 07:16:06 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 17 Feb 2016 04:16:06 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <20160217095540.7c3f9483@fsol> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <20160217095540.7c3f9483@fsol> Message-ID: > On Feb 17, 2016, at 12:55 AM, Antoine Pitrou wrote: > > On Tue, 16 Feb 2016 16:10:34 -0800 > Glyph Lefkowitz wrote: >> >> I am 100% on board with telling people "don't use `sudo pip install'". Frankly I have been telling the pip developers to just break this for years (see https://pip2014.com, which, much to my chagrin, still exists); `sudo pip install' should just exit immediately with an error; to the extent that packagers need it, the only invocation that should work should be `sudo pip install --i-am-building-an-operating-system'. > > [...] The problem is not the use of "sudo" or the > invocation under root, it's to install into a system Python. So the > solution should be to flag the system Python as not suitable for using > pip into, not to forbid using pip under root. I didn't mean to suggest that sudo /path/to/venv/bin/pip install should fail, so we are in agreement here. The exact details of how pip detects the suitability of a given environment are up for discussion, it's just that the default behavior of `sudo pip install' (install into package-manager-managed system prefix) is a bad idea.
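[Editorial aside: the "detect the suitability of a given environment" idea can be approximated in a few lines. This is an illustration of the heuristics involved (classic virtualenv sets ``sys.real_prefix``; PEP 405 venvs make ``sys.base_prefix`` differ from ``sys.prefix``), not pip's actual logic.]

```python
import sys

def is_isolated_env(prefix, base_prefix, real_prefix=None):
    """Heuristic: does this look like a venv/virtualenv rather than a
    system Python?  Written as a pure function over the prefix values
    so it can be tested with samples; a real check would be richer."""
    if real_prefix is not None:   # classic virtualenv marker
        return True
    return prefix != base_prefix  # PEP 405 venv marker

def current_env_is_isolated():
    """Apply the heuristic to the running interpreter."""
    return is_isolated_env(
        sys.prefix,
        getattr(sys, "base_prefix", sys.prefix),
        getattr(sys, "real_prefix", None),
    )
```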
Perhaps certain venvs should set this flag as well, to indicate that pip should not mess with it any more either. -glyph From glyph at twistedmatrix.com Wed Feb 17 07:17:25 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 17 Feb 2016 04:17:25 -0800 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <20160217095540.7c3f9483@fsol> Message-ID: > On Feb 17, 2016, at 2:17 AM, David Cournapeau wrote: > > Sure, but the people I tend to see using `sudo pip` are not the kind of users where that distinction is very useful. It's hair-splitting but probably correct hair-splitting in terms of how it's detected. > If there were a different simple, reliable way to avoid installing in system python, I would be happy to change my own recommendations during sprints, talks, etc... Are you recommending 'sudo pip' right now? Why not 'sudo virtualenv', then? -g -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Feb 17 07:32:23 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Feb 2016 12:32:23 +0000 Subject: [Distutils] Alternative build system abstraction PEP In-Reply-To: References: Message-ID: On 17 February 2016 at 01:32, Nathaniel Smith wrote: > Finally found the time to sit down and take the various drafts I've sent of > this to the list before, add a more detailed rationale section, and turn it > into a pull request: > > https://github.com/pypa/interoperability-peps/pull/63 Comments added inline to the tracker but roughly: Despite what you say, there's one major difference with Robert's proposal that you *don't* emphasise, and that is that you explicitly document a new sdist format. 
And I don't like the proposed format because it doesn't offer any option for getting metadata from the sdist without involving the build backend. While that's no different from the status quo today, I'm much happier with Robert's approach of leaving that as "out of scope" and writing the PEP in terms of source trees that are "a config file and a bunch of stuff that only the build system needs to care about". If your proposal and Robert's took the same view of sdists, I'd say we could toss a coin between them. As it is, I'm inclined to prefer Robert's proposal, simply because he avoids opening the sdist can of worms. I sort of like the Python interface over the command line one, but it's hardly a major distinction. Paul From donald at stufft.io Wed Feb 17 07:44:20 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Feb 2016 07:44:20 -0500 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: > On Feb 17, 2016, at 6:20 AM, Paul Moore wrote: > > Currently, however, people using distutils can create sdists using > "setup.py sdist". For other tools, this PEP does not specify how a > sdist should be created, but it does imply that it is sufficient to > make an archive of a source directory, including a "pypa.json" file, > and pip will be able to consume that as a sdist. Whether tools provide > a command to do this is out of scope for this PEP. I'm writing my own alternative to this problem (was hoping to have it ready yesterday, but I ended up with a migraine and spent most of the day MIA instead) and it finally clicked to me why I felt that sdist needed to be a part of this, and part of why I felt mildly uncomfortable by the PEP. 
Right now, distutils/setuptools provides a number of functions:

* Installation from Sdist
* Building a Wheel
* Fetching Metadata
* Building a sdist
* Uploading

It's fair to say that I think we're trying to get away from the first of those and it's a good thing that neither of these proposals include it. In addition both proposals have fetching metadata and building wheels built into them. Where they fall short is that there isn't a way to build a sdist or upload said sdist to PyPI. The assumption is that people will just package up their local directories and then publish them to PyPI. However I don't think that assumption is reasonable. You could say that using twine to handle the uploading is a thing people should do (and I agree!) but that currently relies on having static metadata inside of the sdist that twine can parse, static metadata that isn't going to exist if you just simply tarball up a directory on disk. So I think just tarballing up a directory on disk is *not* a reasonable way to create a sdist, however you can't reasonably (IMO) create an independent tool to handle this task with either of these build proposals either. For instance, building a sdist requires getting the version of the project, and that is going to require some logic which is (in both of these PEPs) build system specific. So taking flit for example, in order to build a sdist of a flit-using package, a tool is going to have to implement the same version detection logic that flit itself is using (and the same goes for other metadata like name and long_description). As far as I can tell, any tool to create a sdist is going to have to be specific to a particular build tool in order to have the same metadata that you would get building a wheel. The closest that either of these PEPs make it to being able to interrogate a directory for the metadata is by asking it for the metadata that would occur if we created a wheel file right now.
While that is theoretically enough information, it's also going to contain things which do not belong in a sdist at all. In the end, I think *both* of the current proposals, as written, will cause people to not upload sdists because neither of them really addresses the fact that their intent is to remove the standard interface for doing that, without including a replacement for that interface. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Feb 17 07:45:39 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Feb 2016 12:45:39 +0000 Subject: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <56C3384B.8010700@gmail.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <56C3384B.8010700@gmail.com> Message-ID: On 16 February 2016 at 14:55, Petr Viktorin wrote: > So, what is the argument against "pip install --user"? Does that not > leave everyone happy? See https://github.com/pypa/pip/issues/1668 Long story short, that's the intention, but there are a lot of questions that need(ed) to be resolved. I'm not sure if we're at a point where it's practical yet, though. (One big question in my mind is whether --user installs work as well as we hope they do - as far as I know very few people actually use them routinely...) Paul From donald at stufft.io Wed Feb 17 08:03:22 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Feb 2016 08:03:22 -0500 Subject: [Distutils] [final version?]
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <20160216133313.1c172810@anarchist.wooz.org> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <56C355FF.4020904@ubuntu.com> <20160216133313.1c172810@anarchist.wooz.org> Message-ID: <54A6AF32-86AC-4BE9-95B4-292F5FA50441@stufft.io> > On Feb 16, 2016, at 1:33 PM, Barry Warsaw wrote: > > I don't, but I hope that should not be a problem these days, with modern pip > on modern Debian/Ubuntu systems. We prevent pip from installing in system > locations in various ways, including making --user the default. > > I don't think we're doing anything that isn't at least within the pip > roadmap. E.g. --user will probably someday be the default in stock pip, but > because of reasons it hasn't happened yet. > > IIRC, Donald was also talking about better ways to detect system owned files, > so pip would also refuse to overwrite them. Replying to Barry, but also to generally the whole sub-thread! First off, there is very little chance pip ever removes the ability to install directly into a system Python. So any proposal to remove that functionality is unlikely to ever be accepted without an incredibly compelling reason. As far as some sort of "tainting" goes, that's not really related to this PEP at all, and as of pip 8.0 we already include information when we install a file that says that this particular item was installed by pip. That can be accessed using pkg_resources::

    >>> pkg_resources.get_distribution("twine").get_metadata("INSTALLER").strip()
    'pip'

As far as I know, nothing actually uses that metadata right now, but that should be plenty for some sort of tooling to be written that can tell a distro if there are any non-distro provided packages in an environment using something like:

    >>> for d in pkg_resources.working_set:
    ...     if d.has_metadata("INSTALLER"):
    ...         print("{}: {}".format(d.project_name, d.get_metadata("INSTALLER").strip()))
    ...
    certifi: pip
    packaging: pip
    pkginfo: pip
    pyparsing: pip
    pytz: pip
    requests: pip
    requests-toolbelt: pip
    setuptools: pip
    twine: pip
    virtualenv: pip
    wheel: pip

Anyways, where I think [1] we're heading:

* When we don't actually have permission to install into site-packages, then default to --user. This will stop leading people towards using sudo because they typed ``pip install foo`` as a user and got a permissions error, the logical way to solve that being ``sudo !!``.
* Standardize the patches Debian has which split up a vendor location from the site location, giving distros a different place to install things than where pip will install by default.
* Provide a mechanism that will enable vendors to say that some particular installed package is system owned, and then have pip refuse to modify those files without some sort of --yes-i-know-what-im-doing flag.

At this time, I don't personally have any plans to add anything like pipsi but I'm not strictly opposed to it either. The largest problem with that is we don't have a good way of knowing, when someone does ``pip install foobar``, if they want to install foobar because it's an isolated bin they want to run, or if it's something they want to be able to import in their current environment. [1] These are my opinions, and none of them are set in stone. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Feb 17 08:06:24 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Feb 2016 13:06:24 +0000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 17 February 2016 at 12:44, Donald Stufft wrote: > You could say that using twine to handle the uploading is a thing people should > do (and I agree!) but that currently relies on having static metadata inside of > the sdist that twine can parse, static metadata that isn't going to exist if > you just simply tarball up a directory on disk. Thank you for confirming this. I'd mentioned it in my comments on Nathaniel's proposal but I wasn't sure. So the situation we currently have with the "sdist format" (vaguely and unsatisfactorily defined as the current version is) is: 1. It contains static metadata, added by distutils. 2. That metadata is not sufficiently reliable for pip to use so pip regenerates it. 3. Twine (and potentially other tools) needs to access that metadata, but isn't critically affected by the unreliability. The two proposals address (2) by giving pip a formal interface to ask the build system for metadata. But in doing so, they drop totally the requirement for (1). As a result, tools like twine are broken by the proposals (to the extent that people adopt alternative build systems - and if no-one does, then the PEPs are useless anyway!). The proposals focus solely on the implications for pip. In the terminology introduced by Nathaniel's proposal, "build frontends" and "integration frontends". But they need to also discuss the implications for let's call them "source consumers" - tools that work with all forms of source that can be processed by a "build frontend", but which *don't* have any need to directly interact with the build system. 
Those tools *are* affected, as their focus is on "source distributions which can be handled by pip" and the set of such source distributions *is* changing. Such "source consumers" would include twine and PyPI at a minimum. As well as adhoc tools users might write. (Note that if a build tool doesn't want to provide a means to generate a sdist, then that's fine - but in terms of the above they will then explicitly be opting out of the formal "build system" infrastructure and be deliberately targeting the production of binary-only wheel distributions. Whether that's acceptable to their users isn't important to this discussion. But I think the fact that we're even talking about decoupling the build system from setuptools implies that some users do care about not focusing purely on binary wheel-only distribution) Paul From donald at stufft.io Wed Feb 17 08:39:09 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Feb 2016 08:39:09 -0500 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: <3755D883-1BFC-41BE-898F-66AF6FD7D81F@stufft.io> > On Feb 17, 2016, at 8:06 AM, Paul Moore wrote: > > On 17 February 2016 at 12:44, Donald Stufft wrote: >> You could say that using twine to handle the uploading is a thing people should >> do (and I agree!) but that currently relies on having static metadata inside of >> the sdist that twine can parse, static metadata that isn't going to exist if >> you just simply tarball up a directory on disk. > > Thank you for confirming this. I'd mentioned it in my comments on > Nathaniel's proposal but I wasn't sure. > > So the situation we currently have with the "sdist format" (vaguely > and unsatisfactorily defined as the current version is) is: > > 1. It contains static metadata, added by distutils. > 2. That metadata is not sufficiently reliable for pip to use so pip > regenerates it. > 3. 
Twine (and potentially other tools) needs to access that metadata, > but isn't critically affected by the unreliability. > > The two proposals address (2) by giving pip a formal interface to ask > the build system for metadata. But in doing so, they drop totally the > requirement for (1). As a result, tools like twine are broken by the > proposals (to the extent that people adopt alternative build systems - > and if no-one does, then the PEPs are useless anyway!). Correct. Through largely an accident of history, the static metadata inside of a sdist does not contain what is likely the most controversial thing to be "baked" into a sdist -- namely the dependencies. It mostly contains things like name, version, classifiers, long_description, author, etc. The unreliability that pip cares about concerns things like (or really, mainly just) projects that dynamically adjust the install_requires based on attributes of the platform it is being executed on (such as adding a dependency on argparse for py26). The things that Twine cares about are generally not dynamic in that way and typically are "static" within a specific sdist anyways (and if they aren't the failure case is that PyPI displays stale metadata). You can sort of get this information by interrogating the wheel metadata, but that would be a regression for twine as well. One of the major use cases for twine (in fact, the original use case!) was that OpenStack wanted to be able to create a sdist on one machine and upload from another machine. They wanted this split because the first machine could be isolated, without access to the PyPI credentials, and thus it could safely execute the untrusted code which is required to build the sdist. They could then convey that sdist to another machine and upload it, using twine, to PyPI without having to execute any code from inside of the package to do so. > > The proposals focus solely on the implications for pip.
In the > terminology introduced by Nathaniel's proposal, "build frontends" and > "integration frontends". But they need to also discuss the > implications for let's call them "source consumers" - tools that work > with all forms of source that can be processed by a "build frontend", > but which *don't* have any need to directly interact with the build > system. Those tools *are* affected, as their focus is on "source > distributions which can be handled by pip" and the set of such source > distributions *is* changing. > > Such "source consumers" would include twine and PyPI at a minimum. As > well as adhoc tools users might write. > > (Note that if a build tool doesn't want to provide a means to generate > a sdist, then that's fine - but in terms of the above they will then > explicitly be opting out of the formal "build system" infrastructure > and be deliberately targeting the production of binary-only wheel > distributions. Whether that's acceptable to their users isn't > important to this discussion. But I think the fact that we're even > talking about decoupling the build system from setuptools implies that > some users do care about not focusing purely on binary wheel-only > distribution) In my opinion, the ability to create a sdist should be considered a standard feature for whatever toolchain any particular project decides to use. I don't much care if there is a separate sdist building tool and a wheel building tool or if they are rolled into one tool, but I think that a "standards compliant" toolchain needs both and I don't think we can replace a toolchain that has both with one that doesn't. Now, we obviously can't stop tools like flit from saying they only want to support wheels if that's their choice, but I don't think that we should go out of our way to make it a possibility to do that either. The flip side of that is I do not think (as has been suggested in the past) that mandating sdists on PyPI is the correct answer either.
We're not trying to *force* people to upload sdists, but what we do want is for it to be considered standard to do so, and that if you don't upload one, you're doing something "weird". My reasoning for this is: * Not everything on PyPI needs to be open source, the only thing we require is a license to distribute, not a license to use or modify. It's perfectly valid to upload only a wheel to PyPI, but doing so has implications and should be something only done in edge cases. * We can't even really force it for real, if someone doesn't want to produce a sdist and PyPI says they must, they can just upload an empty sdist. Like many things in packaging (and life in general)! I think the way to handle this is to pave cowpaths in the direction we want people to go. Make the "right" (providing sdists on PyPI) thing easy and the "wrong" thing hard, but possible. The current proposals pave cowpaths towards the "wrong" thing. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From glyph at twistedmatrix.com Wed Feb 17 08:58:04 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 17 Feb 2016 05:58:04 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: <87ECB934-BE24-42FF-9259-2336FCAA61F8@coderanger.net> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> <87ECB934-BE24-42FF-9259-2336FCAA61F8@coderanger.net> Message-ID: <4DF5ADB4-BDB1-42C7-B2F3-13262CDC7890@twistedmatrix.com> > On Feb 16, 2016, at 6:22 PM, Noah Kantrowitz wrote: > > I'm not concerned with if the module is importable specifically, but I am concerned with where the files will live overall. When building generic ops tooling, being unsurprising is almost always the right move and I would be surprised if supervisor installed to a custom virtualenv. Would you not be surprised if installing supervisord upgraded e.g. `six` or `setuptools` and broke apport? or lsb_release? or dnf? This type of version conflict is of course rare, but it is always possible, and every 'pip install' takes the system from a supported / supportable state to "???" depending on the dependencies of every other tool which may have been installed (and pip doesn't have a constraint solver for its dependencies, so you don't even know if the system gets formally broken by two explicitly conflicting requirements). > It's a weird side effect of Python not having a great solution for "application packaging" I guess?
We've got standards for web-ish applications, but not much for system services. I'm not saying I think creating an isolated "global-ish" environment would be worse, I'm saying nothing does that right now and I personally don't want to be the first because that bring a lot of pain with it :-) What makes the web-ish stuff "standard" is just that a lot of people are doing it. So a lot of people should start doing this, and then it will also be a standard :-). I can tell you that on systems where I've done this sort of thing, it has surprised no-one that I'm aware of and I have not had any issues to speak of. So I think you might be overestimating the risk. In fairness though I've never written a clear explanation anywhere of why this is desirable; it strikes me as obvious but it is clearly not the present best-practice, which means somebody needs to do some thought-leadering. So I owe you a blog post. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From phihag at phihag.de Wed Feb 17 03:30:52 2016 From: phihag at phihag.de (Philipp Hagemeister) Date: Wed, 17 Feb 2016 09:30:52 +0100 Subject: [Distutils] multiple backports of ipaddress and a world of pain In-Reply-To: <56C42667.4010503@simplistix.co.uk> References: <56C36C7D.1000307@simplistix.co.uk> <56C3A33A.50907@phihag.de> <56C42667.4010503@simplistix.co.uk> Message-ID: <56C42FBC.4020103@phihag.de> On 17.02.2016 08:51, Chris Withers wrote: > The code James uses as a result isn't that pretty, multiple occurrences of: > > try: > value = unicode(value) > except NameError: > pass This is the last resort when you truly don't know what the input is, and you are sure the string should only contain ASCII characters. Works fine for ipaddress purposes. > What would you recommend instead? When I monkeypatched this yesterday, I > went with: > > if isinstance(value, bytes): > value = value.decode('ascii') > > ...but I wonder if there's something better? Either is fine by me. 
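The two idioms quoted above (the try/`unicode`/except NameError fallback and the `isinstance` check) can be folded into a single helper. A minimal sketch — the name `ensure_text` is invented here for illustration and is not from any of the libraries under discussion:

```python
def ensure_text(value, encoding="ascii"):
    # Bytes get decoded; text passes through unchanged. On Python 2,
    # `bytes` is an alias for `str`, so native 2.x strings are decoded
    # to unicode, while unicode objects pass straight through.
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value
```

This is the `isinstance` variant Chris describes; it fails loudly on non-ASCII bytes rather than silently guessing an encoding, which is the behaviour you want before handing a value to an ipaddress backport.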
Preferably, one should go fix the source to return character strings in the first place. Usually, this includes from __future__ import unicode_literals, io.open with an encoding parameter, and inserting unconditional decode calls when the source really is bytes. I don't know enough about Python's postgres connector to give a better solution. How does python-postgres handle character strings when they're not all ASCII? If it's returning b'D\xc3\xbcsseldorf' on 2.x and u'Düsseldorf' on 3.x, then you have to conditionally decode('utf-8') all strings anyways in order to work on Python 3 as well as 2.x, haven't you? Greetings from Düsseldorf, Philipp -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From steve.dower at python.org Wed Feb 17 10:50:00 2016 From: steve.dower at python.org (Steve Dower) Date: Wed, 17 Feb 2016 07:50:00 -0800 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: Another alternative is making "pip the library" as has been discussed in the past. Certainly my needs would be satisfied by a library that can get me as far as wheel files given package name/s and version spec (and index URL I guess). Unpacking wheels isn't especially difficult and in my case I know there are no existing installs to worry about. Top-posted from my Windows Phone -----Original Message----- From: "Paul Moore" Sent: 2/17/2016 2:10 To: "Robert Collins" Cc: "Steve Dower" ; "Python Distutils SIG" Subject: Re: [Distutils] deprecating pip install --target On 16 February 2016 at 22:52, Robert Collins wrote: >> An alternative would be great, though I can probably fake things somehow for >> my purposes. > Sounds similar to Daniel's need - and again, --prefix + setting PATH > and PYTHONPATH would be better.
Note that if I read the help for --prefix correctly, "pip install --target x foo" puts foo in x, whereas "pip install --prefix x foo" puts foo in x/lib. So how would setting prefix allow me to put foo in x, and not in a subdirectory? That is specifically my requirement (and the vendoring requirement in general). I *know* that means there's no obvious place to put data files or extensions or whatever, and that's fine by me. It seems that if we want to go down this route, we need to include the full set of --install-purelib, --install-platlib, --install-scripts etc arguments to pip. But that's probably the wrong solution - if we want to start playing with the various install location parameters to pip install (--target, --prefix, --root) we should probably do a "proper" job and just find a way to allow user-defined schemes. Paul _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at vorpus.org Wed Feb 17 16:01:01 2016 From: njs at vorpus.org (Nathaniel Smith) Date: Wed, 17 Feb 2016 13:01:01 -0800 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On Feb 17, 2016 4:44 AM, "Donald Stufft" wrote: > [...] > You could say that using twine to handle the uploading is a thing people should > do (and I agree!) but that currently relies on having static metadata inside of > the sdist that twine can parse, static metadata that isn't going to exist if > you just simply tarball up a directory on disk. Ah-ha, this is useful. The reason this hasn't been considered, at least in my proposal, is that I think this is the first I've heard that there is anything that cares about what's in an sdist besides setup.py :-). Is there anything written up on what twine wants from an sdist? 
Would it make sense for you to write up a spec on what twine/pypi need and a better way to represent it than whatever it is distutils does now? I think both Robert and my proposal basically see their scope as being strictly restricted to "here's how we want to replace pip-calling-setup.py with pip-calling-something-else", while keeping everything else the same or at least delegating any other changes to other PEPs. So we envision that build system authors will provide some way to package up source into an sdist, whatever that means; that could be a current-style sdist with metadata requirements reverse-engineered from twine and setuptools, or it could be some kind of new and improved sdist that is about to get its own PEP... either way, it's orthogonal to replacing setup.py. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Feb 17 16:30:49 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Feb 2016 21:30:49 +0000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 17 February 2016 at 21:01, Nathaniel Smith wrote: > On Feb 17, 2016 4:44 AM, "Donald Stufft" wrote: >> > [...] >> You could say that using twine to handle the uploading is a thing people >> should >> do (and I agree!) but that currently relies on having static metadata >> inside of >> the sdist that twine can parse, static metadata that isn't going to exist >> if >> you just simply tarball up a directory on disk. > > Ah-ha, this is useful. The reason this hasn't been considered, at least in > my proposal, is that I think this is the first I've heard that there is > anything that cares about what's in an sdist besides setup.py :-). Yes, twine is a good, concrete example. > Is there anything written up on what twine wants from an sdist? Would it > make sense for you to write up a spec on what twine/pypi need and a better > way to represent it than whatever it is distutils does now?
I think the key point here is that twine expects a *sdist*, that is, an archive, containing a PKG-INFO file conforming to the Metadata 1.1 spec. See PEP 314. (Note, I only just read PEP 314 myself - it's as much news to me that it defines how sdists have to contain metadata as it probably is to everyone else...) One of the problems here is that the older packaging PEPs got into such a mess, that people have stopped believing in them at all. I just looked at PEPs 241 and 314, and they actually do a pretty good job of specifying how sdists must include metadata, and I think it's fair to say that this should be our starting point. > I think both Robert and my proposal basically see their scope as being > strictly restricted to "here's how we want to replace pip-calling-setup.py > with pip-calling-something-else", while keeping everything else the same or > at least delegating any other changes to other PEPs. So we envision that > build system authors will provide some way to package up source into an > sdist, whatever that means; that could be a current-style sdist with > metadata requirements reverse-engineered from twine and setuptools, or it > could be some kind of new and improved sdist that is about to get its own > PEP... either way, it's orthogonal to replacing setup.py. Robert more or less did that, with the proviso that his PEP left open the possibility that build systems can upload things that the PEP says can be processed by pip, even though in practice they aren't sdists (according to PEP 314). But given that no-one had really dug back into the implications of the older PEPs, that's more of a loophole than a deliberate choice. On the other hand, your proposal explicitly allows a "version 1" sdist to not contain PEP 314 metadata, precisely because it defines a new format of sdist *without* mandating that it must conform to PEP 314 (no blame here - as I say, we'd all omitted to check back at the older PEPs).
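Concretely, the PKG-INFO file Paul describes is a block of RFC 822-style headers, so a twine-like tool can read it with nothing but the stdlib email parser. A minimal sketch — the project name and field values below are invented for illustration, not taken from a real sdist:

```python
from email.parser import Parser

# A minimal Metadata 1.1 (PEP 314) PKG-INFO; the values are hypothetical.
PKG_INFO = """\
Metadata-Version: 1.1
Name: example-dist
Version: 0.1
Summary: An example distribution
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python
"""

def read_pkg_info(text):
    # PEP 314 metadata uses RFC 822-style headers, so the email parser
    # is enough; repeatable fields like Classifier come back via get_all().
    msg = Parser().parsestr(text)
    return msg["Name"], msg["Version"], msg.get_all("Classifier")
```

This is roughly the extent of what an upload tool needs from the archive: static name/version/description fields, with no code execution involved.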
IMO, we should either bite the bullet and properly define a "new sdist" format (which won't be an easy job!), or we should couch the "build system abstraction" PEPs in terms of adding a new "build system interface" metadata file to the existing sdist format. In the interests of openness, I'd include comments pointing out that the existing sdist format is defined as conforming to PEP 314, and that even though pip itself (and the build system interface) doesn't use the metadata mandated by PEP 314, other tools do. The latter option is far easier (we're basically almost there) but it will *not* suit build tools that can't (or won't) generate a sdist that conforms to PEP 314 (i.e., contains PKG-INFO). Paul From robertc at robertcollins.net Wed Feb 17 17:58:33 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 18 Feb 2016 11:58:33 +1300 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 18 February 2016 at 10:30, Paul Moore wrote: > I think the key point here is that twine expects a *sdist*, that is, > an archive, containing a PKG-INFO file conforming to the Metadata 1.1 > spec. See PEP 314. (Note, I only just read PEP 314 myself - it's as > much news to me that it defines how sdists have to contain metadata as > it probably is to everyone else...) > > One of the problems here is that the older packaging PEPs got into > such a mess, that people have stopped believing in them at all. I just > looked at PEPs 241 and 314, and they actually do a pretty good job of > specifying how sdists must include metadata, and I think it's fair to > say that this should be our starting point. Hmm - I certainly didn't stop believing it... it's just that the metadata *specified* by the PEPs doesn't include anything distutils didn't know about: e.g. dependencies.
>> I think both Robert and my proposal basically see their scope as being >> strictly restricted to "here's how we want to replace pip-calling-setup.py >> with pip-calling-something-else", while keeping everything else the same or >> at least delegating any other changes to other PEPs. So we envision that >> build system authors will provide some way to package up source into an >> sdist, whatever that means; that could be a current-style sdist with >> metadata requirements reverse-engineered from twine and setuptools, or it >> could be done kind of new and improved sdist that is about to get its own >> PEP... either way, it's orthogonal to replacing setup.py. > > Robert more or less did that, with the proviso that his PEP left open > the possibility that build systems can upload things that the PEP says > can be processed by pip, even though in practice they aren't sdists > (according to PEP 314). But given that no-one had really dug back into > the implications of the older PEPs, that's more of a loophole than a > deliberate choice. On the other hand, your proposal explicitly allows > a "version 1" sdist to not contain PEP 314 metadata, precisely because > it defines a new format of sdist *without* mandating that it must > conform to PEP 314 (no blame here - as I say, we'd all omitted to > check back at the older PEPs). Well, I'm adamant that my PEP should not make anything worse! My specific concern w.r.t. sdists is that its an orthogonal consideration to 'not having a setup.py'. Not having a setup.py applies to any source tree irrespective of how you obtained it. Sdists and their constraints apply to well - sdists. > IMO, we should either bite the bullet and properly define a "new > sdist" format (which won't be an easy job!), or we should couch the > "build system abstraction" PEPs in terms of adding a new "build system > interface" metadata file to the existing sdist format. 
In the > interests of openness, I'd include comments pointing out that the > existing sdist format is defined as conforming to PEP 314, and that > even though pip itself (and the build system interface) doesn't use > the metadata mandated by PEP 314, other tools do. > > The latter option is far easier (we're basically almost there) but it > will *not* suit build tools that can't (or won't) generate a sdist > that conforms to PEP 314 (i.e., contains PKG-INFO). I still don't understand how *making* an sdist is tied into this PEP? AIUI flit's only concern with sdists was that they didn't want a setup.py with all the stuff we've discussed about that. Perhaps a flit author can weigh in - generating good PEP 314 PKG-INFO shouldn't be a problem for flit, has no Python2/3 implication, and doesn't require a setup.py. PEP 241 and 314 do not require a setup.py to be present AFAICT. I think all we need to do is make sure that it's crystal clear that we're not changing any specified part of the contract of sdists: you should still make them, they still need to meet PEP 314.... setup.py was never PEP'd as part of the contract. We are changing the de facto contract, and so we should make that clear in the prose. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Wed Feb 17 18:12:41 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 18 Feb 2016 12:12:41 +1300 Subject: [Distutils] abstract build system PEP update In-Reply-To: <20160217071353.GA20295@platonas> References: <20160217071353.GA20295@platonas> Message-ID: On 17 February 2016 at 20:13, Marius Gedminas wrote: > On Tue, Feb 16, 2016 at 04:10:43PM +1300, Robert Collins wrote: >> diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst >> index a6e4712..56464f1 100644 >> --- a/build-system-abstraction.rst >> +++ b/build-system-abstraction.rst >> @@ -68,12 +68,15 @@ modelled on pip's existing use of the setuptools >> setup.py interface.
>> pypa.json >> --------- >> >> -The file ``pypa.json`` acts as neutron configuration file for pip and other >> +The file ``pypa.json`` acts as neutral configuration file for pip and other >> tools that want to build source trees to consult for configuration. The >> absence of a ``pypa.json`` file in a Python source tree implies a setuptools >> or setuptools compatible build system. >> >> -The JSON has the following schema. Extra keys are ignored. >> +The JSON has the following schema. Extra keys are ignored, which permits the >> +use of ``pypa.json`` as a configuration file for other related tools. If doing >> +that the chosen keys must be namespaced - e.g. ``flit`` with keys under that >> +rather than (say) ``build`` or other generic keys. > > Is this going to be a file that human beings are expected to edit by > hand? > > If so, can we please not use JSON? JSON is rather hostile to humans: no > trailing commas, no possibility to add comments. Find another format that's ideally in the standard library, with as clean a language-neutral schema. Yaml isn't. Toml isn't. $bikeshed. Honestly - If this is the bit we bog down on, great - someone can spend the time finding a thing to use here, but as discussed previously, I don't care: the PEP editor that accepts the PEP can tell me what format should be there, and I'll put it there. Until then, I don't want to think about it because it's not interesting. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Wed Feb 17 18:23:14 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 18 Feb 2016 12:23:14 +1300 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 18 February 2016 at 01:44, Donald Stufft wrote: > >> On Feb 17, 2016, at 6:20 AM, Paul Moore wrote: >> >> Currently, however, people using distutils can create sdists using >> "setup.py sdist".
For other tools, this PEP does not specify how a >> sdist should be created, but it does imply that it is sufficient to >> make an archive of a source directory, including a "pypa.json" file, >> and pip will be able to consume that as a sdist. Whether tools provide >> a command to do this is out of scope for this PEP. > > > I'm writing my own alternative to this problem (was hoping to have it ready > yesterday, but I ended up with a migraine and spent most of the day MIA > instead) and it finally clicked to me why I felt that sdist needed to be a part > of this, and part of why I felt mildly uncomfortable by the PEP. > > Right now, distutils/setuptools provides a number of functions: > > * Installation from Sdist > * Building a Wheel > * Fetching Metadata > * Building a sdist > * Uploading > > It's fair to say that I think we're trying to get away from the first of those > and it's a good thing that neither of these proposals include it. In addition > both proposals have fetching metadata and building wheels built into them. > > Where they fall short is that there isn't a way to build a sdist or upload said > sdist to PyPI. The assumption is that people will just package up their local > directories and then publish them to PyPI. However I don't think that > assumption is reasonable. I don't understand why that should be part of a PEP about *consuming* sdists. They're entirely different use cases. > You could say that using twine to handle the uploading is a thing people should > do (and I agree!) but that currently relies on having static metadata inside of > the sdist that twine can parse, static metadata that isn't going to exist if > you just simply tarball up a directory on disk. So that's a good reason to have an sdist format newer than PEP 314, one that includes all the data we want to have (and that will hopefully be usually-static). But the existing sdist PEPs are sufficient to cover twine's needs today.
> So I think just tarballing up a directory on disk is *not* a reasonable way to > create a sdist, however you can't reasonably (IMO) create an independent tool > to handle this task with either of these build proposals either. Sure you can - PEP 314 already documents PKG-INFO; setup.py has been de facto - and that's been the blocker for flit - AIUI at least. > For instance, > building a sdist requires getting the version of the project, that is going to > require some logic which is (in both of these PEPs) build system specific. So > taking flit for example, in order to build a sdist of a flit using package, a > tool is going to have to implement the same version detection logic that flit > itself is using (and the same goes for other metadata like name, and > long_description). As far as I can tell, any tool to create a sdist is going to > have to be specific to a particular build tool in order to have the same > metadata that you would get building a wheel. > > The closest that either of these PEPs make it to being able to interrogate a > directory for the metadata is by asking it for the metadata that would occur > if we created a wheel file right now. While that is theoretically enough > information, it's also going to contain things which do not belong in a sdist > at all. > > In the end, I think *both* of the current proposals, as written, will cause > people to not upload sdists because neither of them really address the fact > that their intent is to remove the standard interface for doing that, without > including a replacement to that interface. I disagree that that's the intent though. The interface for a developer has the things you listed. The interface for an installer has been defined in an adhoc fashion as a subset of 'whatever setup.py does' - and we're fixing *that*, but - and see the preamble for my PEP - the whole point is to decouple the rich "do whatever developers need" side from the very narrow thing installers like pip need.
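The build-system-specific version detection Donald describes can be approximated for the common `__version__ = "..."` case without importing the module at all. This is only a sketch of the general idea, not flit's actual logic; the function name is invented:

```python
import ast

def detect_version(source):
    # Statically find a module-level `__version__ = <literal>` assignment
    # by walking the parsed AST, so no package code is executed. Build
    # tools that allow computed versions need richer, tool-specific logic,
    # which is exactly the coupling problem discussed above.
    for node in ast.parse(source).body:
        if isinstance(node, ast.Assign):
            names = [t.id for t in node.targets if isinstance(t, ast.Name)]
            if "__version__" in names:
                return ast.literal_eval(node.value)
    return None
```

An sdist-building tool that only supported this static form could stay build-system-neutral; anything fancier drags it back into reimplementing the build tool's own behaviour.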
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From noah at coderanger.net Wed Feb 17 22:08:56 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Wed, 17 Feb 2016 19:08:56 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: <4DF5ADB4-BDB1-42C7-B2F3-13262CDC7890@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> <87ECB934-BE24-42FF-9259-2336FCAA61F8@coderanger.net> <4DF5ADB4-BDB1-42C7-B2F3-13262CDC7890@twistedmatrix.com> Message-ID: <10790755-BCF4-4B6F-BBF7-23E3CE0FBAF5@coderanger.net> > On Feb 17, 2016, at 5:58 AM, Glyph Lefkowitz wrote: > > >> On Feb 16, 2016, at 6:22 PM, Noah Kantrowitz wrote: >> >> I'm not concerned with if the module is importable specifically, but I am concerned with where the files will live overall. When building generic ops tooling, being unsurprising is almost always the right move and I would be surprised if supervisor installed to a custom virtualenv. > > Would you not be surprised if installing supervisord upgraded e.g. `six? or `setuptools? and broke apport? or lsb_release? or dnf? This type of version conflict is of course rare, but it is always possible, and every 'pip install' takes the system from a supported / supportable state to "???" depending on the dependencies of every other tool which may have been installed (and pip doesn't have a constraint solver for its dependencies, so you don't even know if the system gets formally broken by two explicitly conflicting requirements). > >> It's a weird side effect of Python not having a great solution for "application packaging" I guess? 
We've got standards for web-ish applications, but not much for system services. I'm not saying I think creating an isolated "global-ish" environment would be worse, I'm saying nothing does that right now and I personally don't want to be the first because that bring a lot of pain with it :-) > > What makes the web-ish stuff "standard" is just that a lot of people are doing it. So a lot of people should start doing this, and then it will also be a standard :-). > > I can tell you that on systems where I've done this sort of thing, it has surprised no-one that I'm aware of and I have not had any issues to speak of. So I think you might be overestimating the risk. > > In fairness though I've never written a clear explanation anywhere of why this is desirable; it strikes me as obvious but it is clearly not the present best-practice, which means somebody needs to do some thought-leadering. So I owe you a blog post. Saying it's a good idea and we should move towards it is fine and I agree, but that isn't grounds to remove the ability to do things the current way. So you can warn people off from global installs but until there is at least some community awareness of this other way to do things we can't remove support entirely. It's going to be a very slow deprecation process. --Noah -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From glyph at twistedmatrix.com Wed Feb 17 22:12:32 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 17 Feb 2016 19:12:32 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: <10790755-BCF4-4B6F-BBF7-23E3CE0FBAF5@coderanger.net> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> <87ECB934-BE24-42FF-9259-2336FCAA61F8@coderanger.net> <4DF5ADB4-BDB1-42C7-B2F3-13262CDC7890@twistedmatrix.com> <10790755-BCF4-4B6F-BBF7-23E3CE0FBAF5@coderanger.net> Message-ID: > On Feb 17, 2016, at 7:08 PM, Noah Kantrowitz wrote: > > Saying it's a good idea and we should move towards it is fine and I agree, but that isn't grounds to remove the ability to do things the current way. So you can warn people off from global installs but until there is at least some community awareness of this other way to do things we can't remove support entirely. It's going to be a very slow deprecation process. > > --Noah Sure. We are also in agreement here, basically: in saying that pip should "error", I was describing an ideal state that would take years of education to get to (and I'm not sure that Donald even agrees we should go that way ;-)). But we can't even begin to move in that direction a little unless the better alternative is clearly explained and out in the zeitgeist for some time first. -glyph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noah at coderanger.net Wed Feb 17 22:17:26 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Wed, 17 Feb 2016 19:17:26 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6ICBbZmluYWwgdmVyc2lvbj9dIFBFUCA1MTPigKYp?= In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> <87ECB934-BE24-42FF-9259-2336FCAA61F8@coderanger.net> <4DF5ADB4-BDB1-42C7-B2F3-13262CDC7890@twistedmatrix.com> <10790755-BCF4-4B6F-BBF7-23E3CE0FBAF5@coderanger.net> Message-ID: <1BEAB4AA-3A35-467D-8BC9-8DD36EC26CA1@coderanger.net> > On Feb 17, 2016, at 7:12 PM, Glyph Lefkowitz wrote: > > >> On Feb 17, 2016, at 7:08 PM, Noah Kantrowitz wrote: >> >> Saying it's a good idea and we should move towards it is fine and I agree, but that isn't grounds to remove the ability to do things the current way. So you can warn people off from global installs but until there is at least some community awareness of this other way to do things we can't remove support entirely. It's going to be a very slow deprecation process. >> >> --Noah > > Sure. We are also in agreement here, basically: in saying that pip should "error", I was describing an ideal state that would take years of education to get to (and I'm not sure that Donald even agrees we should go that way ;-)). But we can't even begin to move in that direction a little unless the better alternative is clearly explained and out in the zeitgeist for some time first. Okay, then :+1: but I would be wary of using the phrasing "should just exit immediately with an error" in such a situation. 
"should" isn't wrong per se, but you don't actually want it to do that nooooow ;-) --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Wed Feb 17 22:17:43 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 17 Feb 2016 19:17:43 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6IFtmaW5hbCB2ZXJzaW9uP10gUEVQIDUxM+KApik=?= In-Reply-To: <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> Message-ID: On Tue, Feb 16, 2016 at 6:12 PM, Glyph Lefkowitz wrote: > Here, I'll make it for you. Assuming virtualenv is installed: > > python -m virtualenv /usr/lib/supervisord/environment > /usr/lib/supervisord/environment/bin/pip install supervisord > ln -vs /usr/lib/supervisord/environment/bin/supervisor* /usr/bin > > > More tooling around this idiom would of course be nifty, but this is really > all it takes. Maybe pip install --self-contained=/opt/supervisord supervisord should do something like this? -n -- Nathaniel J. 
Smith -- https://vorpus.org From glyph at twistedmatrix.com Wed Feb 17 22:44:20 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 17 Feb 2016 19:44:20 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6IFtmaW5hbCB2ZXJzaW9uP10gUEVQIDUxM+KApik=?= In-Reply-To: References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> Message-ID: <63A99B94-07B5-49FE-8F52-5169281232BE@twistedmatrix.com> > On Feb 17, 2016, at 7:17 PM, Nathaniel Smith wrote: > > On Tue, Feb 16, 2016 at 6:12 PM, Glyph Lefkowitz > wrote: >> Here, I'll make it for you. Assuming virtualenv is installed: >> >> python -m virtualenv /usr/lib/supervisord/environment >> /usr/lib/supervisord/environment/bin/pip install supervisord >> ln -vs /usr/lib/supervisord/environment/bin/supervisor* /usr/bin >> >> >> More tooling around this idiom would of course be nifty, but this is really >> all it takes. > > Maybe > > pip install --self-contained=/opt/supervisord supervisord > > should do something like this? I think making pip do this might be mixing layers too much. Frankly `pipsi` does almost the right thing; if `sudo pipsi` put script symlinks in /usr/local/bin/ instead of ~/.local/bin/ and put venvs into /usr/local/lib/pipsi// instead of ~/.local/venvs/, it would be almost exactly the right thing. (I previously said "/usr/bin/" but the whole point of /usr/local is that it's a place you can write to which _is_ on the default path but _isn't_ managed by the system package manager.) Whatever the invocation is though, Noah has a point about system administrator expectations.
If you always have to manually specify a path for --self-contained, then there's going to be no standard place to go look to see what applications are installed via this mechanism, and it makes diagnostics harder. There could of course be an option to put the install somewhere else, but if it's going to be pip, then it should be: pip install --self-contained supervisor by default, and pip install --self-contained --self-contained-environment=/opt/supervisor supervisor in the case where the user wants a non-standard location. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Wed Feb 17 22:58:33 2016 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 17 Feb 2016 19:58:33 -0800 Subject: [Distutils] =?utf-8?q?Don=27t_Use_=60sudo_pip_install=C2=B4_=28w?= =?utf-8?b?YXMgUmU6IFtmaW5hbCB2ZXJzaW9uP10gUEVQIDUxM+KApik=?= In-Reply-To: <63A99B94-07B5-49FE-8F52-5169281232BE@twistedmatrix.com> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <0AEDCCA1-02A5-47D3-B780-505BAC2035CD@twistedmatrix.com> <2996F130-5D6D-4855-8A06-15C7160D5611@coderanger.net> <4D061B1B-9BA8-4D4B-B87D-7A2528B51C46@twistedmatrix.com> <7B72FAD8-4882-4445-8769-3BF01DDFD794@twistedmatrix.com> <63A99B94-07B5-49FE-8F52-5169281232BE@twistedmatrix.com> Message-ID: <76A576F9-1EC6-492B-8DA6-EAE4E82583A4@twistedmatrix.com> > On Feb 17, 2016, at 7:44 PM, Glyph Lefkowitz wrote: > > I think making pip do this might be mixing layers too much. Frankly `pipsi` does almost the right thing; if `sudo pipsi` put script symlinks in /usr/local/bin/ instead of ~/.local/bin/ and put venvs into /usr/local/lib/pipsi// instead of ~/.local/venvs/, it would be almost exactly the right thing. I filed an issue here - https://github.com/mitsuhiko/pipsi/issues/69 - so we can continue discussion of this specific solution in a more appropriate forum.
-glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfunk at funktronics.ca Wed Feb 17 20:31:41 2016 From: jfunk at funktronics.ca (James Oakley) Date: Wed, 17 Feb 2016 17:31:41 -0800 (PST) Subject: [Distutils] multiple backports of ipaddress and a world of pain In-Reply-To: <56C42FBC.4020103@phihag.de> References: <56C36C7D.1000307@simplistix.co.uk> <56C3A33A.50907@phihag.de> <56C42667.4010503@simplistix.co.uk> <56C42FBC.4020103@phihag.de> Message-ID: <1124269866.39995.1455759101616.JavaMail.zimbra@funktronics.ca> On Wednesday, February 17 12:30, Philipp Hagemeister wrote: > Preferably, one should go fix the source to return character strings in > the first place. Usually, this includes from __future__ import > unicode_literals, io.open with an encoding parameter, and inserting > unconditional decode calls when the source really is bytes. The code in question is Django, and it seems to be good in the currently supported version. This issue only occurred in older versions. It actually surprised me that this occurred, since Django is usually quite good at ensuring unicode strings throughout the code. -- James Oakley jfunk at funktronics.ca From ben+python at benfinney.id.au Thu Feb 18 00:42:12 2016 From: ben+python at benfinney.id.au (Ben Finney) Date: Thu, 18 Feb 2016 16:42:12 +1100 Subject: [Distutils] Consumers that care about the contents of an sdist (was: abstract build system PEP update) References: Message-ID: <85bn7e377v.fsf_-_@benfinney.id.au> Nathaniel Smith writes: > Ah-ha, this is useful. The reason this hasn't been considered, at > least in my proposal, is that I think this is the first I've heard > that there is anything that cares about what's in an sdist besides > setup.py :-). Please know that OS distributions also very much care about the contents of the sdist, because that's what the distributor claims is the corresponding source for the distribution. 
So they'll want to start from the sdist and build from that, to verify the sdist constitutes the source. -- \ ?The fact of your own existence is the most astonishing fact | `\ you'll ever have to confront. Don't dare ever see your life as | _o__) boring, monotonous, or joyless.? ?Richard Dawkins, 2010-03-10 | Ben Finney From marius at gedmin.as Thu Feb 18 01:20:03 2016 From: marius at gedmin.as (Marius Gedminas) Date: Thu, 18 Feb 2016 08:20:03 +0200 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: <20160217071353.GA20295@platonas> Message-ID: <20160218062003.GA4569@platonas> On Thu, Feb 18, 2016 at 12:12:41PM +1300, Robert Collins wrote: > On 17 February 2016 at 20:13, Marius Gedminas wrote: > > On Tue, Feb 16, 2016 at 04:10:43PM +1300, Robert Collins wrote: > >> diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst > >> index a6e4712..56464f1 100644 > >> --- a/build-system-abstraction.rst > >> +++ b/build-system-abstraction.rst > >> @@ -68,12 +68,15 @@ modelled on pip's existing use of the setuptools > >> setup.py interface. > >> pypa.json > >> --------- > >> > >> -The file ``pypa.json`` acts as neutron configuration file for pip and other > >> +The file ``pypa.json`` acts as neutral configuration file for pip and other > >> tools that want to build source trees to consult for configuration. The > >> absence of a ``pypa.json`` file in a Python source tree implies a setuptools > >> or setuptools compatible build system. > >> > >> -The JSON has the following schema. Extra keys are ignored. > >> +The JSON has the following schema. Extra keys are ignored, which permits the > >> +use of ``pypa.json`` as a configuration file for other related tools. If doing > >> +that the chosen keys must be namespaced - e.g. ``flit`` with keys under that > >> +rather than (say) ``build`` or other generic keys. > > > > Is this going to be a file that human beings are expected to edit by > > hand? 
> > > > If so, can we please not use JSON? JSON is rather hostile to humans: no > > trailing commas, no possibility to add comments. > > Find another format thats ideally in the standard library, with an as > clean language-neutral schema. Yaml isn't. Toml isn't. $bikeshed. ConfigParser seems to be the only other available choice then. > Honestly - If this is the bit we bog down on, great - someone can > spend the time finding a thing to use here, but as discussed > previously, I don't care: the PEP editor that accepts the PEP can tell > me what format should be there, and I'll put it there. Until then, I > don't want to think about it because its not interesting. If it's a blob of constant metadata that describes the build system, maybe generated by the build system itself (by running some 'init-package' command, say), then I don't care either. If it's something that's going to be part of the user experience of Python packages, then I've reservations. Marius Gedminas -- Everyone has a photographic memory. Some don't have film. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From njs at pobox.com Thu Feb 18 03:43:49 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 18 Feb 2016 00:43:49 -0800 Subject: [Distutils] Consumers that care about the contents of an sdist (was: abstract build system PEP update) In-Reply-To: <85bn7e377v.fsf_-_@benfinney.id.au> References: <85bn7e377v.fsf_-_@benfinney.id.au> Message-ID: On Wed, Feb 17, 2016 at 9:42 PM, Ben Finney wrote: > Nathaniel Smith writes: > >> Ah-ha, this is useful. The reason this hasn't been considered, at >> least in my proposal, is that I think this is the first I've heard >> that there is anything that cares about what's in an sdist besides >> setup.py :-). 
> > Please know that OS distributions also very much care about the contents > of the sdist, because that's what the distributor claims is the > corresponding source for the distribution. Well, uh. Yes. That's what makes it an "sdist" rather than a "". The question was what can you assume about an sdist (in terms of structure, metadata, etc.) besides "you can build something from it by calling setup.py". -n -- Nathaniel J. Smith -- https://vorpus.org From p.f.moore at gmail.com Thu Feb 18 05:26:14 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Feb 2016 10:26:14 +0000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 17 February 2016 at 22:58, Robert Collins wrote: >> The latter option is far easier (we're basically almost there) but it >> will *not* suit build tools that can't (or won't) generate a sdist >> that conforms to PEP 314 (i.e., contains PKG-INFO). > > I still don't understand how *making* an sdist is tied into this PEP? > AIUI flit's only concern with sdists was that they didn't want a > setup.py with all the stuff we've discussed about that. Perhaps a flit > author can weigh in - generating good PEP 314 PKG-INFO shouldn't be a > problem for flit, has no Python2/3 implication, and doesn't require a > setup.py. That's why I didn't specifically refer to flit. In the context of this point, if projects produce sdists (i.e., they contain PKG-INFO) then we're fine. AFAIK, there's no PEP saying a sdist has to contain a setup.py. Backward compatibility issues as noted in your PEP may imply that a minimal "shim" setup.py should be included. I'd have to reread what you said to confirm if you mandated a shim or not[1] but that's a side issue for now. > PEP 241 and 314 do not require a setup.py to be present AFAICT. Correct (as far as *I* can tell). And that's fine.
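[For concreteness, a PEP 314 PKG-INFO file is just RFC 822-style key/value text, which the stdlib email parser reads directly. A minimal sketch; the project metadata below is invented for illustration:]

```python
from email.parser import Parser

# Hypothetical metadata; field names follow PEP 314 (Metadata-Version 1.1).
PKG_INFO = """\
Metadata-Version: 1.1
Name: example-project
Version: 0.1.0
Summary: A made-up package used to illustrate the format
Home-page: https://example.org
Author: Jane Doe
License: MIT
Classifier: Programming Language :: Python
Classifier: License :: OSI Approved :: MIT License
"""

# RFC 822 headers parse with the stdlib email machinery; repeated
# fields such as Classifier come back as a list via get_all().
meta = Parser().parsestr(PKG_INFO)
print(meta["Name"], meta["Version"])  # -> example-project 0.1.0
print(meta.get_all("Classifier"))
```

[This is why twine and the distro tools can consume an sdist without running any of its code: the descriptive metadata is static text at a known path inside the archive.]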
> I think all we need to do is make sure that its crystal clear that > we're not changing any specified part of the contract of sdists: you > should still make them, they still need to meet PEP 314.... setup.py > was never PEP'd as part of the contract. Exactly. This was what I was proposing, so I agree 100% > We are changing the defacto contract, and so we should make that clear > in the prose. And yet, this is where we have to be careful, as pip isn't the only consumer of that contract (that's the point of Donald bringing up twine). You say in another mail: > How about:""" > The existing > PEP-314 [#pep314] requirements for sdists still apply, and distutils > or setuptools > users can use ``setup.py sdist`` to create an sdist. Other tools should create > sdists compatible with PEP-314 [#pep314]. Note that pip itself does not require > PEP-314 compatibility - it does not use any of the metadata from sdists - they > are treated like source trees from disk or version control.""" While this is accurate as regards pip not using the PEP 314 metadata, I think it's important to point out that in practical terms pip *does* require PEP 314 compatible sdists, in that it *locates* source by querying PyPI for a sdist, and so it inherits PyPI's rules for what is hosted there. I don't know if PyPI checks for PEP 314 metadata, or if Warehouse will in future, but that is something we need to take into consideration. And even if they don't, we recommend using twine for uploads, and twine *does* require PEP 314. As worded, the note about pip will[2] increase pressure for a means to host the sources for that tool on PyPI, which in turn will increase pressure for PyPI/Warehouse to *not* enforce PEP 314, and in turn for twine to not enforce that PEP. We aren't ready to offer an alternative to PEP 314 yet, though. 
What I don't understand is why it matters so much to you that the proposal *doesn't* simply say "sdists should conform to PEP 314", and you keep pushing to leave the door open for people to omit the metadata. (That's in *sdists* - as you point out, pip doesn't use or need metadata in the source tree, but that's already clear in the proposal). All I can imagine is that it's because you want to allow flit users the extra flexibility. My questions for flit (and any other build tool that expects to use these proposals): 1. Can you produce PEP 314 compatible sdists? 2. If not, what are your reasons (and does this proposal change anything in that regard)? 3. Assuming you don't intend to produce PEP 314 compatible sdists, what is your recommendation to your users on how they publish source, should they wish to? 4. If your users were unable to publish source on PyPI because of your lack of a sdist, would that change your view on whether you support sdists? 5. What changes would you need to the sdist format to make it acceptable to you? Unless those questions expose a significant roadblock that flit simply cannot overcome with PEP 314 compatibility, it seems to me that there's no harm in mandating PEP 314 for sdists without qualification, and leaving further discussion about de facto behaviour, possible changes, etc, to the "sdist 2.0" discussion that we're all trying to defer. By the way - I should point out that it's pretty straightforward for a project author to *manually* create a PKG-INFO file in their flit project, bundle it up with all of the rest of the sources, name the archive correctly, and have a PEP 314 compatible sdist. Managing that PKG-INFO file in parallel with flit's own copy of the metadata is likely to be a PITA, but how is that not just a usability issue to be dealt with between the users and authors of flit? 
Paul [1] One downside of managing PEPs as PRs against the packaging specs is that there is no easy quotable term to refer to them by - we could, and probably should, use the PR number. And as a consequence, it's hard to quickly find the text because there's no simple form for the URL (again mitigated by using the PR number). [2] Once again, this is all on the assumption that tools will be developed that use the build system abstraction and can't or won't produce PEP 314 compatible sdists. Otherwise why are we even arguing? If no-one has a problem following PEP 314, we should just note that this proposal doesn't change the fact that it is a requirement for a sdist and stop worrying. From donald at stufft.io Thu Feb 18 06:05:21 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Feb 2016 06:05:21 -0500 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: <87EEB24C-DA75-4795-8092-0DB5B8709D8E@stufft.io> > On Feb 18, 2016, at 5:26 AM, Paul Moore wrote: > > [1] One downside of managing PEPs as PRs against the packaging specs > is that there is no easy quotable term to refer to them by - we could, > and probably should, use the PR number. And as a consequence, it's > hard to quickly find the text because there's no simple form for the > URL (again mitigated by using the PR number). I've merged the two PRs and given them numbers, Robert's PEP is PEP 516 and Nathaniel's is PEP 517. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From olivier.grisel at ensta.org Thu Feb 18 07:41:41 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 18 Feb 2016 13:41:41 +0100 Subject: [Distutils] [final version?]
PEP 513 - A Platform Tag for Portable Linux Built Distributions In-Reply-To: <54A6AF32-86AC-4BE9-95B4-292F5FA50441@stufft.io> References: <56AFEC55.30706@ubuntu.com> <5CC1AD90-17DB-4349-9968-BBF466E35961@twistedmatrix.com> <56C30265.6020008@ubuntu.com> <56C355FF.4020904@ubuntu.com> <20160216133313.1c172810@anarchist.wooz.org> <54A6AF32-86AC-4BE9-95B4-292F5FA50441@stufft.io> Message-ID: Strong +1 for all Donald's proposals. -- Olivier From ncoghlan at gmail.com Thu Feb 18 08:32:45 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Feb 2016 23:32:45 +1000 Subject: [Distutils] multiple backports of ipaddress and a world of pain In-Reply-To: <56C36C7D.1000307@simplistix.co.uk> References: <56C36C7D.1000307@simplistix.co.uk> Message-ID: On 17 February 2016 at 04:37, Chris Withers wrote: > Hi All, > > (Apologies for copying in the maintainers of the two backports and > django-netfields directly, I'm not sure you're on this distutils list...) > > This is painful and horrible, and I wish pip would prevent > modules/packages with the same name being installed by different > distributions at the same time, but even if it did, that would just force > something to happen rather than this: > > So, RHEL7, for worse or worse, ships with Python 2.7.5. It's 2.7.5 + important security backports, so any package that relies on PEP 466 features like ssl.create_default_context() should be fine in 7.2+. 
(You can also switch on default certificate verification if you want it: https://access.redhat.com/articles/2039753 ) > That means to keep pip happy, you need to do these dances in all the > virtualenvs you create: > > > http://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning > http://urllib3.readthedocs.org/en/latest/security.html#pyopenssl If urllib3 is actually using version detection rather than feature detection as recommended in PEP 466 ( https://www.python.org/dev/peps/pep-0466/#backwards-compatibility-considerations), then that's a missing bug report against urllib3 > One of those extra packages drags in this backport: > > https://pypi.python.org/pypi/ipaddress > > Yay! Now we have a happy pip talking to both PyPI and our internal DevPI > server! > > Right, so in a Django project I need to use > https://pypi.python.org/pypi/django-netfields. This, however, chooses > this backport instead: > > https://pypi.python.org/pypi/py2-ipaddress > > So, now we have two packages installing ipaddress.py, except they're two > very different versions and make different assumptions about what to do > with Python 2 strings. > > What should happen here? (other than me crying a lot...) > It looks like you found a resolution to this part of the problem, but those dependencies should only be needed on 7.0 and 7.1 Unfortunately, I missed this use case when PEP 508 was being defined, so there's currently no capability for Python level dependencies to be conditional on the presence or absence of particular attributes in other modules :( Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... 
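[The feature detection PEP 466 recommends, and which Nick notes the dependency-marker syntax cannot yet express at install time, is a one-liner at runtime. The fallback branch below is a placeholder for what a project might do, not urllib3's actual logic:]

```python
import ssl

# Probe for the backported API instead of checking version numbers:
# sys.version_info cannot see PEP 466 backports in a distro Python.
HAS_MODERN_SSL = hasattr(ssl, "create_default_context")

if HAS_MODERN_SSL:
    # Certificate verification and sane defaults come from the stdlib.
    context = ssl.create_default_context()
else:
    # On older interpreters a project would pull in PyOpenSSL/certifi
    # here; this branch is only a sketch of that fallback.
    context = None

print(HAS_MODERN_SSL)
```

[The gap in PEP 508 is precisely that this `hasattr` probe has no equivalent in an environment marker, so a dependency like PyOpenSSL cannot be made conditional on it declaratively.]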
URL: From ncoghlan at gmail.com Thu Feb 18 08:48:01 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Feb 2016 23:48:01 +1000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 18 February 2016 at 07:30, Paul Moore wrote: > On 17 February 2016 at 21:01, Nathaniel Smith wrote: > > On Feb 17, 2016 4:44 AM, "Donald Stufft" wrote: > >> > > [...] > >> You could say that using twine to handle the uploading is a thing people > >> should > >> do (and I agree!) but that currently relies on having static metadata > >> inside of > >> the sdist that twine can parse, static metadata that isn't going to > exist > >> if > >> you just simply tarball up a directory on disk. > > > > Ah-ha, this is useful. The reason this hasn't been considered, at least > in > > my proposal, is that I think this is the first I've heard that there is > > anything that cares about what's in an sdist besides setup.py :-). > > Yes, twine is a good, concrete example. > I believe tools like pyp2rpm, conda skeleton, py2dsc and fpm also rely on that static sdist metadata (I'm not 100% sure on that, but it would make sense for them to do so). We just spend so much time worrying about the dependency management problems that PKG-INFO *doesn't* handle that we forget the simpler descriptive metadata that it covers accurately :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Feb 18 09:33:16 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Feb 2016 14:33:16 +0000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: Yes PKG-INFO is reasonably useful and easy to hand-edit. It would be easy to maintain. If you were doing it manually the only thing you would need to update frequently would be the version number. 
On Thu, Feb 18, 2016 at 8:48 AM Nick Coghlan wrote: > On 18 February 2016 at 07:30, Paul Moore wrote: > >> On 17 February 2016 at 21:01, Nathaniel Smith wrote: >> > On Feb 17, 2016 4:44 AM, "Donald Stufft" wrote: >> >> >> > [...] >> >> You could say that using twine to handle the uploading is a thing >> people >> >> should >> >> do (and I agree!) but that currently relies on having static metadata >> >> inside of >> >> the sdist that twine can parse, static metadata that isn't going to >> exist >> >> if >> >> you just simply tarball up a directory on disk. >> > >> > Ah-ha, this is useful. The reason this hasn't been considered, at least >> in >> > my proposal, is that I think this is the first I've heard that there is >> > anything that cares about what's in an sdist besides setup.py :-). >> >> Yes, twine is a good, concrete example. >> > > I believe tools like pyp2rpm, conda skeleton, py2dsc and fpm also rely on > that static sdist metadata (I'm not 100% sure on that, but it would make > sense for them to do so). > > We just spend so much time worrying about the dependency management > problems that PKG-INFO *doesn't* handle that we forget the simpler > descriptive metadata that it covers accurately :) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Feb 18 10:00:56 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Feb 2016 15:00:56 +0000 Subject: [Distutils] abstract build system PEP update In-Reply-To: References: Message-ID: On 18 February 2016 at 13:48, Nick Coghlan wrote: >> Yes, twine is a good, concrete example. 
> > I believe tools like pyp2rpm, conda skeleton, py2dsc and fpm also rely on > that static sdist metadata (I'm not 100% sure on that, but it would make > sense for them to do so). Yeah, I'm sure there are a lot of tools out there that use the sdist metadata. In a lot of ways, pip is an extremely atypical example of a sdist consumer. > We just spend so much time worrying about the dependency management problems > that PKG-INFO *doesn't* handle that we forget the simpler descriptive > metadata that it covers accurately :) Indeed - and by focusing solely on pip's requirements in many ways we're designing for an edge case (admittedly a hugely significant and by far the most used edge case, but even so...) :-) Paul From ethan at stoneleaf.us Thu Feb 18 12:06:47 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 18 Feb 2016 09:06:47 -0800 Subject: [Distutils] PEP 516 and pypa.json Message-ID: <56C5FA27.20305@stoneleaf.us> Greetings! I saw PEP-0516 go through check-ins, and had a question about the pypa.json portion of the proposal -- namely, why are we using a .json file? I presume this is a file that will be created by hand, and while json is not as ugly as xml, it's certainly not pretty. Can we not use an .ini file, or a white-space significant format? The latter should be familiar, if not comfortable, to any Python programmer. -- ~Ethan~ From njs at pobox.com Thu Feb 18 14:32:17 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 18 Feb 2016 11:32:17 -0800 Subject: [Distutils] PEP 516 and pypa.json In-Reply-To: <56C5FA27.20305@stoneleaf.us> References: <56C5FA27.20305@stoneleaf.us> Message-ID: On Thu, Feb 18, 2016 at 9:06 AM, Ethan Furman wrote: > Greetings! > > I saw PEP-0516 go through check-ins, and had a question about the pypa.json > portion of the proposal -- namely, why are we using a .json file? > > I presume this is a file that will be created by hand, and while json is not > as ugly as xml, it's certainly not pretty. 
> > Can we not use an .ini file, or a white-space significant format? The > latter should be familiar, if not comfortable, to any Python programmer. These are still draft proposals, and the actual file format is the least interesting question to discuss. Think of the JSON thing as being a placeholder for now :-). There is no obvious solution, because .ini files are extremely underspecified, nothing else is in the stdlib, yaml contains its own share of gibbering horrors, toml is not widely used and is associated with an extremely divisive figure, etc. etc. Don't worry, though: there will be bikeshedding. Ideally *after* the more substantive issues are settled :-) -n -- Nathaniel J. Smith -- https://vorpus.org From dholth at gmail.com Thu Feb 18 15:00:17 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Feb 2016 20:00:17 +0000 Subject: [Distutils] PEP 516 and pypa.json In-Reply-To: References: <56C5FA27.20305@stoneleaf.us> Message-ID: I'm still pulling for RFC 822 format :) On Thu, Feb 18, 2016, 14:33 Nathaniel Smith wrote: > On Thu, Feb 18, 2016 at 9:06 AM, Ethan Furman wrote: > > Greetings! > > > > I saw PEP-0516 go through check-ins, and had a question about the > pypa.json > > portion of the proposal -- namely, why are we using a .json file? > > > > I presume this is a file that will be created by hand, and while json is > not > > as ugly as xml, it's certainly not pretty. > > > > Can we not use an .ini file, or a white-space significant format? The > > latter should be familiar, if not comfortable, to any Python programmer. > > These are still draft proposals, and the actual file format is the > least interesting question to discuss. Think of the JSON thing as > being a placeholder for now :-). > > There is no obvious solution, because .ini files are extremely > underspecified, nothing else is in the stdlib, yaml contains its own > share of gibbering horrors, toml is not widely used and is associated > with an extremely divisive figure, etc. etc. 
Don't worry, though: > there will be bikeshedding. Ideally *after* the more substantive > issues are settled :-) > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Thu Feb 18 19:44:11 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 18 Feb 2016 16:44:11 -0800 Subject: [Distutils] PEP 516 and pypa.json In-Reply-To: References: <56C5FA27.20305@stoneleaf.us> Message-ID: <56C6655B.9090809@stoneleaf.us> On 02/18/2016 12:00 PM, Daniel Holth wrote: > On Thu, Feb 18, 2016 at 9:06 AM, Ethan Furman wrote: >> I saw PEP-0516 go through check-ins, and had a question about the >> pypa.json portion of the proposal -- namely, why are we using a >> .json file? >> >> I presume this is a file that will be created by hand, and while >> json is not as ugly as xml, it's certainly not pretty. > I'm still pulling for RFC 822 format :) You know what? xml is okay.* ;) -- ~Ethan~ * Not really! Just kidding! From chris at simplistix.co.uk Fri Feb 19 01:59:22 2016 From: chris at simplistix.co.uk (Chris Withers) Date: Fri, 19 Feb 2016 06:59:22 +0000 Subject: [Distutils] multiple backports of ipaddress and a world of pain In-Reply-To: References: <56C36C7D.1000307@simplistix.co.uk> Message-ID: <56C6BD4A.7070202@simplistix.co.uk> An HTML attachment was scrubbed... URL: From lele at metapensiero.it Fri Feb 19 16:22:06 2016 From: lele at metapensiero.it (Lele Gaifax) Date: Fri, 19 Feb 2016 22:22:06 +0100 Subject: [Distutils] Question about package metainfo shown on PyPI Message-ID: <87wpq0o0ox.fsf@metapensiero.it> Hi all, a friend showed me the case of the package "django-ddp": its PyPI page[1] contains a section "Requires Distributions", apparently matching the package's setup.py declarations[2].
Since very few other PyPI pages contain that information, surely not those describing my packages, even if their setup declarations look very similar to the d-ddp one. Is there some magic at work here, or is it just that Django's people manually tweak the settings of their packages directly thru the admin pages on PyPI? Thanks for any clarification, ciao, lele. [1] https://pypi.python.org/pypi/django-ddp/0.19.1 [2] https://github.com/django-ddp/django-ddp/blob/develop/setup.py#L206 -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From ncoghlan at gmail.com Sat Feb 20 08:21:44 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Feb 2016 23:21:44 +1000 Subject: [Distutils] multiple backports of ipaddress and a world of pain In-Reply-To: <56C6BD4A.7070202@simplistix.co.uk> References: <56C36C7D.1000307@simplistix.co.uk> <56C6BD4A.7070202@simplistix.co.uk> Message-ID: On 19 February 2016 at 16:59, Chris Withers wrote: > Hi Nick, > > On 18/02/2016 13:32, Nick Coghlan wrote: > > On 17 February 2016 at 04:37, Chris Withers > wrote: > >> >> So, RHEL7, for worse or worse, ships with Python 2.7.5. > > > It's 2.7.5 + important security backports, so any package that relies on > PEP 466 features like ssl.create_default_context() should be fine in 7.2+. > (You can also switch on default certificate verification if you want it: > https://access.redhat.com/articles/2039753 )
If fast adoption of new distro versions was entirely typical we wouldn't still be having to encourage people to stop running their own applications in the RHEL 6 system Python :) > > > It looks like you found a resolution to this part of the problem, but > those dependencies should only be needed on 7.0 and 7.1 > > Unfortunately, I missed this use case when PEP 508 was being defined, so > there's currently no capability for Python level dependencies to be > conditional on the presence or absence of particular attributes in other > modules :( > > Not sure that's such a biggie here, I'd more like to see pip at least > notice that it's trying to install two files into the same location. > It's mainly a note to myself that there's a current gap in our backporting story there, since it means folks *can't* currently do ssl module feature detection at installation time to decide if they need PyOpenSSL as a dependency or not. At the moment that only impacts RHEL 7.2+, but if PEP 493 gets accepted, the omission may end up affecting other LTS Linux distros as well. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From enischte at gmail.com Mon Feb 22 19:53:14 2016 From: enischte at gmail.com (Matthias Goerner) Date: Mon, 22 Feb 2016 16:53:14 -0800 Subject: [Distutils] Can't build C++11 libraries with distutils.command.build_clib because it lacks extra_compile_args Message-ID: Hi! I am trying to use distutils.command's build_clib to compile C++11 code but had to monkey patch build_clib to support the extra_compile_args "-std=c++11", see my_build_libraries in http://sageregina.unhyperbolic.org/setup.py . As you can see, it was only adding the line extra_postargs = build_info.get('extra_compile_args')) to objects = self.compiler.compile( sources, output_dir=self.build_temp, macros=macros, include_dirs=include_dirs, ...) in build_clib.py. 
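Pulled together as a subclass rather than a monkey-patch (the less hacky route Erik suggests later in the thread), the change described above might look like the sketch below. This is illustrative only: build_clib's internals are undocumented and differ between Python versions, and the class name build_clib_cxx is made up for the example.

```python
# Sketch of a build_clib subclass that passes an 'extra_compile_args'
# key from build_info through to the compiler, instead of monkey-patching.
try:
    from distutils.command.build_clib import build_clib
except ImportError:  # newer Pythons without distutils: use setuptools' copy
    from setuptools.command.build_clib import build_clib

class build_clib_cxx(build_clib):
    """build_clib that honours an 'extra_compile_args' key in build_info."""

    def build_libraries(self, libraries):
        for lib_name, build_info in libraries:
            sources = list(build_info.get('sources'))
            self.announce("building '%s' library" % lib_name)
            objects = self.compiler.compile(
                sources,
                output_dir=self.build_temp,
                macros=build_info.get('macros'),
                include_dirs=build_info.get('include_dirs'),
                debug=self.debug,
                # the one-line addition from the message above:
                extra_postargs=build_info.get('extra_compile_args'))
            self.compiler.create_static_lib(
                objects, lib_name,
                output_dir=self.build_clib, debug=self.debug)
```

Registering it via cmdclass={'build_clib': build_clib_cxx} in setup() then lets a library entry carry 'extra_compile_args': ['-std=c++11'].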
Could you make this change to distutils so that it is available for everyone and I can eventually stop monkey-patching? Best wishes, Matthias Goerner -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl at personnelware.com Mon Feb 22 20:20:18 2016 From: carl at personnelware.com (Carl Karsten) Date: Mon, 22 Feb 2016 19:20:18 -0600 Subject: [Distutils] how about -c config.ini Message-ID: What are the chances of being able to pass the full path to a config file as a command line option? https://docs.python.org/2/install/index.html#location-and-names-of-config-files setup.cfg (3) ... I.e., in the current directory (usually the location of the setup script). The thing is, when running a script in Travis it may not be obvious what the current dir is, and being able to point to the desired config file would be ... desired. -- Carl K -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.m.bray at gmail.com Thu Feb 25 12:19:12 2016 From: erik.m.bray at gmail.com (Erik Bray) Date: Thu, 25 Feb 2016 18:19:12 +0100 Subject: [Distutils] Can't build C++11 libraries with distutils.command.build_clib because it lacks extra_compile_args In-Reply-To: References: Message-ID: On Tue, Feb 23, 2016 at 1:53 AM, Matthias Goerner wrote: > Hi! > > I am trying to use distutils.command's build_clib to compile C++11 code but > had to monkey patch build_clib to support the extra_compile_args > "-std=c++11", see my_build_libraries in > http://sageregina.unhyperbolic.org/setup.py . > > As you can see, it was only adding the line > > extra_postargs = build_info.get('extra_compile_args')) > > to > > objects = self.compiler.compile( > sources, > output_dir=self.build_temp, > macros=macros, > include_dirs=include_dirs, > ...) > > > in build_clib.py. Could you make this change to distutils so that it is > available for everyone and I can eventually stop monkey-patching? Hi, I don't believe build_clib is really supported.
You can't even find much reference to it or how to use it in the documentation. I believe it's a legacy of distutils' earlier naive attempts to claim to support compilation of C/C++ libraries, but that's too lacking in flexibility to really be useful beyond the most trivial cases. Hence a decade+ of frustration with using distutils/setuptools to build projects that rely on non-Python libraries and various outcroppings of third-party projects intending to improve on that and the like. You probably won't get distutils patched, but what you can always do is subclass commands to add additional functionality. This is less hacky than monkey-patching and is generally well supported. The long tradition of this is why distutils is nearly impossible to change :) Erik From wolfgang.maier at biologie.uni-freiburg.de Mon Feb 29 11:00:36 2016 From: wolfgang.maier at biologie.uni-freiburg.de (Wolfgang Maier) Date: Mon, 29 Feb 2016 17:00:36 +0100 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: <56D46B24.6000700@biologie.uni-freiburg.de> On 17.02.2016 11:09, Paul Moore wrote: > On 16 February 2016 at 22:52, Robert Collins wrote: >>> An alternative would be great, though I can probably fake things somehow for >>> my purposes. >> >> Sounds similar to Daniel's need - and again, --prefix + setting PATH >> and PYTHONPATH would be better. > > Note that if I read the help for --prefix correctly, "pip install > --target x foo" puts foo in x, whereas "pip install --prefix x foo" > puts foo in x/lib. So how would setting prefix allow me to put foo in > x, and not in a subdirectory? That is specifically my requirement (and > the vendoring requirement in general). I just discovered a way that lets you do just that, but before using it for my own code I'd like to know whether you would consider it a weird hack that will probably not work in the future or something reasonable that could be used in production code?
Here's the trick: write a temporary distutils setup.cfg file in the current working directory with the content: [install] install-lib=abspath/to/target_dir then run pip from that directory like so: pip install packagexy --prefix abspath/to/target_dir Of note, combining a local setup.cfg file and --prefix like this isn't documented and to me it wasn't even clear whether the file would be expected in the current working directory that pip gets run in or in the downloaded package. What is more, the local setup.cfg file can also be used to specify a complete installation scheme for use without the --prefix option. As I said I'm really interested in your opinions, Wolfgang From wolfgang.maier at biologie.uni-freiburg.de Mon Feb 29 11:55:44 2016 From: wolfgang.maier at biologie.uni-freiburg.de (Wolfgang Maier) Date: Mon, 29 Feb 2016 17:55:44 +0100 Subject: [Distutils] deprecating pip install --target In-Reply-To: <56D46B24.6000700@biologie.uni-freiburg.de> References: <56D46B24.6000700@biologie.uni-freiburg.de> Message-ID: <56D47810.1060902@biologie.uni-freiburg.de> On 29.02.2016 17:00, Wolfgang Maier wrote: > > I just discovered a way that lets you do just that, but before using it > for my own code I'd like to know whether you would consider it a weird > hack that will probably not work in the future or something reasonable > that could be used in production code? Here's the trick: > > write a temporary distutils setup.cfg file in the current working > directory with the content: > > [install] > install-lib=abspath/to/target_dir > > then run pip from that directory like so: > > pip install packagexy --prefix abspath/to/target_dir > > Of note, combining a local setup.cfg file and --prefix like this isn't > documented and to me it wasn't even clear whether the file would be > expected in the current working directory that pip gets run in or in the > downloaded package. 
> Answering my own question: it's a messy hack that will not work from a virtualenv because distutils will ignore the local cfg file then. From jonwayne at google.com Mon Feb 29 16:24:45 2016 From: jonwayne at google.com (Jon Parrott) Date: Mon, 29 Feb 2016 21:24:45 +0000 Subject: [Distutils] deprecating pip install --target Message-ID: Currently, Google App Engine recommends that users use "--target" to vendor packages: https://cloud.google.com/appengine/docs/python/tools/libraries27?hl=en#vendoring Ostensibly we also support virtualenvs with only pure-python packages. > Alternatively, it should be possible to *detect* the problem cases, so > why not do that, and reject them?
> Effectively, reduce the scope of > --target to pure-python wheels only. This would be acceptable for the App Engine use case. -------------- next part -------------- An HTML attachment was scrubbed... URL:
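For completeness, the setup.cfg trick from Wolfgang's messages earlier in the thread can be sketched in a few lines. This is only a sketch under the caveats he states himself: the interaction between a local setup.cfg and --prefix is undocumented, and his follow-up notes that it is ignored inside a virtualenv. The helper name and target_dir value are illustrative.

```python
import os
import tempfile

def write_install_cfg(workdir, target_dir):
    """Write a local distutils setup.cfg that redirects install-lib."""
    path = os.path.join(workdir, "setup.cfg")
    with open(path, "w") as f:
        f.write("[install]\ninstall-lib=%s\n" % os.path.abspath(target_dir))
    return path

workdir = tempfile.mkdtemp()
cfg = write_install_cfg(workdir, "target_dir")
# One would then run pip *from* workdir so distutils picks up the
# local cfg (per the thread, this does not work inside a virtualenv):
#   pip install packagexy --prefix /abs/path/to/target_dir
print(open(cfg).read())
```

The point of going through a file at all is that distutils reads setup.cfg from the directory pip is invoked in, which is what makes the behaviour both possible and fragile.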