From marius at gedmin.as Tue Dec 1 01:38:02 2015
From: marius at gedmin.as (Marius Gedminas)
Date: Tue, 1 Dec 2015 08:38:02 +0200
Subject: [Distutils] namespace_package
In-Reply-To:
References:
Message-ID: <20151201063802.GB8007@platonas>

On Mon, Nov 30, 2015 at 06:59:31PM -0500, KP wrote:
> I'm not sure where the issue is, but when I specify a namespace_package in
> the setup.py file, I can indeed have multiple packages with the same base
> (foo.bar, foo.blah, etc...). The files all install into the same
> directory. It drops the foo/__init__.py that would be doing the
> extend_path, and instead adds a ".pth" file that is a bit over my head.
>
> The problem is that it does not seem to traverse the entire sys.path to
> find multiple foo packages.

Does every foo.x package specify namespace_packages=['foo']?

Do they all ship an identical foo/__init__.py with

    import pkg_resources
    pkg_resources.declare_namespace(__name__)

?

AFAIU you need both things in every package, if you want to use
namespace packages.

> If I do not specify namespace_packages and instead just use the
> pkgutil.extend_path, then this seems to allow the packages to be in
> multiple places in the sys.path.
>
> Is there something additional for the namespace_package that I need to
> specify in order for all of the sys.path to be checked?
>
> I'm using 18.5 setuptools... but I am not sure if this somehow ties in to
> wheel/pip, since I'm using that for the actual install.

Marius Gedminas
--
Give a man a computer program and you give him a headache, but teach him to
program computers and you give him the power to create headaches for others
for the rest of his life...
                -- R. B. Forest
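The pkgutil.extend_path variant that KP contrasts with the setuptools approach can be exercised end to end without installing anything. The following is a runnable sketch, not the thread's actual setup: the temporary directories and the foo.bar / foo.more names are made up to mirror the thread's examples. Two separate sys.path roots each ship an identical foo/__init__.py, and both halves of the namespace become importable.

```python
# Sketch: two separate directories each contribute one "foo.*" package;
# pkgutil.extend_path stitches them into a single namespace at import time.
# (The temp dirs and the foo.bar / foo.more names are illustrative only.)
import os
import sys
import tempfile

SHARED_INIT = (
    "from pkgutil import extend_path\n"
    "__path__ = extend_path(__path__, __name__)\n"
)

roots = []
for sub in ("bar", "more"):
    root = tempfile.mkdtemp()
    os.makedirs(os.path.join(root, "foo", sub))
    # The shared foo/__init__.py must be identical in every participant.
    with open(os.path.join(root, "foo", "__init__.py"), "w") as f:
        f.write(SHARED_INIT)
    with open(os.path.join(root, "foo", sub, "__init__.py"), "w") as f:
        f.write("NAME = %r\n" % sub)
    roots.append(root)

sys.path[:0] = roots    # both roots are on sys.path; neither is "installed"

import foo.bar
import foo.more         # found in the *other* root via extend_path
print(foo.bar.NAME, foo.more.NAME)
```

This is the behavior KP reports as working: extend_path rescans all of sys.path, so a plain checkout directory participates in the namespace as long as it ships the same one-line __init__.py.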
From patter001 at gmail.com Tue Dec 1 07:17:58 2015
From: patter001 at gmail.com (KP)
Date: Tue, 1 Dec 2015 07:17:58 -0500
Subject: [Distutils] namespace_package
In-Reply-To: <20151201063802.GB8007@platonas>
References: <20151201063802.GB8007@platonas>
Message-ID:

Yes, both of those statements are true.

However, with namespace_packages = ['foo'], the
lib\site-packages\foo\__init__.py does not get installed (even though it
is in the source tree). Instead there's just a dir with
"foo/bar/__init__.py" and "foo/blah/__init__.py". I will try to look in
the "wheel" side of things next, I guess. Perhaps pip is doing something,
since it seems to install even source distributables by first converting
to a wheel.

_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig

From ubernostrum at gmail.com Tue Dec 1 18:46:58 2015
From: ubernostrum at gmail.com (James Bennett)
Date: Tue, 1 Dec 2015 17:46:58 -0600
Subject: [Distutils] Versioned trove classifiers for Django
In-Reply-To:
References:
Message-ID:

Reviving this old thread because today is Django 1.9's release date and
I'm unsure of the process for keeping up with newly released versions in
trove classifiers. Do we need to manually poke someone each time (as with
today, when "Framework :: Django :: 1.9" becomes a thing), or is there a
way to automate it?

From richard at python.org Tue Dec 1 19:54:53 2015
From: richard at python.org (Richard Jones)
Date: Wed, 2 Dec 2015 11:54:53 +1100
Subject: [Distutils] Versioned trove classifiers for Django
In-Reply-To:
References:
Message-ID:

At the moment it's a manual poke, but I have done this thing right now.

From marius at gedmin.as Wed Dec 2 01:52:50 2015
From: marius at gedmin.as (Marius Gedminas)
Date: Wed, 2 Dec 2015 08:52:50 +0200
Subject: [Distutils] namespace_package
In-Reply-To:
References: <20151201063802.GB8007@platonas>
Message-ID: <20151202065250.GA6906@platonas>

On Tue, Dec 01, 2015 at 07:17:58AM -0500, KP wrote:
> However, with the namespace_packages = ['foo'], the
> lib\site-packages\foo\__init__.py does not get installed (even though
> it is in the source tree). Instead there's just a dir with
> "foo/bar/__init__.py" and "foo/blah/__init__.py".

Can you show us your setup.py?

Marius Gedminas
--
I'm not big on predictions, but I do have one for 2011: HTML5 will
continue to be popular, because anything popular will get labeled "HTML5."
                -- Mark Pilgrim

From patter001 at gmail.com Tue Dec 1 16:29:13 2015
From: patter001 at gmail.com (KP)
Date: Tue, 1 Dec 2015 16:29:13 -0500
Subject: [Distutils] namespace_package
In-Reply-To:
References: <20151201063802.GB8007@platonas>
Message-ID:

Just to recap:

1. if you don't put namespace_packages in the setup.py, then it will
uninstall the shared __init__.py when you uninstall any of the packages
2.
If you put namespace_packages, then there is a pth file created for the
shared directory (site-packages/foo) and no foo/__init__.py is created
(even if it is in your package)

#2 breaks things like doing a source checkout that participates in this
namespace_package... If you do this then only the lib/site-packages/foo/
packages are importable

Solution appears to be:

From patter001 at gmail.com Tue Dec 1 16:31:27 2015
From: patter001 at gmail.com (KP)
Date: Tue, 1 Dec 2015 16:31:27 -0500
Subject: [Distutils] namespace_package
In-Reply-To:
References: <20151201063802.GB8007@platonas>
Message-ID:

(sorry for the stupid previous early send)

Just to recap:

1. if you don't put namespace_packages in the setup.py, then it will
uninstall the shared __init__.py when you uninstall any of the packages
2. If you put namespace_packages, then there is a pth file created for
the shared directory (site-packages/foo) and no foo/__init__.py is
created (even if it is in your package)

#2 breaks things like doing a source checkout that participates in this
namespace_package... If you do this then only the lib/site-packages/foo/
packages are importable

Solution appears to be: create a standalone "foo" package that has ONLY
the shared __init__.py, and do NOT set "namespace_packages". This seems
to associate the __init__.py with the "foo" tool, so that only when you
uninstall the "foo" tool does the __init__.py get uninstalled.
It also has the shared __init__.py in lib/site-packages, so it seems that
enables the other namespace packages that are source checkouts or added
to the path via other methods.

Hopefully this works long term, and maybe it is useful to someone else
out there....

Thanks,

From zvezdanpetkovic at gmail.com Wed Dec 2 00:17:14 2015
From: zvezdanpetkovic at gmail.com (Zvezdan Petkovic)
Date: Tue, 1 Dec 2015 21:17:14 -0800
Subject: [Distutils] namespace_package
In-Reply-To:
References: <20151201063802.GB8007@platonas>
Message-ID:

Hi KP,

> On Dec 1, 2015, at 4:17 AM, KP wrote:
>
> Yes, both of those statements are true.
>
> However, with namespace_packages = ['foo'], the
> lib\site-packages\foo\__init__.py does not get installed (even though
> it is in the source tree).

This is exactly how it's supposed to behave for namespace packages.
The *.pth file will take care of providing info about your namespace to
the Python importer.

> Instead there's just a dir with "foo/bar/__init__.py" and
> "foo/blah/__init__.py".

These are regular packages. Hence they preserve their __init__.py.

Hope this helps.

Regards,

        Zvezdan

From patter001 at gmail.com Wed Dec 2 21:46:23 2015
From: patter001 at gmail.com (KP)
Date: Wed, 2 Dec 2015 21:46:23 -0500
Subject: [Distutils] namespace_package
In-Reply-To:
References: <20151201063802.GB8007@platonas>
Message-ID:

It appears that the .pth file only adds the lib/site-packages/foo/ module
to the path, so that part _works_, but that seems to only work for
modules installed into lib/site-packages. So if I have foo.bar with this
setup.py:

    setup(
        name="foo.bar",
        version=str(__version__),
        packages=["foo", "foo.bar"],
        namespace_packages=['foo'],
    )

there are a couple of possible use cases to go with this package:

1. --- a simple pth file pointing to another participant in this
namespace package ---

I also have a pth file containing:

    c:\test

and the directory contents of c:\test are:

    c:\test\foo\__init__.py       (with the typical pkgutil.extend_path)
    c:\test\foo\more\__init__.py

The import mechanisms only find the "foo.bar" I installed and not the
foo.more.

2. --- similar to #1, but maybe slightly more conventional ---

Assume the same "foo.bar" and the setup.py provided above. If I'm working
on another "foo.more" package that participates in this namespace
package, then a pip install from the source will _not_ work.
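When only some members of a namespace are found, the quickest diagnostic is the namespace package's __path__, which lists exactly the directories that were merged into it. A small helper sketch (the name namespace_dirs is made up here, and "foo" stands for whatever namespace is being debugged):

```python
# Debugging sketch: a package's __path__ lists the directories the import
# system merged for it. For a namespace package spanning sys.path, every
# participating root should show up here.
import importlib

def namespace_dirs(name):
    """Return the directories that the package `name` spans."""
    pkg = importlib.import_module(name)
    return list(getattr(pkg, "__path__", []))

# For KP's case: if a source checkout on sys.path is missing from
# namespace_dirs("foo"), its foo/__init__.py (or the .pth handling)
# was never consulted during the import.
```

In KP's scenario above, c:\test would be expected in the result but is absent, which is exactly the symptom he describes.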
The workaround I posted earlier (an additional package that is NOT a
namespace package, and is needed to force an __init__.py to exist in
lib/site-packages/foo) works, but it feels like a bit of a hack, and the
"-pkg.pth" created for foo.bar should have worked. (work = allow
namespace packages in multiple directories that are in sys.path)

From zvezdanpetkovic at gmail.com Wed Dec 2 21:54:58 2015
From: zvezdanpetkovic at gmail.com (Zvezdan Petkovic)
Date: Wed, 2 Dec 2015 18:54:58 -0800
Subject: [Distutils] namespace_package
In-Reply-To:
References: <20151201063802.GB8007@platonas>
Message-ID:

Hi KP,

Maybe I didn't follow the thread from the beginning, but your use case is
not clear to me.

> On Dec 1, 2015, at 1:31 PM, KP wrote:
>
> 1. if you don't put namespace_packages in the setup.py, then it will
> uninstall the shared __init__.py when you uninstall any of the packages

Right.
You need to declare the namespace in setup.py.

> 2. If you put namespace_packages, then there is a pth file created for
> the shared directory (site-packages/foo) and no foo/__init__.py is
> created (even if it is in your package)

Correct. That is exactly the behavior that's expected. There can be many
packages in the namespace (e.g., zope.* or zope.app.*) and they all share
the namespace, but none of them owns that __init__.py. That's why it's
not installed there and is replaced with a *.pth file.

> #2 breaks things like doing a source checkout that participates in this
> namespace_package... If you do this then only the lib/site-packages/foo/
> packages are importable

Again, it's not clear what you are referring to. Are you doing a source
checkout over your installed packages / virtual environment? I sure hope
not. You can use pip --editable for adding your source checkouts to a
virtual environment or installed packages.

If you use build tools (such as zc.buildout) and install your packages,
they will end up over several egg / wheel directories "parallel" to each
other, but the proper namespace declaration will ensure that they are all
importable. You cannot even do the source checkout over such an
installation.

It looks like you are trying to find a workaround for a problem that
perhaps is not a problem at all if you use the standard approach
properly.

> Solution appears to be: create a standalone "foo" package that has ONLY
> the shared __init__.py, and do NOT set "namespace_packages".

No, that is not the solution. That's a sure way to break the namespace in
the egg/wheel based installations.

> This seems to associate the __init__.py with the "foo" tool, so that
> only when you uninstall the "foo" tool does the __init__.py get
> uninstalled.

Why do you worry so much about uninstalling __init__.py? If it's not
there (and it isn't with properly declared namespaces) why does it
matter?

> It also has the shared __init__.py in lib/site-packages, so it seems
> that enables the other namespace packages that are source checkouts or
> added to path via other methods.

What other methods? The key here is to make a decision about how you want
to install your packages. Pick one and use it instead of fighting the
tooling.

> Hopefully this works long term, and maybe is useful to someone else out
> there....

I doubt it; it's incorrect advice.

        Zvezdan
URL: From patter001 at gmail.com Wed Dec 2 22:00:52 2015 From: patter001 at gmail.com (KP) Date: Wed, 2 Dec 2015 22:00:52 -0500 Subject: [Distutils] namespace_package In-Reply-To: References: <20151201063802.GB8007@platonas> Message-ID: >It looks like you are trying to find a workaround for the problem that perhaps is not a problem at all if you use the standard approach properly. I'm definitely _trying_ to use a standard approach...That is why I am here posting. Put simply this seems like a valid use case: >pip install foo.bar >pip install -e svn+http:// Even if both tools have the namespace_package foo, the "foo.more" will not properly import. How is this going against standard approaches? On Wed, Dec 2, 2015 at 9:54 PM, Zvezdan Petkovic wrote: > Hi KP, > > Maybe I didn?t follow the thread from the beginning, but your use case is > not clear to me. > > On Dec 1, 2015, at 1:31 PM, KP wrote: > > (sorry for the stupid previous early send) > > Just to recap: > > 1. if you don?t put namespace_packages in the setup.py, then it will > uninstall the shared __init__.py when you uninstall any of the packages > > > Right. You need to declare the namespace in setup.py. > > 2. If you put namespace_packages, then there is a pth file created for > the shared directory (site-packages/foo) and no foo/__init__.py is created > (even if it is in your package) > > > Correct. That is exactly the behavior that?s expected. There can be many > packages in the namespace (e.g., zope.* or zope.app.*) and they all share > the namespace, but none of them owns that __init__.py. That?s why it?s not > installed there and is replaced with a *.pth file. > > #2 - breaks things like : doing a source checkout that participates in > this namespace_package?If you do this then only the > lib/site-packages/foo/ are importable > > > Again, it?s not clear what are you referring to? > Are you doing a source checkout over your installed packages / virtual > environment? > I sure hope not. 
> You can use pip ?editable for adding your source checkouts to virtual > environment or installed packages. > > If you use build tools (such as zc.buildout) and install your packages, > they will end up over several egg / wheel directories ?parallel? to each > other, but the proper namespace declaration will ensure that they are all > importable. You cannot even do the source checkout over such an > installation. > > It looks like you are trying to find a workaround for the problem that > perhaps is not a problem at all if you use the standard approach properly. > > Solution appears to be: > > create a standalone ?foo" package that has ONLY the shared __init__.py, > and do NOT set "namespace_packages" . > > > No, that is not the solution. That?s a sure way to break the namespace in > the egg/wheel based installations. > > This seems to associate the __init__.py with the ?foo? tool, so that only > when you uninstall the "foo" tool does the __init__.py get uninstalled. > > > Why do you worry so much about uninstalling __init__.py. > If it?s not there (and it isn?t with properly declared namespaces) why > does it matter? > > It also has the shared __init__.py in lib/site-packages, so it seems that > enables the other namespace packages that are source checkouts or added to > path via other methods. > > > What other methods? > The key here is to make decision how do you want to install your packages? > Pick one and use it instead fighting the tooling. > > > Hopefully this works long term, and maybe is useful to someone else out > there?. > > > I doubt it, > It?s an incorrect advice. > > Zvezdan > > > Thanks, > > > > > On Tue, Dec 1, 2015 at 4:29 PM, KP wrote: > >> Just to recap: >> >> 1. if you don't put namespace_packages in the setup.py, then it will >> uninstall the shared __init__.py when you uninstall any of the packages >> 2. 
If you put namespace_packages, then there is a pth file created for >> the shared directory (site-packages/foo) and no foo/__init__.py is created >> (even if it is in your package) >> #2 - breaks things like : doing a source checkout that participates in >> this namespace_package...If you do this then only the >> lib/site-packages/foo/ are importable >> >> Solution appears to be: >> >> >> On Tue, Dec 1, 2015 at 7:17 AM, KP wrote: >> >>> yes, both of those statements are true. >>> >>> However, with the namespace_packages = ['foo'], the >>> lib\site-packages\foo\__init__.py does not get installed (even though it is >>> in the source tree). Instead there's just a dir with "foo/bar/__init__.py" >>> and "foo/blah/__init__.py". I will try to look in the "wheel" side of >>> things next I guess. Perhaps pip is doing something since it seems to >>> install even source distributables by first converting to a wheel. >>> >>> >>> On Tue, Dec 1, 2015 at 1:38 AM, Marius Gedminas >>> wrote: >>> >>>> On Mon, Nov 30, 2015 at 06:59:31PM -0500, KP wrote: >>>> > I'm not sure where the issue is, but when I specify a >>>> namespace_package in >>>> > the setup.py file, I can indeed have multiple packages with the same >>>> base >>>> > (foo.bar, foo.blah, etc...). The files all install in to the same >>>> > directory. It drops the foo/__init__.py that would be doing the >>>> > extend_path, and instead adds a ".pth" file that is a bit over my >>>> head. >>>> > >>>> > The problem is that it does not seem to traverse the entire sys.path >>>> to >>>> > find multiple foo packages. >>>> >>>> Does every foo.x package specify namespace_packages=['foo']? >>>> >>>> Do they all ship an identical foo/__init__.py with >>>> >>>> import pkg_resources >>>> pkg_resources.declare_namespace(__name__) >>>> >>>> ? >>>> >>>> AFAIU you need both things in every package, if you want to use >>>> namespace packages. 
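Marius's two requirements above can be demonstrated end to end with the stdlib `pkgutil` variant of the shared `__init__.py`; a minimal, self-contained sketch (the `foo.bar` / `foo.blah` layout here is hypothetical, built in a temp directory to stand in for two separate install locations):

```python
import os
import sys
import tempfile

# Build two separate site dirs, each containing a "foo" package with a
# different subpackage, mimicking foo.bar and foo.blah installed in two
# different places on sys.path.
root = tempfile.mkdtemp()
paths = []
for sub in ("bar", "blah"):
    base = os.path.join(root, sub + "_site")
    pkg = os.path.join(base, "foo", sub)
    os.makedirs(pkg)
    # Every copy ships the identical shared foo/__init__.py
    # (pkgutil variant, as discussed in the thread):
    with open(os.path.join(base, "foo", "__init__.py"), "w") as f:
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    open(os.path.join(pkg, "__init__.py"), "w").close()
    paths.append(base)

sys.path[:0] = paths

# Both subpackages import, even though they live in different dirs:
import foo.bar
import foo.blah
print(sorted(name.split(".")[-1] for name in
             (foo.bar.__name__, foo.blah.__name__)))  # -> ['bar', 'blah']
```

The `extend_path` call in the first `foo/__init__.py` that gets imported scans all of `sys.path` for other `foo` directories and appends them to `foo.__path__`, which is exactly the "multiple places in the sys.path" behavior KP reports working with `pkgutil` but not with the `.pth`-based setuptools install.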
>>>> >>>> > If I do not specify namespace_packages and instead just use the >>>> > pkgutil.extend_path, then this seems to allow the packages to be in >>>> > multiple places in the sys.path. >>>> > >>>> > Is there something additional for the namespace_package that i need to >>>> > specify in order for all of the sys.path to be checked? >>>> > >>>> > I'm using 18.5 setuptools....but I am not sure if this somehow ties >>>> in to >>>> > wheel/pip, since I'm using that for the actual install. >>>> >>>> Marius Gedminas >>>> -- >>>> Give a man a computer program and you give him a headache, but teach >>>> him to >>>> program computers and you give him the power to create headaches for >>>> others for >>>> the rest of his life... >>>> -- R. B. Forest >>>> >>>> _______________________________________________ >>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>> >>>> >>> >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zvezdanpetkovic at gmail.com Wed Dec 2 22:10:06 2015 From: zvezdanpetkovic at gmail.com (Zvezdan Petkovic) Date: Wed, 2 Dec 2015 19:10:06 -0800 Subject: [Distutils] namespace_package In-Reply-To: References: <20151201063802.GB8007@platonas> Message-ID: <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> Hi KP, > On Dec 2, 2015, at 7:00 PM, KP wrote: > > >It looks like you are trying to find a workaround for the problem that perhaps is not a problem at all if you use the standard approach properly. > > I'm definitely _trying_ to use a standard approach...That is why I am here posting. Put simply this seems like a valid use case: > > >pip install foo.bar > >pip install -e svn+http:// > > Even if both tools have the namespace_package foo, the "foo.more" will not properly import. 
> > How is this going against standard approaches? I don't know. Without seeing the source code for these packages (which Marius already asked for) everything is hypothetical. All I know is that I'm using namespaces successfully for over a decade now and never had the need to work around them. Show us the code. We may be able to help better if we can try and reproduce. Otherwise, it's hard. Thanks! Zvezdan > > > > On Wed, Dec 2, 2015 at 9:54 PM, Zvezdan Petkovic > wrote: > Hi KP, > > Maybe I didn't follow the thread from the beginning, but your use case is not clear to me. > >> On Dec 1, 2015, at 1:31 PM, KP > wrote: >> >> (sorry for the stupid previous early send) >> >> Just to recap: >> >> 1. if you don't put namespace_packages in the setup.py, then it will uninstall the shared __init__.py when you uninstall any of the packages > > Right. You need to declare the namespace in setup.py. > >> 2. If you put namespace_packages, then there is a pth file created for the shared directory (site-packages/foo) and no foo/__init__.py is created (even if it is in your package) > > Correct. That is exactly the behavior that's expected. There can be many packages in the namespace (e.g., zope.* or zope.app.*) and they all share the namespace, but none of them owns that __init__.py. That's why it's not installed there and is replaced with a *.pth file. > >> #2 - breaks things like : doing a source checkout that participates in this namespace_package...If you do this then only the lib/site-packages/foo/ are importable > > Again, it's not clear what you are referring to. > Are you doing a source checkout over your installed packages / virtual environment? > I sure hope not. > You can use pip --editable for adding your source checkouts to virtual environment or installed packages. > > If you use build tools (such as zc.buildout) and install your packages, they will end up over several egg / wheel directories "parallel" 
to each other, but the proper namespace declaration will ensure that they are all importable. You cannot even do the source checkout over such an installation. > > It looks like you are trying to find a workaround for the problem that perhaps is not a problem at all if you use the standard approach properly. > >> Solution appears to be: >> >> create a standalone "foo" package that has ONLY the shared __init__.py, and do NOT set "namespace_packages" . > > No, that is not the solution. That's a sure way to break the namespace in the egg/wheel based installations. > >> This seems to associate the __init__.py with the "foo" tool, so that only when you uninstall the "foo" tool does the __init__.py get uninstalled. > > Why do you worry so much about uninstalling __init__.py? > If it's not there (and it isn't with properly declared namespaces) why does it matter? > >> It also has the shared __init__.py in lib/site-packages, so it seems that enables the other namespace packages that are source checkouts or added to path via other methods. > > What other methods? > The key here is to decide how you want to install your packages. > Pick one and use it instead of fighting the tooling. > >> >> Hopefully this works long term, and maybe is useful to someone else out there... > > I doubt it, > It's incorrect advice. > > Zvezdan > >> >> Thanks, >> >> >> >> >> On Tue, Dec 1, 2015 at 4:29 PM, KP > wrote: >> Just to recap: >> >> 1. if you don't put namespace_packages in the setup.py, then it will uninstall the shared __init__.py when you uninstall any of the packages >> 2. 
If you put namespace_packages, then there is a pth file created for the shared directory (site-packages/foo) and no foo/__init__.py is created (even if it is in your package) >> #2 - breaks things like : doing a source checkout that participates in this namespace_package...If you do this then only the lib/site-packages/foo/ are importable >> >> Solution appears to be: >> >> >> On Tue, Dec 1, 2015 at 7:17 AM, KP > wrote: >> yes, both of those statements are true. >> >> However, with the namespace_packages = ['foo'], the lib\site-packages\foo\__init__.py does not get installed (even though it is in the source tree). Instead there's just a dir with "foo/bar/__init__.py" and "foo/blah/__init__.py". I will try to look in the "wheel" side of things next I guess. Perhaps pip is doing something since it seems to install even source distributables by first converting to a wheel. >> >> >> On Tue, Dec 1, 2015 at 1:38 AM, Marius Gedminas > wrote: >> On Mon, Nov 30, 2015 at 06:59:31PM -0500, KP wrote: >> > I'm not sure where the issue is, but when I specify a namespace_package in >> > the setup.py file, I can indeed have multiple packages with the same base >> > (foo.bar, foo.blah, etc...). The files all install in to the same >> > directory. It drops the foo/__init__.py that would be doing the >> > extend_path, and instead adds a ".pth" file that is a bit over my head. >> > >> > The problem is that it does not seem to traverse the entire sys.path to >> > find multiple foo packages. >> >> Does every foo.x package specify namespace_packages=['foo']? >> >> Do they all ship an identical foo/__init__.py with >> >> import pkg_resources >> pkg_resources.declare_namespace(__name__) >> >> ? >> >> AFAIU you need both things in every package, if you want to use >> namespace packages. >> >> > If I do not specify namespace_packages and instead just use the >> > pkgutil.extend_path, then this seems to allow the packages to be in >> > multiple places in the sys.path. 
>> > >> > Is there something additional for the namespace_package that i need to >> > specify in order for all of the sys.path to be checked? >> > >> > I'm using 18.5 setuptools....but I am not sure if this somehow ties in to >> > wheel/pip, since I'm using that for the actual install. >> >> Marius Gedminas >> -- >> Give a man a computer program and you give him a headache, but teach him to >> program computers and you give him the power to create headaches for others for >> the rest of his life... >> -- R. B. Forest >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lele at metapensiero.it Thu Dec 3 03:19:13 2015 From: lele at metapensiero.it (Lele Gaifax) Date: Thu, 03 Dec 2015 09:19:13 +0100 Subject: [Distutils] namespace_package References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> Message-ID: <87wpswosfy.fsf@metapensiero.it> Zvezdan Petkovic writes: > Hi KP, > >> On Dec 2, 2015, at 7:00 PM, KP wrote: >> >> >It looks like you are trying to find a workaround for the problem that perhaps is not a problem at all if you use the standard approach properly. >> >> I'm definitely _trying_ to use a standard approach...That is why I am here posting. Put simply this seems like a valid use case: >> >> >pip install foo.bar >> >pip install -e svn+http:// >> >> Even if both tools have the namespace_package foo, the "foo.more" will not properly import. >> >> How is this going against standard approaches? > > I don?t know. > Without seeing the source code for these packages (which Marius already asked for) everything is hypothetical. 
> All I know is that I'm using namespaces successfully for over a decade > now and never had the need to work around them. > > Show us the code. We may be able to help better if we can try and > reproduce. > Otherwise, it's hard. It seems that KP's case is either https://github.com/pypa/pip/issues/3160 or https://github.com/pypa/pip/issues/3, isn't it? Both come with sample code that demonstrates the problem. ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From patter001 at gmail.com Thu Dec 3 06:57:16 2015 From: patter001 at gmail.com (KP) Date: Thu, 3 Dec 2015 06:57:16 -0500 Subject: [Distutils] namespace_package In-Reply-To: <87wpswosfy.fsf@metapensiero.it> References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> <87wpswosfy.fsf@metapensiero.it> Message-ID: Yes, the https://github.com/pypa/pip/issues/3 definitely sounds like my issue. Seems there is some concern over the nspkg.pth there as well. It seems that the nspkg.pth is a fine idea to replace the install of the __init__.py, but that it just doesn't work to fully extend the locations in which a namespace can reside. Either way, there are a few other workarounds posted there as well, I will check them out to see if any of them are more palatable than the one I posted here. Thanks Lele for sending that link! -Kevin On Thu, Dec 3, 2015 at 3:19 AM, Lele Gaifax wrote: > Zvezdan Petkovic writes: > > > Hi KP, > > > >> On Dec 2, 2015, at 7:00 PM, KP wrote: > >> > >> >It looks like you are trying to find a workaround for the problem that > perhaps is not a problem at all if you use the standard approach properly. > >> > >> I'm definitely _trying_ to use a standard approach...That is why I am > here posting. 
Put simply this seems like a valid use case: > >> > >> >pip install foo.bar > >> >pip install -e svn+http:// > >> > >> Even if both tools have the namespace_package foo, the "foo.more" will > not properly import. > >> > >> How is this going against standard approaches? > > > > I don't know. > > Without seeing the source code for these packages (which Marius already > asked for) everything is hypothetical. > > All I know is that I'm using namespaces successfully for over a decade > now and never had the need to work around them. > > > > Show us the code. We may be able to help better if we can try and > reproduce. > > Otherwise, it's hard. > > It seems that KP's case is either https://github.com/pypa/pip/issues/3160 or > https://github.com/pypa/pip/issues/3, isn't it? Both come with sample code > that demonstrates the problem. > > ciao, lele. > -- > nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri > real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. > lele at metapensiero.it | -- Fortunato Depero, 1929. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.m.bray at gmail.com Thu Dec 3 14:20:02 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Thu, 3 Dec 2015 14:20:02 -0500 Subject: [Distutils] distutils.command.build_clib: How to add additional compiler flags for cl.exe? 
In-Reply-To: References: Message-ID: On Sun, Nov 29, 2015 at 10:44 AM, Kim Walisch wrote: > Hi, > > For distutils.command.build_clib the commonly used code below does not > work for adding additional compiler flags (it works using > distutils.command.build_ext): > > extra_compile_args = '-fopenmp' > > On Unix-like system I found a workaround which allows to specify > additional compiler flags for distutils.command.build_clib: > > cflags = distutils.sysconfig.get_config_var('CFLAGS') > distutils.sysconfig._config_vars['CFLAGS'] = cflags + " -fopenmp" > > Unfortunately this does not work with Microsoft's C/C++ compiler > cl.exe. > > Does anybody know how I can add additional compiler flags for cl.exe > and distutils.command.build_clib? > > Thanks and best regards! Really what I think you want is a way to determine what compiler will be used, and to add compiler arguments accordingly. Unfortunately distutils does not provide one easy way to determine what compiler it will use--this is among the myriad ways it's not great for use as a build system. And yet there are several workarounds. For starters you can try: from distutils import ccompiler compiler = ccompiler.get_default_compiler() if compiler == 'msvc': # Add option for MS compiler elif compiler == 'unix': # Add option for gcc elif # etc.... This does not actually tell you what compiler executable will be used--for that you have to dig around more. This is telling you the name of the compiler class in distutils that will be used. But if you want to pass arguments for cl.exe, it will be 'msvc'. This also does not guarantee which compiler will actually be used--a user can override that via a command-line argument or setup.cfg. To get around that I usually use a dummy 'Distribution' object to parse any user arguments. 
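The compiler-type check above can also be applied directly inside a `build_clib` subclass, which addresses the original question (stock distutils `build_clib` never passes `extra_postargs`, so the compile loop has to be re-implemented). This is an untested sketch, assuming the OpenMP flags from the question; `/openmp` is assumed to be the cl.exe spelling. Erik's `Distribution`-based refinement for user overrides continues below.

```python
from distutils.command.build_clib import build_clib

class build_clib_openmp(build_clib):
    """Sketch: inject per-compiler-type flags into build_clib.

    Untested against cl.exe; flag spellings are assumptions based on
    the OpenMP question above, not verified behavior.
    """

    def build_libraries(self, libraries):
        # self.compiler has already been set up by build_clib.run()
        if self.compiler.compiler_type == 'msvc':
            extra = ['/openmp']       # assumed cl.exe spelling
        else:
            extra = ['-fopenmp']      # gcc/clang spelling
        for (lib_name, build_info) in libraries:
            # Same steps as distutils' own loop, plus extra_postargs:
            objects = self.compiler.compile(
                list(build_info.get('sources')),
                output_dir=self.build_temp,
                macros=build_info.get('macros'),
                include_dirs=build_info.get('include_dirs'),
                extra_postargs=extra,
                debug=self.debug)
            self.compiler.create_static_lib(
                objects, lib_name,
                output_dir=self.build_clib,
                debug=self.debug)
```

Hooked in via `cmdclass={'build_clib': build_clib_openmp}` in the `setup()` call; the same `compiler_type` dispatch could of course select any flags, not just OpenMP ones.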
This isn't perfect either, but it's almost exactly what happens when the setup() function is called--it just stops before actually running any of the commands: from distutils.dist import Distribution from distutils import ccompiler def get_compiler(): dist = Distribution({'script_name': os.path.basename(sys.argv[0]), 'script_args': sys.argv[1:]}) dist.parse_config_files() dist.parse_command_line() compiler = None for cmd in ['build', 'build_ext', 'build_clib']: compiler = dist.command_options.get(cmd, {}).get('compiler', ('', None))[1] if compiler is not None: break if compiler is None: return ccompiler.get_default_compiler() else: return compiler I'd love to know if there is a better way in general to do this myself... Erik From erik.m.bray at gmail.com Thu Dec 3 15:06:06 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Thu, 3 Dec 2015 15:06:06 -0500 Subject: [Distutils] setup.py install using pip Message-ID: Hi all, I've been on vacation for a bit in general, and on vacation from this mailing list even longer. I'm not entirely caught up yet on the latest developments so apologies if something like this is entirely moot by now. But I have seen some discussions here and in other lists related to using pip for all installations, and phasing out the old distutils `./setup.py install` (eg. [1]). This is not a new discussion, and there are many related discussions, for example, about changing setuptools not to default to egg installs anymore (see [2]). I'm definitely all for this in general. Nowadays whenever I install a package from source I run `pip install .` But of course there are a lot of existing tools, not to mention folk wisdom, assuming `./setup.py install`. We also don't want to change the long-existing behavior in setuptools. I have a modest proposal for a small addition to setuptools that might be helpful in a transition away from using setuptools+distutils for installation. 
This would be to add a `--pip` flag to setuptools' install command (or possibly straight in distutils too, but might as well start with setuptools). Therefore, running $ ./setup.py install --pip would be equivalent to running $ pip install . By extension, running $ ./setup.py install --pip --arg1 --arg2=foo would be equivalent to $ pip install --install-option="--arg1" --install-option="--arg2=foo" . and so on. By making `--pip` opt-in, it does not automatically break backward compatibility for users expecting `./setup.py install` to use easy_install. However, individual users may opt into it globally by adding [install] pip = True to their .pydistutils.cfg. Similarly, package authors who are confident that none of their users are ever going to care about egg installs (e.g. me) can add the same to their project's setup.cfg. Does something like this have any merit? I hacked together a prototype which follows the sig line. It's just a proof of concept, but seems to work in the most basic cases. I'd like to add it to my own projects too, but would appreciate some peer review. 
Thanks, Erik [1] https://mail.scipy.org/pipermail/numpy-discussion/2015-November/074142.html [2] https://bitbucket.org/pypa/setuptools/issues/371/setuptools-and-state-of-pep-376 $ cat pipinstall.py from distutils.errors import DistutilsArgError from setuptools.command.install import install as SetuptoolsInstall class PipInstall(SetuptoolsInstall): command_name = 'install' user_options = SetuptoolsInstall.user_options + [ ('pip', None, 'install using pip; ignored when also using ' '--single-version-externally-managed') ] boolean_options = SetuptoolsInstall.boolean_options + ['pip'] def initialize_options(self): SetuptoolsInstall.initialize_options(self) self.pip = False def finalize_options(self): SetuptoolsInstall.finalize_options(self) if self.single_version_externally_managed: self.pip = False if self.pip: try: import pip except ImportError: raise DistutilsArgError( 'pip must be installed in order to install with the ' '--pip option') def run(self): if self.pip: import pip opts = (['install', '--ignore-installed'] + ['--install-option="{0}"'.format(opt) for opt in self._get_command_line_opts()]) pip.main(opts + ['.']) else: SetuptoolsInstall.run(self) def _get_command_line_opts(self): # Generate a mapping from the attribute name associated with a # command-line option to the name of the command-line option (including # an = if the option takes an argument) attr_to_opt = dict((opt[0].rstrip('=').replace('-', '_'), opt[0]) for opt in self.user_options) opt_dict = self.distribution.get_option_dict(self.get_command_name()) opts = [] for attr, value in opt_dict.items(): if value[0] != 'command line' or attr == 'pip': # Only look at options passed in on the command line (ignoring # the pip option itself) continue opt = attr_to_opt[attr] if opt in self.boolean_options: opts.append('--' + opt) else: opts.append('--{0}{1}'.format(opt, value[1])) return opts @staticmethod def _called_from_setup(run_frame): # A hack to work around a setuptools hack return 
SetuptoolsInstall._called_from_setup(run_frame.f_back) From opensource at ronnypfannschmidt.de Thu Dec 3 15:37:42 2015 From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt) Date: Thu, 03 Dec 2015 21:37:42 +0100 Subject: [Distutils] setup.py install using pip In-Reply-To: References: Message-ID: Let's avoid getting setuptools even more complex in that way. Putting pip-ish-ness on top of easy install is a maintenance horror, and I don't think setuptools has enough consistent developer resources to handle something like that. Instead, let's just give options to destroy the normal install command from setup.py so projects can phase out easy install forcefully, making downstream require patches or pip usage. On 3 December 2015 at 21:06 CET, Erik Bray wrote: >Hi all, > >I've been on vacation for a bit in general, and on vacation from this >mailing list even longer. I'm not entirely caught up yet on the >latest developments so apologies if something like this is entirely >moot by now. > >But I have seen some discussions here and in other lists related to >using pip for all installations, and phasing out the old distutils >`./setup.py install` (eg. [1]). This is not a new discussion, and >there are many related discussions, for example, about changing >setuptools not to default to egg installs anymore (see [2]). > >I'm definitely all for this in general. Nowadays whenever I install a >package from source I run `pip install .` But of course there are a >lot of existing tools, not to mention folk wisdom, assuming >`./setup.py install`. We also don't want to change the long-existing >behavior in setuptools. > >I have a modest proposal for a small addition to setuptools that might >be helpful in a transition away from using setuptools+distutils for >installation. This would be to add a `--pip` flag to setuptools' >install command (or possibly straight in distutils too, but might as >well start with setuptools). 
> >Therefore, running > >$ ./setup.py install --pip > >would be equivalent to running > >$ pip install . > >By extension, running > >$ ./setup.py install --pip --arg1 --arg2=foo > >would be equivalent to > >$ pip install --install-option="--arg1" --install-option="--arg2=foo" . > >and so on. > >By making `--pip` opt-in, it does not automatically break backward >compatibility for users expecting `./setup.py install` to use >easy_install. However, individual users may opt into it globally by >adding > >[install] >pip = True > >to their .pydistutils.cfg. Similarly, package authors who are >confident that none of their users are ever going to care about egg >installs (e.g. me) can add the same to their project's setup.cfg. > >Does something like this have any merit? I hacked together a >prototype which follows the sig line. It's just a proof of concept, >but seems to work in the most basic cases. I'd like to add it to my >own projects too, but would appreciate some peer review. > >Thanks, >Erik > > > >[1] >https://mail.scipy.org/pipermail/numpy-discussion/2015-November/074142.html >[2] >https://bitbucket.org/pypa/setuptools/issues/371/setuptools-and-state-of-pep-376 > > >$ cat pipinstall.py >from distutils.errors import DistutilsArgError >from setuptools.command.install import install as SetuptoolsInstall > > >class PipInstall(SetuptoolsInstall): > command_name = 'install' > user_options = SetuptoolsInstall.user_options + [ > ('pip', None, 'install using pip; ignored when also using ' > '--single-version-externally-managed') > ] > boolean_options = SetuptoolsInstall.boolean_options + ['pip'] > > def initialize_options(self): > SetuptoolsInstall.initialize_options(self) > self.pip = False > > def finalize_options(self): > SetuptoolsInstall.finalize_options(self) > > if self.single_version_externally_managed: > self.pip = False > > if self.pip: > try: > import pip > except ImportError: > raise DistutilsArgError( > 'pip must be installed in order to install with the ' > 
'--pip option') > > def run(self): > if self.pip: > import pip > opts = (['install', '--ignore-installed'] + > ['--install-option="{0}"'.format(opt) > for opt in self._get_command_line_opts()]) > pip.main(opts + ['.']) > else: > SetuptoolsInstall.run(self) > > def _get_command_line_opts(self): > # Generate a mapping from the attribute name associated with a ># command-line option to the name of the command-line option (including > # an = if the option takes an argument) > attr_to_opt = dict((opt[0].rstrip('=').replace('-', '_'), opt[0]) > for opt in self.user_options) > > opt_dict = self.distribution.get_option_dict(self.get_command_name()) > opts = [] > > for attr, value in opt_dict.items(): > if value[0] != 'command line' or attr == 'pip': > # Only look at options passed in on the command line (ignoring > # the pip option itself) > continue > > opt = attr_to_opt[attr] > > if opt in self.boolean_options: > opts.append('--' + opt) > else: > opts.append('--{0}{1}'.format(opt, value[1])) > > return opts > > @staticmethod > def _called_from_setup(run_frame): > # A hack to work around a setuptools hack > return SetuptoolsInstall._called_from_setup(run_frame.f_back) >_______________________________________________ >Distutils-SIG maillist - Distutils-SIG at python.org >https://mail.python.org/mailman/listinfo/distutils-sig MFG Ronny From m.van.rees at zestsoftware.nl Thu Dec 3 16:22:23 2015 From: m.van.rees at zestsoftware.nl (Maurits van Rees) Date: Thu, 3 Dec 2015 22:22:23 +0100 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Could you add Plone 5.1 too? It may still take half a year or more before we release this, but we are starting to think about what we want to put in there. So this one: Framework :: Plone :: 5.1 Thanks, Maurits Op 02/12/15 om 01:54 schreef Richard Jones: > At the moment it's a manual poke, but I have done this thing right now. 
> > On 2 December 2015 at 10:46, James Bennett > wrote: > > Reviving this old thread because today is Django 1.9's release date > and I'm unsure of the process for keeping up with new-released > versions in trove classifiers. Do we need to manually poke someone > each time (as with today, when "Framework :: Django :: 1.9" becomes > a thing), or is there a way to automate it? > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Maurits van Rees: http://maurits.vanrees.org/ Zest Software: http://zestsoftware.nl From erik.m.bray at gmail.com Thu Dec 3 17:01:21 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Thu, 3 Dec 2015 17:01:21 -0500 Subject: [Distutils] setup.py install using pip In-Reply-To: References: Message-ID: On Thu, Dec 3, 2015 at 3:37 PM, Ronny Pfannschmidt wrote: > Lets avoid getting setuptools even more complex in that way It's not deeply complex--it's just bypassing the normal behavior of using easy_install. In my example code I subclassed the existing command, but an easier approach would be to build it into setuptools. The way the 'install' command works in setuptools is to hand installation off to the easy_install command unless --single-version-externally-managed is specified (as well as --record). Otherwise it hands installation off to the default base 'install' command of distutils. This proposal just adds a third option for what actual installer the 'install' command should use. But it's opt-in instead of forced on any package by default (as easy_install is forced on us by setuptools). 
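The three-way dispatch Erik describes, with the proposed `--pip` branch added, can be modeled as a toy function. The function and its parameter names are hypothetical illustrations, not setuptools code; only the branch order mirrors his description:

```python
def choose_installer(single_version_externally_managed=False,
                     called_from_setup=True, pip_flag=False):
    """Toy model of the setuptools 'install' command dispatch.

    --single-version-externally-managed wins (plain distutils install),
    then the proposed --pip option, otherwise setuptools hands off to
    easy_install when invoked from setup(), falling back to the base
    distutils install when invoked some other way (e.g. by pip).
    """
    if single_version_externally_managed:
        return 'distutils'
    if pip_flag:
        return 'pip'
    if called_from_setup:
        return 'easy_install'
    return 'distutils'
```

This also makes visible why Erik's prototype below forces `self.pip = False` when `--single-version-externally-managed` is set: that flag must keep winning, or pip's own invocation of `setup.py install` would recurse into pip.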
> Putting pip-ish-ness on top of easy install is a maintenance horror and I don't think setuptools has enough consistent developer resources to handle something like that. It doesn't put anything "on top of" easy_install; it ignores easy_install. > Instead let's just give options to destroy the normal install command from setup.py so projects can phase out easy install forcefully making downstream require patches or pip usage. That's the goal, yes. This is just offering a transitional tool. Erik > > On 3 December 2015 at 21:06 CET, Erik Bray wrote: >>Hi all, >> >>I've been on vacation for a bit in general, and on vacation from this >>mailing list even longer. I'm not entirely caught up yet on the >>latest developments so apologies if something like this is entirely >>moot by now. >> >>But I have seen some discussions here and in other lists related to >>using pip for all installations, and phasing out the old distutils >>`./setup.py install` (eg. [1]). This is not a new discussion, and >>there are many related discussions, for example, about changing >>setuptools not to default to egg installs anymore (see [2]). >> >>I'm definitely all for this in general. Nowadays whenever I install a >>package from source I run `pip install .` But of course there are a >>lot of existing tools, not to mention folk wisdom, assuming >>`./setup.py install`. We also don't want to change the long-existing >>behavior in setuptools. >> >>I have a modest proposal for a small addition to setuptools that might >>be helpful in a transition away from using setuptools+distutils for >>installation. This would be to add a `--pip` flag to setuptools' >>install command (or possibly straight in distutils too, but might as >>well start with setuptools). >> >>Therefore, running >> >>$ ./setup.py install --pip >> >>would be equivalent to running >> >>$ pip install . 
>> >>By extension, running >> >>$ ./setup.py install --pip --arg1 --arg2=foo >> >>would be equivalent to >> >>$ pip install --install-option="--arg1" --install-option="--arg2=foo" . >> >>and so on. >> >>By making `--pip` opt-in, it does not automatically break backward >>compatibility for users expecting `./setup.py install` to use >>easy_install. However, individual users may opt into it globally by >>adding >> >>[install] >>pip = True >> >>to their .pydistutils.cfg. Similarly, package authors who are >>confident that none of their users are ever going to care about egg >>installs (e.g. me) can add the same to their project's setup.cfg. >> >>Does something like this have any merit? I hacked together a >>prototype which follows the sig line. It's just a proof of concept, >>but seems to work in the most basic cases. I'd like to add it to my >>own projects too, but would appreciate some peer review. >> >>Thanks, >>Erik >> >> >> >>[1] >>https://mail.scipy.org/pipermail/numpy-discussion/2015-November/074142.html >>[2] >>https://bitbucket.org/pypa/setuptools/issues/371/setuptools-and-state-of-pep-376 >> >> >>$ cat pipinstall.py >>from distutils.errors import DistutilsArgError >>from setuptools.command.install import install as SetuptoolsInstall >> >> >>class PipInstall(SetuptoolsInstall): >> command_name = 'install' >> user_options = SetuptoolsInstall.user_options + [ >> ('pip', None, 'install using pip; ignored when also using ' >> '--single-version-externally-managed') >> ] >> boolean_options = SetuptoolsInstall.boolean_options + ['pip'] >> >> def initialize_options(self): >> SetuptoolsInstall.initialize_options(self) >> self.pip = False >> >> def finalize_options(self): >> SetuptoolsInstall.finalize_options(self) >> >> if self.single_version_externally_managed: >> self.pip = False >> >> if self.pip: >> try: >> import pip >> except ImportError: >> raise DistutilsArgError( >> 'pip must be installed in order to install with the ' >> '--pip option') >> >> def run(self): 
>> if self.pip: >> import pip >> opts = (['install', '--ignore-installed'] + >> ['--install-option="{0}"'.format(opt) >> for opt in self._get_command_line_opts()]) >> pip.main(opts + ['.']) >> else: >> SetuptoolsInstall.run(self) >> >> def _get_command_line_opts(self): >> # Generate a mapping from the attribute name associated with a >># command-line option to the name of the command-line option (including >> # an = if the option takes an argument) >> attr_to_opt = dict((opt[0].rstrip('=').replace('-', '_'), opt[0]) >> for opt in self.user_options) >> >> opt_dict = self.distribution.get_option_dict(self.get_command_name()) >> opts = [] >> >> for attr, value in opt_dict.items(): >> if value[0] != 'command line' or attr == 'pip': >> # Only look at options passed in on the command line (ignoring >> # the pip option itself) >> continue >> >> opt = attr_to_opt[attr] >> >> if opt in self.boolean_options: >> opts.append('--' + opt) >> else: >> opts.append('--{0}{1}'.format(opt, value[1])) >> >> return opts >> >> @staticmethod >> def _called_from_setup(run_frame): >> # A hack to work around a setuptools hack >> return SetuptoolsInstall._called_from_setup(run_frame.f_back) >>_______________________________________________ >>Distutils-SIG maillist - Distutils-SIG at python.org >>https://mail.python.org/mailman/listinfo/distutils-sig > > MFG Ronny From richard at python.org Thu Dec 3 18:53:37 2015 From: richard at python.org (Richard Jones) Date: Fri, 4 Dec 2015 10:53:37 +1100 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: I prefer not to add classifiers unless they're actually going to be used. Half a year could turn into a year :-) On 4 December 2015 at 08:22, Maurits van Rees wrote: > Could you add Plone 5.1 too? It may still take half a year or more before > we release this, but we are starting to think about what we want to put in > there. 
> So this one: > > Framework :: Plone :: 5.1 > > Thanks, > > Maurits > > On 02/12/15 at 01:54, Richard Jones wrote: > >> At the moment it's a manual poke, but I have done this thing right now. >> >> On 2 December 2015 at 10:46, James Bennett > > wrote: >> >> Reviving this old thread because today is Django 1.9's release date >> and I'm unsure of the process for keeping up with new-released >> versions in trove classifiers. Do we need to manually poke someone >> each time (as with today, when "Framework :: Django :: 1.9" becomes >> a thing), or is there a way to automate it? >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > > -- > Maurits van Rees: http://maurits.vanrees.org/ > Zest Software: http://zestsoftware.nl > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.van.rees at zestsoftware.nl Thu Dec 3 18:59:17 2015 From: m.van.rees at zestsoftware.nl (Maurits van Rees) Date: Fri, 4 Dec 2015 00:59:17 +0100 Subject: [Distutils] Versioned trove classifiers for Django In-Reply-To: References: Message-ID: Fair enough. :-) See you in six or more months. ;-) Maurits On 04/12/15 at 00:53, Richard Jones wrote: > I prefer not to add classifiers unless they're actually going to be > used. Half a year could turn into a year :-) > > On 4 December 2015 at 08:22, Maurits van Rees > > wrote: > > Could you add Plone 5.1 too? It may still take half a year or more > before we release this, but we are starting to think about what we > want to put in there.
> So this one: > > Framework :: Plone :: 5.1 -- Maurits van Rees: http://maurits.vanrees.org/ Zest Software: http://zestsoftware.nl From zvezdanpetkovic at gmail.com Fri Dec 4 00:31:23 2015 From: zvezdanpetkovic at gmail.com (Zvezdan Petkovic) Date: Thu, 3 Dec 2015 21:31:23 -0800 Subject: [Distutils] namespace_package In-Reply-To: References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> <87wpswosfy.fsf@metapensiero.it> Message-ID: Hi Kevin, Sorry for not being able to respond to this earlier. I have time only in the evening. > On Dec 3, 2015, at 3:57 AM, KP wrote: > > Yes, the https://github.com/pypa/pip/issues/3 definitely sounds like my issue. Seems there is some concern over the nspkg.pth there as well. It seems that the nspkg.pth is a fine idea to replace the install of the __init__.py, but that it just doesn't work to fully extend the locations in which a namespace can reside. I now understand what the issue is and can perhaps help with some advice below. > Either way, there are a few other workarounds posted there as well, I will check them out to see if any of them are more palatable than the one I posted here. You need to make foo.bar-1.0.0-py2.7-nspkg.pth look like this: import sys, types, os, pkgutil; p = os.path.join(sys._getframe(1).f_locals['sitedir'], *('foo',)); ie = os.path.exists(os.path.join(p,'__init__.py')); m = not ie and sys.modules.setdefault('foo',types.ModuleType('foo')); mp = (m or []) and m.__dict__.setdefault('__path__',[]); (p not in mp) and mp.append(p); mp[:] = m and pkgutil.extend_path(mp, 'foo') or mp Notice the difference from the default in the use of pkgutil.extend_path(), which helps with editable packages. How to make this happen automatically is explained below. > Thanks Lele for sending that link! > > -Kevin I also want to thank Lele for pointing out the possible cause of your issues. Now, I'd like to separate two things here: 1. Defining namespaces correctly -
that should be done as we talked before in the standard way. 2. Issue with the tooling - in this case pip is the tool that causes issues. Let's talk about it more. These are the reasons I didn't recognize Kevin's issue. :-) - The namespaces work great when defined correctly (#1 above) with zc.buildout and other tools. There are good open-source build systems out there that take care of namespaces properly. - When one makes a good custom build system, it can fix the pip deficiencies too. For example, the build system specifically used in a company, team, ... So, my advice is to: 1. Make a custom distutils Distribution extension, for example, MyDistribution (or some better name) (a) make a custom install_egg_info class that overrides either: - the two templates for pth files (for newer versions of setuptools) or - the install_namespaces method (for older versions of setuptools) (b) override the get_command_class method in the MyDistribution class with your own install_egg_info 2. Build this package and install it. 3. Use distclass=MyDistribution in your foo package setup.py (import it from your package). Using this approach everything will work for every one of your packages automatically. I attached your original code from foo_test.zip and added to it a minimal mydist package + a sample __init__.py file that I think you should use in your namespace packages (if you want). Kevin, you can try this by doing the following: - create a virtual environment - source bin/activate - go to the mydist package and run 'python setup.py sdist' - pip install /path/to/mydist-1.0.0.tar.gz - go to your foo_bar package and run 'python setup.py sdist' - pip install /path/to/foo.bar-1.0.0.tar.gz - pip install -e /path/to/foo_more - start the python interpreter in your virtual environment - confirm that it works >>> import foo.bar foo.bar imported >>> import foo.more foo.more imported I know that this is not a fix for pip, but it is a generic solution that I hope helps you.
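For readers who find the amended nspkg.pth one-liner quoted above hard to parse, here is the same logic unrolled into ordinary statements. This is an illustrative sketch only: a real .pth file must keep everything on a single line starting with `import`, and it picks up `sitedir` from the calling frame inside site.py rather than taking it as a parameter; the function name and the namespace name used here are made up.

```python
import os
import sys
import types
import pkgutil

def extend_namespace(sitedir, name):
    """Roughly what the amended nspkg.pth line does for one site dir."""
    p = os.path.join(sitedir, name)
    has_real_init = os.path.exists(os.path.join(p, '__init__.py'))
    if not has_real_init:
        # No regular package on disk: synthesize a module object for the
        # namespace (or reuse one a previous .pth file already created).
        m = sys.modules.setdefault(name, types.ModuleType(name))
        mp = m.__dict__.setdefault('__path__', [])
        if p not in mp:
            mp.append(p)
        # The part added relative to the stock setuptools template: also
        # pull in portions of the namespace found elsewhere on sys.path,
        # the way pkgutil-style namespace packages do.
        mp[:] = pkgutil.extend_path(mp, name)
    return sys.modules.get(name)
```

The stock template stops after appending `p`; the `extend_path()` call at the end is what lets a portion that lives outside this site directory (such as a `pip install -e` checkout) still be found.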
Perhaps I should make a slightly better example (better names, better __init__.py, ...) and post it as a little project on GitHub that people can refer to. All the best, Zvezdan > > On Thu, Dec 3, 2015 at 3:19 AM, Lele Gaifax > wrote: > Zvezdan Petkovic > writes: > > > Hi KP, > > > >> On Dec 2, 2015, at 7:00 PM, KP > wrote: > >> > >> >It looks like you are trying to find a workaround for the problem that perhaps is not a problem at all if you use the standard approach properly. > >> > >> I'm definitely _trying_ to use a standard approach...That is why I am here posting. Put simply this seems like a valid use case: > >> > >> >pip install foo.bar > >> >pip install -e svn+http:// > >> > >> Even if both tools have the namespace_package foo, the "foo.more" will not properly import. > >> > >> How is this going against standard approaches? > > > > I don't know. > > Without seeing the source code for these packages (which Marius already asked for) everything is hypothetical. > > All I know is that I'm using namespaces successfully for over a decade now and never had the need to work around them. > > > > Show us the code. We may be able to help better if we can try and reproduce. > > Otherwise, it's hard. > > It seems that KP's case is either https://github.com/pypa/pip/issues/3160 or > https://github.com/pypa/pip/issues/3 , isn't it? Both come with sample code > that demonstrates the problem. > > ciao, lele. > -- > nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri > real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. > lele at metapensiero.it | -- Fortunato Depero, 1929.
> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: foo-solution.zip Type: application/zip Size: 4247 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensource at ronnypfannschmidt.de Fri Dec 4 02:55:14 2015 From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt) Date: Fri, 04 Dec 2015 08:55:14 +0100 Subject: [Distutils] setup.py install using pip In-Reply-To: References: Message-ID: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> But it still needs a change in command A direct switch to a real pip command comes with a free implementation and zero additional features to maintain in setuptools Am 3. Dezember 2015 23:01:21 MEZ, schrieb Erik Bray : >On Thu, Dec 3, 2015 at 3:37 PM, Ronny Pfannschmidt > wrote: >> Lets avoid getting setuptools even more complex in that way > >It's not deeply complex--it's just bypassing the normal behavior of >using easy_install. In my example code I subclassed >the existing command, but an easier approach would be to build it into >setuptools. > >The way the 'install' command works in setuptools is to hand >installation off to the easy_install command unless >--single-version-externally-managed is specified (as well as >--record). Otherwise it hands installation off to the default >base 'install' command of distutils. > >This proposal just adds a third option for what actual installer the >'install' command should >use. 
But it's opt-in instead of forced on any package by default (as >easy_install is forced on us by setuptools). > >> Putting pip-ish-ness on top of easy install is a maintenance horror >and I don't think setuptools does has enough consistent developer >resources to handle something like that > >It doesn't put anything "on top of" easy_install; it ignores >easy_install. > >> Instead let's just give options to destroy the normal install command >from setup.py so projects can phase out easy install forcefully making >downstream require patches or pip usage > >That's the goal, yes. This is just offering a transitional tool. > > >Erik > >> >> Am 3. Dezember 2015 21:06:06 MEZ, schrieb Erik Bray >: >>>Hi all, >>> >>>I've been on vacation for a bit in general, and on vacation from this >>>mailing list even longer. I'm not entirely caught up yet on the >>>latest developments so apologies if something like this is entirely >>>moot by now. >>> >>>But I have seen some discussions here and in other lists related to >>>using pip for all installations, and phasing out the old distutils >>>`./setup.py install` (eg. [1]). This is not a new discussion, and >>>there are many related discussions, for example, about changing >>>setuptools not to default to egg installs anymore (see [2]). >>> >>>I'm definitely all for this in general. Nowadays whenever I install >a >>>package from source I run `pip install .` But of course there are a >>>lot of existing tools, not to mention folk wisdom, assuming >>>`./setup.py install`. We also don't want to change the long-existing >>>behavior in setuptools. >>> >>>I have a modest proposal for a small addition to setuptools that >might >>>be helpful in a transition away from using setuptools+distutils for >>>installation. This would be to add a `--pip` flag to setuptools' >>>install command (or possibly straight in distutils too, but might as >>>well start with setuptools). 
>>> >>>Therefore, running >>> >>>$ ./setup.py install --pip >>> >>>would be equivalent to running >>> >>>$ pip install . >>> >>>By extension, running >>> >>>$ ./setup.py install --pip --arg1 --arg2=foo >>> >>>would be equivalent to >>> >>>$ pip install --install-option="--arg1" --install-option="--arg2=foo" >. >>> >>>and so on. >>> >>>By making `--pip` opt-in, it does not automatically break backward >>>compatibility for users expecting `./setup.py install` to use >>>easy_install. However, individual users may opt into it globally by >>>adding >>> >>>[install] >>>pip = True >>> >>>to their .pydistutils.cfg. Similarly, package authors who are >>>confident that none of their users are ever going to care about egg >>>installs (e.g. me) can add the same to their project's setup.cfg. >>> >>>Does something like this have any merit? I hacked together a >>>prototype which follows the sig line. It's just a proof of concept, >>>but seems to work in the most basic cases. I'd like to add it to my >>>own projects too, but would appreciate some peer review. 
>>> >>>Thanks, >>>Erik >>> >>> >>> >>>[1] >>>https://mail.scipy.org/pipermail/numpy-discussion/2015-November/074142.html >>>[2] >>>https://bitbucket.org/pypa/setuptools/issues/371/setuptools-and-state-of-pep-376 >>> >>> >>>$ cat pipinstall.py >>>from distutils.errors import DistutilsArgError >>>from setuptools.command.install import install as SetuptoolsInstall >>> >>> >>>class PipInstall(SetuptoolsInstall): >>> command_name = 'install' >>> user_options = SetuptoolsInstall.user_options + [ >>> ('pip', None, 'install using pip; ignored when also using ' >>> '--single-version-externally-managed') >>> ] >>> boolean_options = SetuptoolsInstall.boolean_options + ['pip'] >>> >>> def initialize_options(self): >>> SetuptoolsInstall.initialize_options(self) >>> self.pip = False >>> >>> def finalize_options(self): >>> SetuptoolsInstall.finalize_options(self) >>> >>> if self.single_version_externally_managed: >>> self.pip = False >>> >>> if self.pip: >>> try: >>> import pip >>> except ImportError: >>> raise DistutilsArgError( >>> 'pip must be installed in order to install with the >' >>> '--pip option') >>> >>> def run(self): >>> if self.pip: >>> import pip >>> opts = (['install', '--ignore-installed'] + >>> ['--install-option="{0}"'.format(opt) >>> for opt in self._get_command_line_opts()]) >>> pip.main(opts + ['.']) >>> else: >>> SetuptoolsInstall.run(self) >>> >>> def _get_command_line_opts(self): >>> # Generate a mapping from the attribute name associated with >a >>># command-line option to the name of the command-line option >(including >>> # an = if the option takes an argument) >>> attr_to_opt = dict((opt[0].rstrip('=').replace('-', '_'), >opt[0]) >>> for opt in self.user_options) >>> >>> opt_dict = >self.distribution.get_option_dict(self.get_command_name()) >>> opts = [] >>> >>> for attr, value in opt_dict.items(): >>> if value[0] != 'command line' or attr == 'pip': >>> # Only look at options passed in on the command line >(ignoring >>> # the pip option itself) >>> 
continue >>> >>> opt = attr_to_opt[attr] >>> >>> if opt in self.boolean_options: >>> opts.append('--' + opt) >>> else: >>> opts.append('--{0}{1}'.format(opt, value[1])) >>> >>> return opts >>> >>> @staticmethod >>> def _called_from_setup(run_frame): >>> # A hack to work around a setuptools hack >>> return SetuptoolsInstall._called_from_setup(run_frame.f_back) >>>_______________________________________________ >>>Distutils-SIG maillist - Distutils-SIG at python.org >>>https://mail.python.org/mailman/listinfo/distutils-sig >> >> MFG Ronny MFG Ronny From lele at metapensiero.it Fri Dec 4 03:50:37 2015 From: lele at metapensiero.it (Lele Gaifax) Date: Fri, 04 Dec 2015 09:50:37 +0100 Subject: [Distutils] namespace_package References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> <87wpswosfy.fsf@metapensiero.it> Message-ID: <87zixqsile.fsf@metapensiero.it> Thank you Zvezdan! I will try to make a branch of https://github.com/lelit/pipinstall-e_bugex injecting your solution to see if it works in that case too. ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From lele at metapensiero.it Fri Dec 4 10:22:16 2015 From: lele at metapensiero.it (Lele Gaifax) Date: Fri, 04 Dec 2015 16:22:16 +0100 Subject: [Distutils] namespace_package References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> <87wpswosfy.fsf@metapensiero.it> <87zixqsile.fsf@metapensiero.it> Message-ID: <87r3j2s0gn.fsf@metapensiero.it> Lele Gaifax writes: > I will try to make a branch of https://github.com/lelit/pipinstall-e_bugex > injecting your solution to see if it works in that case too. No, it doesn't appear to make a difference in that case. Thanks anyway! ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò
ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From zvezdanpetkovic at gmail.com Fri Dec 4 14:40:53 2015 From: zvezdanpetkovic at gmail.com (Zvezdan Petkovic) Date: Fri, 4 Dec 2015 11:40:53 -0800 Subject: [Distutils] namespace_package In-Reply-To: <87r3j2s0gn.fsf@metapensiero.it> References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> <87wpswosfy.fsf@metapensiero.it> <87zixqsile.fsf@metapensiero.it> <87r3j2s0gn.fsf@metapensiero.it> Message-ID: Hi Lele, I took a quick look at your GitHub demo project. For what it's worth, I do not have issues with editable packages and package dir, but I also: - use setuptools.find_packages('src') to get my packages automatically (this shouldn't be important) - use package_dir={'': 'src'} - have my namespace properly declared with namespace_packages=['...'] - have my package properly structured under src:

src
|
bugex -> __init__.py (a namespace __init__.py)
    |
    foo -> __init__.py (empty __init__.py for a package)
        + other modules

- use include_package_data=True See for example the setup.py in https://github.com/zopefoundation/zope.minmax Did you try that? I'll take a better look in the evening when I have more time. Zvezdan > On Dec 4, 2015, at 7:22 AM, Lele Gaifax wrote: > > Lele Gaifax writes: > >> I will try to make a branch of https://github.com/lelit/pipinstall-e_bugex
> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From lele at metapensiero.it Fri Dec 4 15:19:00 2015 From: lele at metapensiero.it (Lele Gaifax) Date: Fri, 04 Dec 2015 21:19:00 +0100 Subject: [Distutils] namespace_package References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> <87wpswosfy.fsf@metapensiero.it> <87zixqsile.fsf@metapensiero.it> <87r3j2s0gn.fsf@metapensiero.it> Message-ID: <87610ermq3.fsf@metapensiero.it> Zvezdan Petkovic writes: > Did you try that? Yes, indeed that way it works, and all my packages do that. But, as explained by the https://github.com/pypa/pip/issues/3160 issue, I would like to use a shallower tree for some of them. Thanks for taking a look! ciao, lele. -- nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From zvezdanpetkovic at gmail.com Sat Dec 5 01:02:27 2015 From: zvezdanpetkovic at gmail.com (Zvezdan Petkovic) Date: Fri, 4 Dec 2015 22:02:27 -0800 Subject: [Distutils] namespace_package In-Reply-To: <87610ermq3.fsf@metapensiero.it> References: <20151201063802.GB8007@platonas> <7DCE46B4-F363-46EA-9489-4C60D0ED67AF@gmail.com> <87wpswosfy.fsf@metapensiero.it> <87zixqsile.fsf@metapensiero.it> <87r3j2s0gn.fsf@metapensiero.it> <87610ermq3.fsf@metapensiero.it> Message-ID: <516C22F4-281A-44FF-B19C-32132F56CEBC@gmail.com> Hi Lele, I experimented with your GitHub demo project. First, let's state that the custom distclass I sent previously on this thread definitely helps Kevin (KP, the original poster) with the editable packages in a namespace.
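As background for the discussion that follows, here is a minimal, self-contained demonstration of what a working namespace split looks like: one package name whose portions live in several sys.path directories. All names here are hypothetical, and the namespace `__init__.py` is the pkgutil-style one quoted earlier in the thread.

```python
import os
import sys
import tempfile
import textwrap

# Build two independent directories, each holding one portion of a
# hypothetical namespace package "nsdemo".
portions = {}
for sub in ('bar', 'more'):
    root = tempfile.mkdtemp()
    pkg = os.path.join(root, 'nsdemo', sub)
    os.makedirs(pkg)
    # Each portion ships the same pkgutil-style namespace __init__.py ...
    with open(os.path.join(root, 'nsdemo', '__init__.py'), 'w') as f:
        f.write(textwrap.dedent('''\
            from pkgutil import extend_path
            __path__ = extend_path(__path__, __name__)
            '''))
    # ... plus its own subpackage.
    with open(os.path.join(pkg, '__init__.py'), 'w') as f:
        f.write('WHO = %r\n' % sub)
    portions[sub] = root

# Put both roots on sys.path, as two separate installs would.
sys.path[:0] = [portions['bar'], portions['more']]

import nsdemo.bar
import nsdemo.more   # found in a *different* sys.path entry

print(nsdemo.bar.WHO, nsdemo.more.WHO)
```

Without the `extend_path()` call in the namespace `__init__.py`, the second import would fail, because only the first `nsdemo` directory found on sys.path would be searched. That is exactly the gap the amended nspkg.pth file tries to close for installs that ship no `__init__.py` at all.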
Unfortunately, the fundamental problem and the reason why it doesn't help you is as follows: - The solution relies on the presence of a *.pth file for the namespace that extends the path to include the editable package which has only the *.egg-link file - When you install a package, it gets the correct file structure in site-packages as bugex/foo/__init__.py and you have a *.pth file so everything works fine (no need for a custom distclass) - When Python imports such a package it follows the file structure in site-packages and succeeds. - When you install it as an editable package, there's no *.pth file, only *.egg-link that leads to your source - However, the source code does not have a directory for bugex (the namespace) and it fails to import In essence, the Python importer expects a directory for each part separated by a dot. Thus, if there's import bugex.foo it expects a directory for bugex which contains a foo directory. I'm afraid your editable package must follow the file structure that gets installed into site-packages or you'll have to write a custom importer for such packages. In conclusion: 1. The solution I sent helps with regularly structured packages installed editable as a part of a namespace. (the pip issue #3) 2. I don't see a way to use the flattened file structure as editable. (the issue #3160 that you opened) Sorry for not being of much help with your issue. Maybe someone else might help. Honestly, I do not have a compelling motive for not keeping the packages and namespaces hierarchically structured even with Python 3. We usually do not nest our namespaces deeper than 2 anyway (e.g., zope.app.session). In Java development it's quite common to see a file structure like this: src/java/main/com/somecompany/x/y/z/SomeClass.java src/java/test/com/somecompany/x/y/z/TestSomeClass.java src/scala/main/... You get the idea. And all the directories until z would have no code in them. At least we don't need to do that much nesting.
:-) All the best, Zvezdan > On Dec 4, 2015, at 12:19 PM, Lele Gaifax wrote: > > Zvezdan Petkovic writes: > >> Did you try that? > > Yes, indeed that way it works, and all my packages do that. But, as explained > by the https://github.com/pypa/pip/issues/3160 issue, I would like to use a > shallower tree for some of them. > > Thanks for taking a look! > > ciao, lele. > -- > nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri > real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia. > lele at metapensiero.it | -- Fortunato Depero, 1929. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Sun Dec 6 23:36:42 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 Dec 2015 14:36:42 +1000 Subject: [Distutils] setup.py install using pip In-Reply-To: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> Message-ID: On 4 December 2015 at 17:55, Ronny Pfannschmidt wrote: > But it still needs a change in command > A direct switch to a real pip command comes with a free implementation and zero additional features to maintain in setuptools The key point here is that opting in to this new behaviour would just mean setting a new configuration flag in pydistutils.cfg on the system doing the build/install, or adding a new option to the list of options passed to the install command. That's a significantly lower barrier to entry than finding all of the occurrences of "./setup.py install" in existing packaging and deployment scripts and converting them over to use "pip install" instead, especially since all the *other* command line flags would continue to be the setuptools flags rather than the pip ones. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From opensource at ronnypfannschmidt.de Mon Dec 7 02:07:53 2015 From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt) Date: Mon, 07 Dec 2015 08:07:53 +0100 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> Message-ID: <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> That's a straw man, this has enough inconsistency potential to break many edge cases in ugly ways, So global setup is out. Projects themselves can really just switch to pip commands, same goes for packagers and other tool makers Explicit is better than implicit, and in this case it also won't cost additional maintenance on setuptools. Please keep in mind, that setuptools is completely on volunteer time, and the time given to it is scarce. Am 7. Dezember 2015 05:36:42 MEZ, schrieb Nick Coghlan : >On 4 December 2015 at 17:55, Ronny Pfannschmidt > wrote: >> But it still needs a change in command >> A direct switch to a real pip command comes with a free >implementation and zero additional features to maintain in setuptools > >The key point here is that opting in to this new behaviour would just >mean setting a new configuration flag in pydistutils.cfg on the system >doing the build/install, or adding a new option to the list of options >passed to the install command. That's a significantly lower barrier to >entry than finding all of the occurrences of "./setup.py install" in >existing packaging and deployment scripts and converting them over to >use "pip install" instead, especially since all the *other* command >line flags would continue to be the setuptools flags rather than the >pip ones. > >Cheers, >Nick. > >-- >Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -- Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail gesendet. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Mon Dec 7 04:34:02 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 Dec 2015 19:34:02 +1000 Subject: [Distutils] setup.py install using pip In-Reply-To: <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On 7 December 2015 at 17:07, Ronny Pfannschmidt wrote: > That's a straw man, this has enough inconsistency potential to break many > edge cases in ugly ways, > So global setup is out. No, global set up *isn't* out - the inevitable edge cases won't matter to an application integrator if none of the components they're using hit them, and installation related problems have the virtue of being relatively straightforward to pick up in a continuous integration system. Using such a switch wouldn't be the right fit for everyone, but that's not the same as it being entirely useless. > Projects themselves can really just switch to pip commands, same goes for > packagers and other tool makers > > Explicit is better than implicit, and in this case it also won't cost > additional maintenance on setuptools. > Please keep in mind, that setuptools is completely on volunteer time, and > the time given to it is scarce. Sure, that's why any decision on the desirability of this feature would be up to Jason as the setuptools lead. However, there's a trade-off to consider here, which is that offering this kind of global installer switch may help to lower the priority of some other easy_install enhancement requests. That's a risk assessment trade-off on future bug reports against attempted pip support vs future RFEs against easy_install itself, as well as a priority assessment against other open changes proposed for setuptools. 
Those assessments may well come down on the side of "not worth the hassle", but the scope of the proposed change still falls a long way short of being a "maintenance horror". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From erik.m.bray at gmail.com Mon Dec 7 11:21:42 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 7 Dec 2015 11:21:42 -0500 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On Mon, Dec 7, 2015 at 4:34 AM, Nick Coghlan wrote: > On 7 December 2015 at 17:07, Ronny Pfannschmidt > wrote: >> That's a straw man, this has enough inconsistency potential to break many >> edge cases in ugly ways, >> So global setup is out. > > No, global set up *isn't* out - the inevitable edge cases won't matter > to an application integrator if none of the components they're using > hit them, and installation related problems have the virtue of being > relatively straightforward to pick up in a continuous integration > system. > > Using such a switch wouldn't be the right fit for everyone, but that's > not the same as it being entirely useless. Exactly--as both a library developer / maintainer and system integrator I would find such a flag very useful (especially since I can just set it in a config file and forget it). It would be right for me. But wouldn't break anything for anyone else. Ironically, the default behavior of `setup.py install`, on projects that use setuptools, is to install an egg directory which is *definitely* not for everybody, especially not anymore. That's why --single-version-externally-managed exists. 
A --pip flag would be very much like --single-version-externally-managed (sort of a specialized extension of it) that also includes "do everything pip does" which includes installing dependencies and copying .egg-info/.dist-info to the appropriate location, which is what I want to replace all instances of `setup.py install` with. That includes users running `setup.py install`, who have a hard enough time as it is keeping up with Python build/installation "best practices" as it is. Anyways, it has been frequently requested that setuptools change the default behavior of the "install" command, so I wouldn't discount it as a valid use case. So far it hasn't been changed out of backward-compat concerns, so making it loosely opt-in represents a possible middle ground. >> Projects themselves can really just switch to pip commands, same goes for >> packagers and other tool makers >> >> Explicit is better than implicit, and in this case it also won't cost >> additional maintenance on setuptools. >> Please keep in mind, that setuptools is completely on volunteer time, and >> the time given to it is scarce. > > Sure, that's why any decision on the desirability of this feature > would be up to Jason as the setuptools lead. However, there's a > trade-off to consider here, which is that offering this kind of global > installer switch may help to lower the priority of some other > easy_install enhancement requests. Plus I've contributed to setuptools many times in the past (used to have commit access on distribute too). I'm offering to implement and maintain this feature if it's decided desirable. I think it *is* desirable by definition--I desire it, and I suspect others would as well. I'd be more interested in technical reasons why it's a bad idea but I haven't found any yet. 
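As a concrete sketch of the opt-in mechanism under discussion, reading the `[install] pip = True` setting quoted earlier in the thread could look roughly like this. Note that distutils has its own config-file machinery; this standard-library `configparser` version is only illustrative, and the section and option names are taken from the proposal.

```python
import configparser

# What a user's ~/.pydistutils.cfg or a project's setup.cfg might contain
# under this proposal (hypothetical content, per the thread):
CFG = """
[install]
pip = True
"""

parser = configparser.ConfigParser()
parser.read_string(CFG)

# getboolean() accepts the usual spellings: True/true/1/yes/on; the
# fallback keeps easy_install as the default when the flag is absent.
use_pip = parser.getboolean('install', 'pip', fallback=False)
print(use_pip)  # -> True
```

The `fallback=False` default is what would preserve backward compatibility: only users or projects that explicitly set the flag would get the pip-backed install path.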
> That's a risk assessment trade-off on future bug reports against > attempted pip support vs future RFEs against easy_install itself, as > well as a priority assessment against other open changes proposed for > setuptools. Those assessments may well come down on the side of "not > worth the hassle", but the scope of the proposed change still falls a > long way short of being a "maintenance horror". I think the only maintenance horror right now is easy_install :) Thanks, Erik From p.f.moore at gmail.com Mon Dec 7 11:45:30 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 7 Dec 2015 16:45:30 +0000 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On 7 December 2015 at 16:21, Erik Bray wrote: > Exactly--as both a library developer / maintainer and system > integrator I would find such a flag very useful (especially since I > can just set it in a config file and forget it). It would be right > for me. But wouldn't break anything for anyone else. > > Ironically, the default behavior of `setup.py install`, on projects > that use setuptools, is to install an egg directory which is > *definitely* not for everybody, especially not anymore. That's why > --single-version-externally-managed exists. A --pip flag would be > very much like --single-version-externally-managed (sort of a > specialized extension of it) that also includes "do everything pip > does" which includes installing dependencies and copying > .egg-info/.dist-info to the appropriate location, which is what I want > to replace all instances of `setup.py install` with. That includes > users running `setup.py install`, who have a hard enough time as it is > keeping up with Python build/installation "best practices" as it is. One thing that bothers me about this proposal. 
If someone does "pip install --no-binary" for your package, and you have the "--pip" flag in your setup.cfg, pip will use "setup.py install" to do the install. Which, if I understand this proposal correctly, will attempt to "fall back" to pip because "--pip" is in setup.cfg. Which results in an infinite loop of pip and setup.py invoking each other. I'm not sure how pip could detect a situation like this, so there's a risk of some *very* obscure corner cases, which I'm sure people will end up hitting. As a user level command line flag, "setup.py install --pip" isn't much better than "pip install ." As a project config, we get the issue noted above, and the user has to edit the project code to fix it. As a per-user or global config, we get the issue above, but we could reasonably say it's the user's mistake and the user has the means to fix it. But it's still not a great UX. It's quite possible I'm missing something here (at a minimum, I'm making huge assumptions about how the feature would be implemented) but I think the behaviour needs to be thought through in a bit more detail. Paul From erik.m.bray at gmail.com Mon Dec 7 13:58:02 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 7 Dec 2015 13:58:02 -0500 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On Mon, Dec 7, 2015 at 11:45 AM, Paul Moore wrote: > On 7 December 2015 at 16:21, Erik Bray wrote: >> Exactly--as both a library developer / maintainer and system >> integrator I would find such a flag very useful (especially since I >> can just set it in a config file and forget it). It would be right >> for me. But wouldn't break anything for anyone else. >> >> Ironically, the default behavior of `setup.py install`, on projects >> that use setuptools, is to install an egg directory which is >> *definitely* not for everybody, especially not anymore. 
That's why >> --single-version-externally-managed exists. A --pip flag would be >> very much like --single-version-externally-managed (sort of a >> specialized extension of it) that also includes "do everything pip >> does" which includes installing dependencies and copying >> .egg-info/.dist-info to the appropriate location, which is what I want >> to replace all instances of `setup.py install` with. That includes >> users running `setup.py install`, who have a hard enough time as it is >> keeping up with Python build/installation "best practices" as it is. > > One thing that bothers me about this proposal. If someone does "pip > install --no-binary" for your package, and you have the "--pip" flag > in your setup.cfg, pip will use "setup.py install" to do the install. > Which, if I understand this proposal correctly, will attempt to "fall > back" to pip because "--pip" is in setup.cfg. Which results in an > infinite loop of pip and setup.py invoking each other. I wasn't able to produce this problem. Even with --no-binary specified pip installs (by default) with --single-version-externally-managed. My prototype implicitly disables the --pip flag if --single-version-externally-managed was specified (true to the purpose of that flag). What *is* a problem is if --pip is in setup.cfg, and one invokes `pip install --egg .`. I wasn't quite able to make that go into an infinite loop, but it did invoke pip.main recursively, and stuff broke on the second invocation for reasons not clear to me. This is easily worked around, however, by detecting, from the install command, if we're already using pip. As a quick hack I added to finalize_options: if 'pip' in sys.modules: self.pip = False This did the trick. pip ran fine and installed the package as an egg using the standard setup.py install. I don't think a more robust solution would be hard. 
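Generalized a little, that quick hack might look like the following standalone sketch. The function name and the PIP_INSIDE_INVOCATION sentinel are made up for illustration -- pip sets no such variable today -- so only the sys.modules check reflects the hack actually described above:

```python
import sys


def already_inside_pip(environ, modules=None):
    """Heuristic guard: return True if this ``setup.py install`` run
    appears to have been launched by pip itself, so a hypothetical
    --pip option could disable itself instead of recursing into pip.

    ``environ`` is an os.environ-like mapping; ``modules`` defaults to
    sys.modules.  PIP_INSIDE_INVOCATION is an invented sentinel that
    pip *could* set; the sys.modules check is the quick hack from the
    message above (it works when pip.main() runs in-process).
    """
    if modules is None:
        modules = sys.modules
    return environ.get('PIP_INSIDE_INVOCATION') == '1' or 'pip' in modules
```

A hypothetical install command's finalize_options could then do `if already_inside_pip(os.environ): self.pip = False`, mirroring the two-line hack from the message.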
pip could set an environment variable or even a variable in the pip module itself to indicate that pip is already being invoked to install this package. There may even be something like that already in pip that I'm not aware of. > I'm not sure how pip could detect a situation like this, so there's a > risk of some *very* obscure corner cases, which I'm sure people will > end up hitting. As mentioned above, I don't think it should be pip's job to detect this situation. But if the setuptools install command can detect that we're already in pip then the job is done. > As a user level command line flag, "setup.py install --pip" isn't much > better than "pip install ." > As a project config, we get the issue noted above, and the user has to > edit the project code to fix it. > As a per-user or global config, we get the issue above, but we could > reasonably say it's the user's mistake and the user has the means to > fix it. But it's still not a great UX. I don't think so either. But it's also not great UX that we've told users for decades that the way to install a Python package is to run `python setup.py install`, but now the default behavior of that command (for packages using setuptools), which is to install a .egg, is old and bad. I get confusion about this from users *frequently*. It's only worse that egg installs and flat installs are incompatible with each other with respect to namespace packages. If it would help, `setup.py install --pip` could also display a warning that users should run `pip install .` instead. To cut down on noise I might only do this if the --pip option came from setup.cfg, rather than when it's explicitly asked for via the command line. This would serve to inform users who don't know any better. > It's quite possible I'm missing something here (at a minimum, I'm > making huge assumptions about how the feature would be implemented) > but I think the behaviour needs to be thought through in a bit more > detail. Completely agree!
I know there are corner cases I haven't thought of. I'm also undecided on whether --pip should invoke pip.main() within the same process, or run pip from a subprocess. The former seems sufficient but I don't know all the cases. Thanks, Erik From p.f.moore at gmail.com Mon Dec 7 14:40:40 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 7 Dec 2015 19:40:40 +0000 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On 7 December 2015 at 18:58, Erik Bray wrote: > I wasn't able to produce this problem. Even with --no-binary > specified pip installs (by default) with > --single-version-externally-managed. My prototype implicitly disables > the --pip flag if --single-version-externally-managed was specified > (true to the purpose of that flag). Ah - that was the bit I was missing, the --single-version-externally-managed flag can be used to trigger ignoring --pip. > What *is* a problem is if --pip is in setup.cfg, and one invokes `pip > install --egg .`. I wasn't quite able to make that go into an > infinite loop, but it did invoke pip.main recursively, and stuff broke > on the second invocation for reasons not clear to me. Yeah, but honestly I don't think pip install --egg is that important a use case. I may be wrong (there's lots of ways people use pip that I know nothing of :-)) but as a starting point it might be OK just to say that at the same time as the --pip flag was introduced, "pip install --egg" was deprecated (and we explicitly document that pip install --egg is known to interact badly with setup.py --pip). Anyway, thanks for the explanation, I see what you're intending now. 
Paul From erik.m.bray at gmail.com Mon Dec 7 15:14:10 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 7 Dec 2015 15:14:10 -0500 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On Mon, Dec 7, 2015 at 2:40 PM, Paul Moore wrote: > On 7 December 2015 at 18:58, Erik Bray wrote: >> I wasn't able to produce this problem. Even with --no-binary >> specified pip installs (by default) with >> --single-version-externally-managed. My prototype implicitly disables >> the --pip flag if --single-version-externally-managed was specified >> (true to the purpose of that flag). > > Ah - that was the bit I was missing, the > --single-version-externally-managed flag can be used to trigger > ignoring --pip. > >> What *is* a problem is if --pip is in setup.cfg, and one invokes `pip >> install --egg .`. I wasn't quite able to make that go into an >> infinite loop, but it did invoke pip.main recursively, and stuff broke >> on the second invocation for reasons not clear to me. > > Yeah, but honestly I don't think pip install --egg is that important a > use case. I may be wrong (there's lots of ways people use pip that I > know nothing of :-)) but as a starting point it might be OK just to > say that at the same time as the --pip flag was introduced, "pip > install --egg" was deprecated (and we explicitly document that pip > install --egg is known to interact badly with setup.py --pip). I'd be fine with that too. IIRC pip install --egg was introduced in part to work around problems with namespace packages. This doesn't completely eliminate the need for that workaround, but it does reduce it. In either case, if the --pip feature is implemented in setuptools it would still be good to at least have some workaround for installing with pip install --egg, at least temporarily. 
My goal here was to not break any existing functionality when --pip is specified (either via command line or setup.cfg). > Anyway, thanks for the explanation, I see what you're intending now. Great! From erik.m.bray at gmail.com Mon Dec 7 16:25:03 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 7 Dec 2015 16:25:03 -0500 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> Message-ID: On Fri, Nov 13, 2015 at 3:09 PM, Nathaniel Smith wrote: > On Nov 13, 2015 12:00 PM, "Alexander Walters" > wrote: >> >> import pip >> pip.install(PACKAGESPEC) >> >> something like that? > > This would be extremely handy if it could be made to work reliably... But > I'm skeptical about whether it can be made to work reliably. Consider all > the fun things that could happen once you start upgrading packages while > python is running, and might e.g. have half of an upgraded package already > loaded into memory. It's like the reloading problem but even more so. Sorry to resurrect an old thread, but I have an idea about how to do this somewhat safely, at least insofar as the running interpreter is concerned. It's still a terrible idea. Not such a terrible idea in principle, but as a practical matter in the context of Python it's probably a bad idea because it uses yet-another-.pth-hack. Consider a "partial install", wherein pip installs all files into a non-imported subdirectory of the target site-packages, along with a .pth file. This distribution is then considered "partially installed" in that the files are there (whether extracted from a wheel, or installed via distutils and the appropriate --root option or similar). For example, consider running >>> pip.install('requests') It would be up to the pip.install() command to determine whether or not the requests distribution was already installed. If it's not installed, it would proceed as normal.
For now I'm assuming the user would still have to manually run `import requests` after this. Auto-import would be nice, but is a separate issue. Now, if requests were already installed and imported we don't want to clobber the existing requests running in the interpreter. pip would install into the relevant site-packages: <...>/site-packages/requests-2.8.1.part/ requests/ requests-2.8.1.dist-info/ <...>/site-packages/requests-2.8.1.part.pth The .part/ directory contains the results of the partial installation (for example the contents of the wheel, for wheel installs). The .part.pth file is trickier, but could be something like this: $ cat requests-2.8.1.part.pth import inspect, shutil, sys, os, atexit;p = inspect.currentframe().f_locals['sitedir'];part = os.path.join(p, 'requests-2.8.1.part');files = os.path.isdir(part) and os.listdir(part);files and list(map(lambda s, d, f: (sys.modules['shutil'].rmtree(os.path.join(d, f), sys.modules['shutil'].move(os.path.join(s, f), os.path.join(d, f))), [part] * len(files), [p] * len(files), files));os.rmdir(part);pth = part + '.pth';os.path.isfile(pth) and atexit.register(os.unlink, os.path.abspath(pth)) This rifles through the contents of requests-2.8.1.part, deletes any existing directories in the parent site-packages of the same name, completes the install by moving the contents of the .part/ directory into the correct location and then deletes the .part/ directory. The .part.pth later deletes itself. By the time the user restarts the interpreter and runs `import requests` this will be completed. Obviously it would have to be communicated to the user that to upgrade an existing package they will have to restart the interpreter, which is less than ideal, but relates to a deeper limitation of Python that they should get used to anyways. At least this would enable in-process installs/upgrades. There are of course all kinds of problems with this solution too.
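Unrolled into ordinary Python, the job that one-liner performs might look like the hypothetical helper below. This is a sketch only: the real .pth file must keep all its code on a single line, and it defers deleting itself via atexit because site.py is still iterating over .pth files when it runs, whereas this sketch just deletes the file immediately.

```python
import os
import shutil


def complete_partial_install(sitedir, name):
    """Finish a hypothetical partial install: move everything out of
    ``<sitedir>/<name>.part/`` into ``sitedir``, clobbering any existing
    entries of the same name, then remove the now-empty .part directory
    and its companion .part.pth trigger file.
    """
    part = os.path.join(sitedir, name + '.part')
    if not os.path.isdir(part):
        return  # nothing partially installed; nothing to do
    for entry in os.listdir(part):
        dest = os.path.join(sitedir, entry)
        # Clobber the previously installed version, if any.
        if os.path.isdir(dest):
            shutil.rmtree(dest)
        elif os.path.exists(dest):
            os.unlink(dest)
        shutil.move(os.path.join(part, entry), dest)
    os.rmdir(part)
    pth = part + '.pth'
    if os.path.isfile(pth):
        os.unlink(pth)
```

Note that the quoted one-liner's lambda passes the result of shutil.move as the second argument to shutil.rmtree; the sketch above separates the two steps, which is what the one-liner appears to intend.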
It should perhaps only work in a virtualenv and/or .local site-packages (or at least somewhere that the user will have write permissions on the next interpreter run), and probably other error handling too. The above .pth file could also be simplified by invoking a function in pip to complete any partial installs. Erik From contact at ionelmc.ro Tue Dec 8 01:56:49 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 8 Dec 2015 08:56:49 +0200 Subject: [Distutils] Installing packages using pip In-Reply-To: <20151116162503.GA22281@platonas> References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> <20151116162503.GA22281@platonas> Message-ID: On Mon, Nov 16, 2015 at 6:25 PM, Marius Gedminas wrote: > What you can do Linux that you cannot do on Windows is delete a shared > library file while it's mapped into a process's address space. Then > Linux lets you create a new file with the same name, while the old file > stays around, nameless, until it's no longer used, at which point the > disk space gets garbage-collected. (If we can call reference counting > "garbage collection".) > > The result is as you said: existing processes keep running the old code > until you restart them. There are tools (based on lsof, AFAIU) that > check for this situation and remind you to restart daemons. > Not sure what exactly was going on but whenever I did that on linux I got the most peculiar segfaults and failures. It is certainly not a safe thing to do, even if linux lets you do it. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Tue Dec 8 13:25:20 2015 From: rdmurray at bitdance.com (R.
David Murray) Date: Tue, 08 Dec 2015 13:25:20 -0500 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> <20151116162503.GA22281@platonas> Message-ID: <20151208182522.73D952510C8@webabinitio.net> On Tue, 08 Dec 2015 08:56:49 +0200, contact at ionelmc.ro wrote: > On Mon, Nov 16, 2015 at 6:25 PM, Marius Gedminas wrote: > > > What you can do Linux that you cannot do on Windows is delete a shared > > library file while it's mapped into a process's address space. Then > > Linux lets you create a new file with the same name, while the old file > > stays around, nameless, until it's no longer used, at which point the > > disk space gets garbage-collected. (If we can call reference counting > > "garbage collection".) > > > > The result is as you said: existing processes keep running the old code > > until you restart them. There are tools (based on lsof, AFAIU) that > > check for this situation and remind you to restart daemons. > > > > Not sure what exactly was going on but whenever I did that on linux I got > the most peculiar segfaults and failures. It is certainly not a safe thing > to do, even if linux lets you do it. I'm not sure what you did, because to my understanding it certainly should be safe on linux, at least on posix compliant file systems. 
--David From greg.ewing at canterbury.ac.nz Tue Dec 8 17:51:48 2015 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 09 Dec 2015 11:51:48 +1300 Subject: [Distutils] Installing packages using pip In-Reply-To: <20151208182522.73D952510C8@webabinitio.net> References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> <20151116162503.GA22281@platonas> <20151208182522.73D952510C8@webabinitio.net> Message-ID: <56675F04.3090601@canterbury.ac.nz> > On Tue, 08 Dec 2015 08:56:49 +0200, contact at ionelmc.ro wrote: > >>Not sure what exactly was going on but whenever I did that on linux I got >>the most peculiar segfaults and failures. It is certainly not a safe thing >>to do, even if linux lets you do it. Are you sure you were actually unlinking the old file and creating a new one, rather than overwriting the existing file? The latter would certainly cause trouble if you were able to do it. -- Greg From contact at ionelmc.ro Tue Dec 8 18:23:03 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Wed, 9 Dec 2015 01:23:03 +0200 Subject: [Distutils] Installing packages using pip In-Reply-To: <56675F04.3090601@canterbury.ac.nz> References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> <20151116162503.GA22281@platonas> <20151208182522.73D952510C8@webabinitio.net> <56675F04.3090601@canterbury.ac.nz> Message-ID: On Wed, Dec 9, 2015 at 12:51 AM, Greg Ewing wrote: > Are you sure you were actually unlinking the old file > and creating a new one, rather than overwriting the > existing file? The latter would certainly cause trouble > if you were able to do it. > I had two instances of this problem: - pip upgrading (pip removes old version first) some package with C extensions while processes using it were still running - removing (yes, rm -rf, not in place) and recreating a virtualenv while processes using it were still running It's wrong to think "should be safe on linux".
Linux lets you do very stupid things. But that doesn't make them right or feasible to do in the general case. You can do it, sure, but the utility and safety are limited and very specific in scope. You gotta applaud Windows for getting this right. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro From njs at pobox.com Tue Dec 8 19:18:30 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 8 Dec 2015 16:18:30 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> <20151116162503.GA22281@platonas> <20151208182522.73D952510C8@webabinitio.net> <56675F04.3090601@canterbury.ac.nz> Message-ID: On Tue, Dec 8, 2015 at 3:23 PM, Ionel Cristian Mărieș wrote: > > > On Wed, Dec 9, 2015 at 12:51 AM, Greg Ewing > wrote: > >> Are you sure you were actually unlinking the old file >> and creating a new one, rather than overwriting the >> existing file? The latter would certainly cause trouble >> if you were able to do it. >> > > I had two instances of this problem: > > - pip upgrading (pip removes old version first) some package with C > extensions while processes using it were still running > - removing (yes, rm -rf, not in place) and recreating a virtualenv > while processes using it were still running > > It's wrong to think "should be safe on linux". Linux lets you do very > stupid things. But that doesn't make them right or feasible to do in the > general case. > > You can do it, sure, but the utility and safety are limited and very > specific in scope. You gotta applaud Windows for getting this right.
> It's true that this feature of Unix filesystems doesn't automatically make all forms of upgrade safe; in particular, it breaks in cases where an already-running process needs to open some sort of resource/plugin file, and an upgrade process has removed the file or replaced it with an incompatible one in between when the program was started and when it tried to access the resource. But, seriously, I've been swapping out libraries like libc on running systems on a weekly basis for years (this is pretty standard for debian users), and it basically just works. It's definitely better to reboot after such upgrades to make sure that the new version is in use (e.g. a new version of openssl with security fixes), and to avoid issues like the ones described in the previous paragraph, but generally speaking it's easily possible to have a program that runs fine despite its virtualenv having been deleted out from under it -- the rule is simply that any open/mmap'ed file will continue, perfectly reliably, to refer to the original file until the program exits, even if that file no longer has a name in the filesystem (which is what rm does). A common example of where you can get weirdness in Python is that Python waits until it has to actually print a traceback before loading the original source code (.py file -- most of the time it just uses the .pyc file), so if you upgrade a python library in-place then existing processes will continue to execute the original code and show correct file names and line numbers in tracebacks, but the actual source lines printed in tracebacks will be incorrect. I don't really care about trying to rank Windows vs Unix as being "better", obviously there are trade-offs here. (Though it would be nice if Windows had SOME more reasonable solution to the upgrade problem.) 
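The unlink-while-open behaviour described above is easy to demonstrate directly (a POSIX-only sketch; on Windows the unlink of an open file would typically fail with a sharing violation):

```python
import os
import tempfile

# On POSIX filesystems an open file descriptor keeps the underlying
# data alive even after the file's last name is unlinked -- which is
# why replacing a loaded library on Linux mostly "just works".
fd, path = tempfile.mkstemp()
f = os.fdopen(fd, 'w+')
f.write('old version')
f.flush()

os.unlink(path)                    # the *name* is gone...
assert not os.path.exists(path)

f.seek(0)
print(f.read())                    # -> old version (data still readable)
f.close()                          # disk space is reclaimed only now
```

This is exactly the window in which a freshly upgraded file on disk and the old contents seen by a running process can coexist.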
Just want to make sure that the actual semantics here are clear -- there's nothing mysterious about the Unix semantics, and it's pretty easy to predict what will work and what won't once you understand what's going on. -n -- Nathaniel J. Smith -- http://vorpus.org From robertc at robertcollins.net Tue Dec 8 20:55:00 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 9 Dec 2015 14:55:00 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: Updated - tl;dr: - specify encoding of stdout streams that matter - change build command template to a list of strings so we don't need to attempt to address shell escaping for paths with spaces (or other such things) - be clear about which variables apply in command templating, and which are environment variables [we don't want an implicit dependency on shell, since this is for Windows too]. I've written a proof of concept of a setuptools shim (looks like setup.py, calls the build system from pypa.json) - right now it only handles 'develop', so it's missing wheel support, and for pip < 6 also install support. It is however sufficient to let us reason about things. https://pypi.python.org/pypi/setuptools_shim The thing I'm least happy about is that implementing install support will require recursively calling back into pip, or else reimplementing pip's wheel installation logic - because sufficiently old pips won't call wheel at all. And even modern pips can be told *not to call wheel*.
-Rob diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst index 762cd88..a6e4712 100644 --- a/build-system-abstraction.rst +++ b/build-system-abstraction.rst @@ -84,18 +84,19 @@ bootstrap_requires Optional list of dependency specifications [#dependencyspec] that must be installed before running the build tool. For instance, if using flit, then the requirements might be:: - + bootstrap_requires: ["flit"] build_command - A mandatory Python format string [#strformat]_ describing the command to - run. For instance, if using flit then the build command might be:: + A mandatory key, this is a list of Python format strings [#strformat]_ + describing the command to run. For instance, if using flit then the build + command might be:: - build_command: "flit" + build_command: ["flit"] If using a command which is a runnable module fred:: - {PYTHON} -m fred + build_command: ["{PYTHON}", "-m", "fred"] Process interface ----------------- @@ -114,14 +115,23 @@ redirected and no communication with the user is possible. As usual with processes, a non-zero exit status indicates an error. -Available variables -------------------- +Available format variables +-------------------------- PYTHON The Python interpreter in use. This is important to enable calling things which are just Python entry points. - ${PYTHON} -m foo + {PYTHON} -m foo + +Available environment variables +------------------------------- + +These variables are set by the caller of the build system and will always be +available. + +PYTHON + As for format variables. PYTHONPATH Used to control sys.path per the normal Python mechanisms. @@ -133,9 +143,9 @@ There are a number of separate subcommands that build systems must support. The examples below use a build_command of ``flit`` for illustrative purposes. build_requires - Query build requirements. Build requirements are returned as a JSON - document with one key ``build_requires`` consisting of a list of - dependency specifications [#dependencyspec]_. 
Additional keys must be + Query build requirements. Build requirements are returned as a UTF-8 + encoded JSON document with one key ``build_requires`` consisting of a list + of dependency specifications [#dependencyspec]_. Additional keys must be ignored. The build_requires command is the only command run without setting up a build environment. @@ -145,9 +155,9 @@ build_requires metadata Query project metadata. The metadata and only the metadata should - be output on stdout. pip would run metadata just once to determine what - other packages need to be downloaded and installed. The metadata is output - as a wheel METADATA file per PEP-427 [#pep427]_. + be output on stdout in UTF-8 encoding. pip would run metadata just once to + determine what other packages need to be downloaded and installed. The + metadata is output as a wheel METADATA file per PEP-427 [#pep427]_. Note that the metadata generated by the metadata command, and the metadata present in a generated wheel must be identical. @@ -169,7 +179,7 @@ wheel -d OUTPUT_DIR develop [--prefix PREFIX] [--root ROOT] Command to do an in-place 'development' installation of the project. Stdout and stderr have no semantic meaning. - + Not all build systems will be able to perform develop installs. If a build system cannot do develop installs, then it should error when run. 
Note that doing so will cause use operations like ``pip install -e foo`` to From contact at ionelmc.ro Tue Dec 8 22:10:40 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Wed, 9 Dec 2015 05:10:40 +0200 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> <20151116162503.GA22281@platonas> <20151208182522.73D952510C8@webabinitio.net> <56675F04.3090601@canterbury.ac.nz> Message-ID: On Wed, Dec 9, 2015 at 2:18 AM, Nathaniel Smith wrote: > Just want to make sure that the actual semantics here are clear -- there's > nothing mysterious about the Unix semantics, and it's pretty easy to > predict what will work and what won't once you understand what's going on. You don't have any guarantees that a running process won't try to use stuff from disk later on, do you? If it segfaults (and it does in my "general usecases") it's hard to debug - you got nothing conveniently on disk. And no, "upgrading libc" is not a general usecase, it's just one of those few things that work because they were written in a very specific way, and you should not apply that technique in the general usecase. If you want, I can provide you some reproducers but let's not continue this "but, seriously, it works fine for me" kind of discussion. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro From njs at pobox.com Tue Dec 8 22:38:20 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 8 Dec 2015 19:38:20 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> <20151116162503.GA22281@platonas> <20151208182522.73D952510C8@webabinitio.net> <56675F04.3090601@canterbury.ac.nz> Message-ID: On Tue, Dec 8, 2015 at 7:10 PM, Ionel Cristian Mărieș
wrote: > > On Wed, Dec 9, 2015 at 2:18 AM, Nathaniel Smith wrote: >> >> Just want to make sure that the actual semantics here are clear -- there's >> nothing mysterious about the Unix semantics, and it's pretty easy to predict >> what will work and what won't once you understand what's going on. > > > You don't have any guarantees that running process won't try to use stuff > from disk later on do you? If it segfaults (and it does in my "general > usecases") it's hard to debug - you got nothing conveniently on disk. Yes, exactly: if a running process calls 'open' after a file has been replaced then it gets the new file; if it calls 'open' before a file has been replaced then it gets the old file (and keeps it until it calls 'close', even if it's deleted or renamed-over in the mean time). There's nothing intrinsically segfaulty about this, but sure, if you write your program in such a way that it (a) opens a file while running, and (b) segfaults if the file it wants to open is missing or from the wrong version, then yeah, this will trigger that segfault. Probably it would be better to write your program so that missing or corrupted files produce a more controlled error rather than a segfault, and it would then be more robust regardless of upgrade issues, but I can certainly believe that such buggy programs exist. > And > no, "upgrading libc" is not a general usecase, it's just one of those few > things that work because they were written in a very specific way, and you > should not apply that technique in the general usecase. It's the general technique that Linux systems always use when upgrading executables and shared libraries, which are the cases that Windows handles differently, so it's probably worth understanding, is all. -n -- Nathaniel J. 
Smith -- http://vorpus.org

From erik.m.bray at gmail.com Wed Dec 9 14:39:58 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Wed, 9 Dec 2015 14:39:58 -0500 Subject: [Distutils] Partial installs for in-process distribution upgrades (was: Installing packages using pip) Message-ID: Apologies for resending this--my original message got buried in unrelated commentary about how *nix filesystems work. Resending with new subject line... On Fri, Nov 13, 2015 at 3:09 PM, Nathaniel Smith wrote: > On Nov 13, 2015 12:00 PM, "Alexander Walters" > wrote: >> >> import pip >> pip.install(PACKAGESPEC) >> >> something like that? > > This would be extremely handy if it could be made to work reliably... But > I'm skeptical about whether it can be made to work reliably. Consider all > the fun things that could happen once you start upgrading packages while > python is running, and might e.g. have half of an upgraded package already > loaded into memory. It's like the reloading problem but even more so. Sorry to resurrect an old thread, but I have an idea about how to do this somewhat safely, at least insofar as the running interpreter is concerned. It's still a terrible idea--not such a terrible idea in principle, but as a practical matter in the context of Python it's probably a bad idea because it uses yet-another-.pth-hack. Consider a "partial install", wherein pip installs all files into a non-imported subdirectory of the target site-packages, along with a .pth file. This distribution is then considered "partially installed" in that the files are there (whether extracted from a wheel, or installed via distutils and the appropriate --root option or similar). For example, consider running >>> pip.install('requests') It would be up to the pip.install() command to determine whether or not the requests distribution was already installed. If it's not installed it would proceed as normal. For now I'm assuming the user would still have to manually run `import requests` after this.
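The open-before/open-after rename semantics from the filesystem discussion earlier in this thread can be verified in a few lines. This is a standalone sketch of POSIX behaviour; on Windows, replacing a file that another handle has open is typically refused, which is exactly the contrast being discussed:

```python
# Sketch of the POSIX upgrade semantics: a handle opened *before* a
# rename-over keeps reading the old inode, while an open() issued
# afterwards sees the replacement file.  (On Windows the os.replace
# step itself would usually fail while `handle` is open.)
import os
import tempfile

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "module.py")
staged = os.path.join(workdir, "module.py.new")

with open(target, "w") as f:
    f.write("old version")

handle = open(target)        # opened *before* the upgrade

with open(staged, "w") as f:
    f.write("new version")
os.replace(staged, target)   # atomic rename-over, like an installer does

before = handle.read()       # fd still points at the old inode
handle.close()
with open(target) as f:
    after = f.read()         # a fresh open() sees the new file

print(before, "/", after)
```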
Auto-import would be nice, but is a separate issue. Now, if requests were already installed and imported we don't want to clobber the existing requests running in the interpreter. pip would instead install into the relevant site-packages: <...>/site-packages/requests-2.8.1.part/ requests/ requests-2.8.1.dist-info/ <...>/site-packages/requests-2.8.1.part.pth The .part/ directory contains the results of the partial installation (for example the contents of the wheel, for wheel installs). The .part.pth file is trickier, but could be something like this: $ cat requests-2.8.1.part.pth import inspect, shutil, sys, os, atexit;p = inspect.currentframe().f_locals['sitedir'];part = os.path.join(p, 'requests-2.8.1.part');files = os.path.isdir(part) and os.listdir(part);files and list(map(lambda s, d, f: (sys.modules['shutil'].rmtree(os.path.join(d, f), True), sys.modules['shutil'].move(os.path.join(s, f), os.path.join(d, f))), [part] * len(files), [p] * len(files), files));os.rmdir(part);pth = part + '.pth';os.path.isfile(pth) and atexit.register(os.unlink, os.path.abspath(pth)) This rifles through the contents of requests-2.8.1.part, deletes any existing directories in the parent site-packages of the same name, completes the install by moving the contents of the .part/ directory into the correct location, and then deletes the .part/ directory. The .part.pth later deletes itself. By the time the user restarts the interpreter and runs `import requests` this will be completed. Obviously it would have to be communicated to the user that to upgrade an existing package they will have to restart the interpreter, which is less than ideal, but relates to a deeper limitation of Python that they should get used to anyway. At least this would enable in-process installs/upgrades. There are of course all kinds of problems with this solution too.
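Unrolled into an ordinary function, the .part.pth one-liner above does roughly the following. This is an illustrative sketch of the proposal, not code from pip; the function name and arguments are hypothetical:

```python
# Roughly what the requests-2.8.1.part.pth one-liner does, unrolled
# into a readable function.  A sketch of the proposal, not pip code.
import atexit
import os
import shutil

def complete_partial_install(sitedir, name="requests-2.8.1"):
    """Finish a partial install: move staged files from <name>.part/
    into sitedir, then arrange for the triggering .pth to delete itself."""
    part = os.path.join(sitedir, name + ".part")
    if not os.path.isdir(part):
        return
    for entry in os.listdir(part):
        dest = os.path.join(sitedir, entry)
        # Clobber any stale copy of the same package or dist-info dir.
        shutil.rmtree(dest, ignore_errors=True)
        shutil.move(os.path.join(part, entry), dest)
    os.rmdir(part)
    pth = part + ".pth"
    if os.path.isfile(pth):
        # site.py is executing this .pth right now, so defer deleting
        # it until the interpreter exits.
        atexit.register(os.unlink, os.path.abspath(pth))

# Demo against a throwaway "site-packages" directory:
import tempfile
site = tempfile.mkdtemp()
staged = os.path.join(site, "requests-2.8.1.part", "requests")
os.makedirs(staged)
open(os.path.join(staged, "__init__.py"), "w").close()
open(os.path.join(site, "requests-2.8.1.part.pth"), "w").close()
complete_partial_install(site)
print(sorted(os.listdir(site)))
```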
It should perhaps only work in a virtualenv and/or .local site-packages (or at least somewhere that the user will have write permissions on the next interpreter run), and it would probably need other error handling too. The above .pth file could also be simplified by invoking a function in pip to complete any partial installs. Erik

From ralf.gommers at gmail.com Wed Dec 9 14:59:26 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 9 Dec 2015 20:59:26 +0100 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On Wed, Dec 9, 2015 at 2:55 AM, Robert Collins wrote: > Updated - tl;dr: > > The thing I'm least happy about is that implementing install support > will require recursively calling back into pip, that or reimplementing > the installation of wheels logic from within pip - because > sufficiently old pip's won't call wheel at all. You're specifying a new interface here, and updating pip itself is quite easy. So why would you do things you're not happy about to support "sufficiently old pip's"? > And even modern pips > can be told *not to call wheel*. Isn't that something you can ignore? If the plan for pip anyway is to always go sdist-wheel-install, why support this flag for a new build interface? Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From robertc at robertcollins.net Wed Dec 9 15:56:21 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 10 Dec 2015 09:56:21 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On 10 December 2015 at 08:59, Ralf Gommers wrote: > > > On Wed, Dec 9, 2015 at 2:55 AM, Robert Collins > wrote: >> >> Updated - tl;dr: >> >> The thing I'm least happy about is that implementing install support >> will require recursively calling back into pip, that or reimplementing >> the installation of wheels logic from within pip - because >> sufficiently old pip's won't call wheel at all. > > > You're specifying a new interface here, and updating pip itself is quite > easy. So why would you do things you're not happy about to support > "sufficiently old pip's"? Because the goal is to make the barrier to entry for new build systems low. Right now, for instance, flit doesn't support building an sdist (From its PyPI page: "Flit only creates packages in the new "wheel" format. People using older versions of pip (<1.5) or easy_install will not be able to install them. People may also want a traditional sdist for other reasons, such as Linux distro packaging. I hope that these problems will diminish over time.") The current de facto contract is: - egg-info - install - wheel - develop The issues with this are: - install is a responsibility for pip/conda etc, not build systems - ditto develop - there's no way to bootstrap the build system - there's no way to determine build requirements Of those four things, two are very low hanging fruit (the last two), one is nontrivial - develop mode - and one is in between - since we're requiring 'wheel', "install" is something pip can do itself with a trivial change... but it shuffles the complexity onto the setuptools shim, which we likely want, to enable rapid adoption.
So we can choose: - delay the ability for pip/conda etc to own the install stage [by putting install in the contract] - centralise that complexity by having setuptools_shim wear it So - so far the second is better I think, I just don't like it :) >> And even modern pips >> can be told *not to call wheel*. > > > Isn't that something you can ignore? If the plan for pip anyway is to always > go sdist-wheel-install, why support this flag for a new build interface? Well, there's still debate about that. I think it's waste and will piss developers off (heck, even in tox OpenStack folk find sdist too long and disable it routinely - we've added CI checks that sdist doesn't error to allow keeping the local developer workflow smooth). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud

From donald at stufft.io Wed Dec 9 16:07:04 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 9 Dec 2015 16:07:04 -0500 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: > On Dec 9, 2015, at 3:56 PM, Robert Collins wrote: > > On 10 December 2015 at 08:59, Ralf Gommers wrote: >> >>> And even modern pips >>> can be told *not to call wheel*. >> >> >> Isn't that something you can ignore? If the plan for pip anyway is to always >> go sdist-wheel-install, why support this flag for a new build interface? > > Well, there's still debate about that. I think it's waste and will piss > developers off (heck, even in tox OpenStack folk find sdist too long > and disable it routinely - we've added CI checks that sdist doesn't > error to allow keeping the local developer workflow smooth). > I'm in the process of moving so I'm a bit scatter-brained at the moment and I don't have the time to look into the specifics, but if this is for the build interface (vs the shim) then I don't think we should support plain ``install``.
Opting into the new format should mandate the capability of producing a wheel and then installing from that instead of being able to directly install. If we consider the setuptools/distutils era to be "Make it Work", then we're now at "Make it Right"; making it fast can come later, but sacrificing correctness for speed isn't something I think we should be doing, and so speed arguments (vs why it's more correct to do X instead of Y) don't matter much to me. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From robertc at robertcollins.net Wed Dec 9 16:17:43 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 10 Dec 2015 10:17:43 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On 10 December 2015 at 10:07, Donald Stufft wrote: > >> On Dec 9, 2015, at 3:56 PM, Robert Collins wrote: >> >> On 10 December 2015 at 08:59, Ralf Gommers wrote: >>> >>>> And even modern pips >>>> can be told *not to call wheel*. >>> >>> >>> Isn't that something you can ignore? If the plan for pip anyway is to always >>> go sdist-wheel-install, why support this flag for a new build interface? >> >> Well, there's still debate about that. I think it's waste and will piss >> developers off (heck, even in tox OpenStack folk find sdist too long >> and disable it routinely - we've added CI checks that sdist doesn't >> error to allow keeping the local developer workflow smooth).
>> > I'm in the process of moving so I'm a bit scatter-brained at the moment and I don't have the time to look into the specifics, but if this is for the build interface (vs the shim) then I don't think we should support plain ``install``. Opting into the new format should mandate the capability of producing a wheel and then installing from that instead of being able to directly install. It is neither; Ralf was referring to the long term pip internals stuff. The new format does mandate wheel and does not support direct 'install', nor require building an sdist. > If we consider the setuptools/distutils era to be "Make it Work", then we're now at "Make it Right"; making it fast can come later, but sacrificing correctness for speed isn't something I think we should be doing, and so speed arguments (vs why it's more correct to do X instead of Y) don't matter much to me. Developer speed is a correctness issue: this took ages to get my head fully around, but at the heart of it, there's a very narrow window between in-flow and breaking-flow and the reason developers care so much about latency of local operations is staying in-flow. Yes, there are a wide set of correctness issues to preserve, but if we can't do that and retain a certain minimum performance level, developers will route around us, and we'll be legislating things folk will ignore. The current interface, preserving develop, is sufficient for now; I was commenting in my mail on the longer term view that Ralf was suggesting: tl;dr - this whole sub-bit is not a subject for now.
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud

From robertc at robertcollins.net Wed Dec 9 16:35:57 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 10 Dec 2015 10:35:57 +1300 Subject: [Distutils] Partial installs for in-process distribution upgrades (was: Installing packages using pip) In-Reply-To: References: Message-ID: On 10 December 2015 at 08:39, Erik Bray wrote: > Apologies for resending this--my original message got buried in > unrelated commentary about how *nix filesystems work. Resending with > new subject line... I think it's going to break in nasty ways for users on Windows, on a regular basis. For instance: start two interpreters at once with a partially installed thing present and watch them race. For instance: put a .dll extension in a partially installed thing and start a second interpreter without closing the one that triggered the install. Watch the move error because the .dll is locked. (And this is ignoring the questions around module namespaces in sys.modules and the poorly-supported-in-practice semantics of reload()). -Rob

From ncoghlan at gmail.com Wed Dec 9 21:19:02 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Dec 2015 12:19:02 +1000 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On 10 December 2015 at 07:17, Robert Collins wrote: > On 10 December 2015 at 10:07, Donald Stufft wrote: >> If we consider the setuptools/distutils era to be "Make it Work", then we're now at "Make it Right"; making it fast can come later, but sacrificing correctness for speed isn't something I think we should be doing, and so speed arguments (vs why it's more correct to do X instead of Y) don't matter much to me.
> > Developer speed is a correctness issue: this took ages to get my head > fully around, but at the heart of it, there's a very narrow window > between in-flow and breaking-flow and the reason developers care so > much about latency of local operations is staying in-flow. +1 I think this is also the practical difference between things like unit tests, functional tests and acceptance tests: it's not about who writes them, or how they're written, but which kinds of activities trigger them being run. The key timelines: * local development cycle (checking what you're currently working on) * per-change activities (checking suitability for your fellow contributors) * per-release activities (checking suitability for end users) How those activities are carried out and the exact timelines involved vary greatly based on the kind of software you're writing (e.g. safety critical software vs a language runtime vs your typical web application), and the context you're writing it in (e.g. a large organisation vs a start-up vs a personal side project), but it's the first one that has an essentially human element to it: how long we're able to wait for a build-and-test cycle before we get distracted and start thinking about something other than the change we're working on. The value of setup.py develop and pip -e is that they take the sdist->wheel->install process out of the local development cycle (most of the time), so the speed of that can be governed by your test suite and your test runner's ability to tailor the tests it runs to the code you're changing, rather than the duration of your build and install process. Regarding the digression into needing to duplicate pip's wheel installation logic in the setuptools abstract buildsystem shim, I think that's a reasonable thing to do as an initial step.
Armed with those two use cases, we can then consider how best to factor that code out into a shared library that pip vendors (perhaps the existing packaging library used for PEP 440 version specifier support, perhaps a new library). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From mal at egenix.com Thu Dec 10 04:28:28 2015 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 10 Dec 2015 10:28:28 +0100 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: <566945BC.7000200@egenix.com> In all this discussion, please don't forget that distutils and setuptools differentiate between building the code and creating a distribution file. The latter is not restricted to just the sdist and wheel formats. distutils can create a wide variety of other distribution formats such as MSI files, Windows .exe installers, binary in-place distributions. With extensions it's possible to create the ActiveState package format, RPMs, DEBs, Solaris PKG files and other formats such as our eGenix prebuilt format or web installers. Apart from that it's also possible to have distutils completely drop the distribution file generation and go straight to installation after the build step, e.g. when not using a package manager at all. IMO, it would be better to use the existing "setup.py build" and "setup.py bdist_wheel" APIs to create a build system abstraction, since that'll keep the other existing distribution methods working in the same context, which is important since pip is not the only way to distribute Python packages. The PEP's ideas as well as many other approaches to building packages can be implemented using a "setup.py build" and "setup.py bdist_wheel" interface ("bdist_wheel" will implicitly run "build").
To specifically use the PEP's mechanism, a package could be created which overrides and implements the PEP's build strategy, e.g. "pep778build" (assuming the PEP number is 778 as an example). The dependency of a package on this pep778build package would then signal the compatibility of the package with the PEP's build mechanism. In summary: As long as the final result of a call to "setup.py bdist_wheel" is a .whl file in dist/, pip should be able to continue working as normal (and as it does today), regardless of what "setup.py build" does under the hood. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Dec 10 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/

From p.f.moore at gmail.com Thu Dec 10 05:16:05 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 10 Dec 2015 10:16:05 +0000 Subject: [Distutils] Partial installs for in-process distribution upgrades (was: Installing packages using pip) In-Reply-To: References: Message-ID: On 9 December 2015 at 21:35, Robert Collins wrote: > On 10 December 2015 at 08:39, Erik Bray wrote: >> Apologies for resending this--my original message got buried in >> unrelated commentary about how *nix filesystems work. Resending with >> new subject line... > > I think it's going to break in nasty ways for users on Windows, on a > regular basis. Agreed.
I think that, regardless of debates on how well (or otherwise) particular operating systems handle "in place upgrades", it's a bad idea in principle to try to upgrade a running system, unless that system was designed from the ground up for such behaviour (e.g., Erlang). Even if it can be made to work, documenting the implications and restrictions would be a nightmare. I think that it's better to take a simple approach, and say that you should never upgrade a Python package that's in use, and it's recommended not to import a package that was installed since your Python session started (this latter probably does work in theory, but "unsupported" is an easier position to take than trying to make sure we didn't miss anything). If other languages in Python's niche *do* properly support in-place upgrading, then we can reconsider, but in that case we'd have prior art to look at to see how they did it. Paul

From robertc at robertcollins.net Thu Dec 10 13:06:26 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 11 Dec 2015 07:06:26 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: <566945BC.7000200@egenix.com> References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> <566945BC.7000200@egenix.com> Message-ID: On 10 December 2015 at 22:28, M.-A. Lemburg wrote: > In all this discussion, please don't forget that distutils and > setuptools differentiate between building the code and creating > a distribution file. > The latter is not restricted to just the sdist and wheel formats. > distutils can create a wide variety of other distribution formats > such as MSI files, Windows .exe installers, binary in-place > distributions. With extensions it's possible to create the > ActiveState package format, RPMs, DEBs, Solaris PKG files and > other formats such as our eGenix prebuilt format or web > installers.
None of which can be installed by pip, which is the blessed installation path, nor can most of them be uploaded to PyPI, which is the index format we're working to. > Apart from that it's also possible to have distutils > completely drop the distribution file generation and go > straight to installation after the build step, e.g. > when not using a package manager at all. This is something there has been clear consensus on getting away from. Different distributions (e.g. RPM/Deb/Conda/etc) may have different requirements around how installation is done, and getting away from the cross product to a single interop point is a key design point. > IMO, it would be better to use the existing "setup.py build" > and "setup.py bdist_wheel" APIs to create a build system > abstraction, since that'll keep the other existing distribution > methods working in the same context, which is important since > pip is not the only way to distribute Python packages. Since 'build' and 'bdist_wheel' are de facto, and since setuptools is infinitely variable... I'd like it if you could specify what you want, rather than just referring to those commands. As it stands I've read your mail but don't understand what is missing from the proposed contract, nor what things are present but objectionable. > The PEP's ideas as well as many other approaches to building > packages can be implemented using a "setup.py build" > and "setup.py bdist_wheel" interface ("bdist_wheel" will > implicitly run "build"). A second key design input was that competing build system authors *do not want* to have a setup.py in the tree of projects using them (specifically the flit folk put this forward) - because they aren't setuptools, and blindly running setuptools commands against their UI won't work... and they don't want to offer all the complexity of setuptools commands because 99% of it is irrelevant to them and their users.
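The narrow contract being argued for here - a backend that can state its build requirements and produce a wheel, with installation left to the installer - can be sketched as a toy module. Hook names and the wheel contents below are illustrative assumptions, not the PEP's actual spelling:

```python
# Toy sketch of a minimal build backend: two hooks, no setup.py, and
# no install logic (installing the wheel is the installer's job).
# Hook names are hypothetical; the PEP under discussion defines its own.
import os
import zipfile

def get_build_requires():
    """What the frontend must install before invoking the backend."""
    return ["wheel"]

def build_wheel(source_dir, wheel_dir):
    """Build a wheel from source_dir into wheel_dir; return its filename.
    (source_dir is unused in this toy; a real backend would compile
    and package the tree it finds there.)"""
    name = "toy-0.1-py2.py3-none-any.whl"
    path = os.path.join(wheel_dir, name)
    with zipfile.ZipFile(path, "w") as whl:
        whl.writestr("toy/__init__.py", "__version__ = '0.1'\n")
        whl.writestr("toy-0.1.dist-info/METADATA",
                     "Metadata-Version: 2.0\nName: toy\nVersion: 0.1\n")
        whl.writestr("toy-0.1.dist-info/WHEEL",
                     "Wheel-Version: 1.0\nRoot-Is-Purelib: true\n")
    return name

# Frontend's view: ask for build requirements, then ask for a wheel.
import tempfile
out_dir = tempfile.mkdtemp()
built = build_wheel(".", out_dir)
print(get_build_requires(), built)
```

(A real wheel would also carry a RECORD file; it is omitted here to keep the sketch short.)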
> To specifically use the PEP's mechanism, a package could be > created which overrides and implements the PEP's build strategy, > e.g. "pep778build" (assuming the PEP number is 778 as example). > > The dependency of a package on this pep778build package > would then signal the compatibility of the package with > the PEP's build mechanism. That would fail to be bootstrappable, since there is no programmatic API to get build requirements out of setuptools packages today without triggering easy-install. At a minimum we'd have to address that before we could consider utilising setup.py as the contract itself. > In summary: As long as the final result of a call to > "setup.py bdist_wheel" is a .whl file in dist/, pip should be > able to continue working as normal (and as it does today), > regardless of what "setup.py build" does under the hood. pip will be able to keep working if folk don't opt into the new system, because backwards compat with the 10's of thousands of distributions on PyPI is a big deal - I don't see that being deliberately broken ever. And, a setuptools_pepxxx adapter should be easy to write - in fact I plan to write a pbr_pepxxx adapter which will be nearly identical, as a proof of concept - for folk that want to allow easy-install to not be involved but still use setuptools. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Thu Dec 10 17:12:36 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 11 Dec 2015 11:12:36 +1300 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On 8 December 2015 at 09:14, Erik Bray wrote: > On Mon, Dec 7, 2015 at 2:40 PM, Paul Moore wrote: >> On 7 December 2015 at 18:58, Erik Bray wrote: >>> I wasn't able to produce this problem. 
Even with --no-binary >>> specified pip installs (by default) with >>> --single-version-externally-managed. My prototype implicitly disables >>> the --pip flag if --single-version-externally-managed was specified >>> (true to the purpose of that flag). >> >> Ah - that was the bit I was missing, the >> --single-version-externally-managed flag can be used to trigger >> ignoring --pip. >> >>> What *is* a problem is if --pip is in setup.cfg, and one invokes `pip >>> install --egg .`. I wasn't quite able to make that go into an >>> infinite loop, but it did invoke pip.main recursively, and stuff broke >>> on the second invocation for reasons not clear to me. >> >> Yeah, but honestly I don't think pip install --egg is that important a >> use case. I may be wrong (there's lots of ways people use pip that I >> know nothing of :-)) but as a starting point it might be OK just to >> say that at the same time as the --pip flag was introduced, "pip >> install --egg" was deprecated (and we explicitly document that pip >> install --egg is known to interact badly with setup.py --pip). > > I'd be fine with that too. IIRC pip install --egg was introduced in > part to work around problems with namespace packages. This doesn't > completely eliminate the need for that workaround, but it does reduce > it. Huh? No, my understanding was that it was introduced solely to support interop with folk using 'easy-install', and it's considered deprecated and delete-as-soon-as-practical. -Rob

From kevgliss at gmail.com Mon Dec 14 16:00:07 2015 From: kevgliss at gmail.com (Kevin Glisson) Date: Mon, 14 Dec 2015 21:00:07 +0000 Subject: [Distutils] Release/file cannot add file Message-ID: I am having trouble adding files to my package. It seems that no matter what I try I am always getting the error "file exists". This is after version bumps; the index lists no files at all. I have tried with both `setup.py upload` and `twine upload`.
The rest of the request and metadata for the release gets updated correctly. I believe this might have something to do with the fact that I took over the namespace from a different package. My package has never successfully been published to PyPI so I am not worried about breaking anyone's builds. Package in question: https://pypi.python.org/pypi/lemur Cheers, Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL:

From contact at ionelmc.ro Tue Dec 15 05:12:40 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 15 Dec 2015 12:12:40 +0200 Subject: [Distutils] Release/file cannot add file In-Reply-To: References: Message-ID: On Mon, Dec 14, 2015 at 11:00 PM, Kevin Glisson wrote: > I believe this might have something to do with the fact that I took over > the namespace from a different package. My package has never successfully > been published to PyPI so I am not worried about breaking anyone's builds. > > Package in question: > https://pypi.python.org/pypi/lemur > Did you have previous releases that were removed? https://pypi.python.org/pypi/lemur/json indicates you had a 0.2.1 release at least ...? PyPI doesn't allow re-uploading the same release. Use a different version number instead. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL:

From kevgliss at gmail.com Tue Dec 15 10:03:14 2015 From: kevgliss at gmail.com (Kevin Glisson) Date: Tue, 15 Dec 2015 15:03:14 +0000 Subject: [Distutils] Release/file cannot add file In-Reply-To: References: Message-ID: I've tried several things including bumps; the release always changes, but I've never successfully been able to add any files. On Tue, Dec 15, 2015, 2:13 AM Ionel Cristian Mărieș wrote: > > On Mon, Dec 14, 2015 at 11:00 PM, Kevin Glisson > wrote: > >> I believe this might have something to do with the fact that I took over >> the namespace from a different package. My package has never successfully
My package has never successfully >> been publish to pypi so I am not working about breaking anyone's builds. >> >> Package in question: >> https://pypi.python.org/pypi/lemur >> > > ?Did you have previous releases that were removed? > > https://pypi.python.org/pypi/lemur/json indicates you had a 0.2.1 release > at least ...? > > PyPI don't allow re-uploading the same release. Use different version > number instead. > > > > Thanks, > -- Ionel Cristian M?rie?, http://blog.ionelmc.ro > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Dec 15 10:20:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 15 Dec 2015 15:20:14 +0000 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing Message-ID: This is more a thought for something that would be good to include in Metadata 2.0, or whatever ends up taking its place. I was installing some packages on a new PC, that doesn't have a compiler. As I did so, I noticed a dependency on sqlalchemy fly by, and I thought "oh, that's going to fail", as it's always needed a compiler. But it didn't. I presume sqlalchemy has an optional dependency on something like a speedup module. What would be good is a way for packages to declare that they provide optional features like this, and for pip (and user code) to be able to introspect that data. The n"extras" feature in Metadata 2.0 is sort of like that, but in an "on request" form (I can install foo[speedups] and that has additional requirements, but the install will fail if they are not present). What I'd like to be able to do: 1. pip install sqlalchemy works, but shows a warning "optional feature speedups not installed - no C compiler" 2. A command to list any installed packages with optional features that aren't installed: $ pip options -l sqlalchemy speedups [C compiler] 3. 
A command to reinstall the currently installed version with new options pip install --add-options sqlalchemy[speedups] (Note that a plain pip install doesn't do this, as it won't reinstall. And --upgrade or --ignore-installed will install newer versions). The use case is the one I described: installing on a system with no compiler (and no wheels provided by the project) where I want to do the install for now, but know I'm missing some stuff, and at a later date when I get a compiler installed I want to update things to add what I would have got if I'd had the compiler in the first place. Thoughts? Is this something that could be practical (obviously it needs projects to *add* the metadata, but they'd need the facility to be there before they could do so)? Paul

From mmericke at gmail.com Tue Dec 15 11:37:32 2015 From: mmericke at gmail.com (Michael Merickel) Date: Tue, 15 Dec 2015 10:37:32 -0600 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: On Tue, Dec 15, 2015 at 9:20 AM, Paul Moore wrote: > This is more a thought for something that would be good to include in > Metadata 2.0, or whatever ends up taking its place. > > I was installing some packages on a new PC that doesn't have a > compiler. As I did so, I noticed a dependency on sqlalchemy fly by, > and I thought "oh, that's going to fail", as it's always needed a > compiler. But it didn't. I presume sqlalchemy has an optional > dependency on something like a speedup module. > > What would be good is a way for packages to declare that they provide > optional features like this, and for pip (and user code) to be able to > introspect that data. The "extras" feature in Metadata 2.0 is sort of > like that, but in an "on request" form (I can install foo[speedups] > and that has additional requirements, but the install will fail if > they are not present).
> It seems to me this would be easily accomplished by declaring some extras like "cext" as default-included and if the install fails someone can depend on "sqlalchemy[-cext]". The UI isn't quite as nice as your proposal but reuses existing machinery. > > What I'd like to be able to do: > > 1. pip install sqlalchemy works, but shows a warning "optional feature > speedups not installed - no C compiler" > Extras wouldn't give a nice message like this. The install would fail and the user would have to guess as to why and then opt out of the default extra. Perhaps some better error message could be displayed if the package failed to install and had a default extra included to show how to opt out. > 2. A command to list any installed packages with optional features > that aren't installed: > $ pip options -l > sqlalchemy speedups [C compiler] > It'd be nice to have a command to list the extras, I'm unaware of one right now. > 3. A command to reinstall the currently installed version with new options > pip install --add-options sqlalchemy[speedups] > (Note that a plain pip install doesn't do this, as it won't > reinstall. And --upgrade or --ignore-installed will install newer > versions). > This should be done by simply reinstalling the package via "pip install sqlalchemy[speedups]". I doubt you need an extra --add-options flag to compete with extras. - Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Dec 15 12:58:48 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 15 Dec 2015 09:58:48 -0800 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: This is very similar to what Debian calls "Recommends". 
A recommends relationship is like a soft version of Depends -- if A recommends B, then installing A will by default pull in B, but if B is unavailable or the user asks that B not be installed, then that's OK (rather than being an error like it would be for a full Depends). This is commonly used for cases where B is the docs for A, or an admin console for the server A installs, or the client library needed to access this server. It would also mostly cover your use case, with the addition of some commands to query for unsatisfied recommendations. As a general rule, I like the Debian strategy of trying to provide a few mechanisms like this that are general purpose and semantically simple (in the sense that it's barely different from the existing depends concept), instead of adding special case concepts like "optional features". -n On Dec 15, 2015 7:20 AM, "Paul Moore" wrote: > This is more a thought for something that would be good to include in > Metadata 2.0, or whatever ends up taking its place. > > I was installing some packages on a new PC, that doesn't have a > compiler. As I did so, I noticed a dependency on sqlalchemy fly by, > and I thought "oh, that's going to fail", as it's always needed a > compiler. But it didn't. I presume sqlalchemy has an optional > dependency on something like a speedup module. > > What would be good is a way for packages to declare that they provide > optional features like this, and for pip (and user code) to be able to > introspect that data. The "extras" feature in Metadata 2.0 is sort of > like that, but in an "on request" form (I can install foo[speedups] > and that has additional requirements, but the install will fail if > they are not present).
A command to list any installed packages with optional features > that aren't installed: > $ pip options -l > sqlalchemy speedups [C compiler] > 3. A command to reinstall the currently installed version with new options > pip install --add-options sqlalchemy[speedups] > (Note that a plain pip install doesn't do this, as it won't > reinstall. And --upgrade or --ignore-installed will install newer > versions). > > The use case is the one I described: installing on a system with no > compiler (and no wheels provided by the project) where I want to do > the install for now, but know I'm missing some stuff, and at a later > date when I get a compiler installed I want to update things to add > what I would have got if I'd had the compiler in the first place. > > Thoughts? Is this something that could be practical (obviously it > needs projects to *add* the metadata, but they'd need the facility to > be there before they could do so). > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Dec 15 13:30:45 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 15 Dec 2015 18:30:45 +0000 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: On 15 December 2015 at 16:37, Michael Merickel wrote: > It seems to me this would be easily accomplished by declaring some extras > like "cext" as default-included and if the install fails someone can depend on > "sqlalchemy[-cext]". The UI isn't quite as nice as your proposal but reuses > existing machinery. Hmm, so sqlalchemy says it provides an extra "speedups" (or "cext") by default, checks for a compiler, and removes that extra from what's installed if there's no compiler available? 
Not sure why anyone would depend on sqlalchemy[-cext] - they should always depend on sqlalchemy, as there's no functional difference between with speedups and without. And the user should never specify the extra, all they do is install a compiler and rebuild. >> What I'd like to be able to do: >> >> 1. pip install sqlalchemy works, but shows a warning "optional feature >> speedups not installed - no C compiler" > > Extras wouldn't give a nice message like this. The install would fail and > the user would have to guess as to why and then opt out of the default > extra. Perhaps some better error message could be displayed if the package > failed to install and had a default extra included to show how to opt out. But the install doesn't fail - it succeeds and works fine, just without some speedups. That's exactly what I want. In my particular case I was installing csvkit which depends on sqlalchemy. It doesn't (nor should it) say that it doesn't need the speedups, nor should I have to manually locate the specific dependency (from a list of many) and install it by hand before my install works. The current behaviour (pip install csvkit -> a working csvkit with no issues) is perfect. But if I later want to use SQLalchemy independently, or I find that a particular usage of csvkit is too slow, I want to know that there's a speedups module I can get by downloading a binary build or installing my own compiler. And I want to be able to install it transparently. >> 3. A command to reinstall the currently installed version with new options >> pip install --add-options sqlalchemy[speedups] >> (Note that a plain pip install doesn't do this, as it won't >> reinstall. And --upgrade or --ignore-installed will install newer >> versions). > > This should be done by simply reinstalling the package via "pip install > sqlalchemy[speedups]". I doubt you need an extra --add-options flag to > compete with extras. I guess you're saying add [speedups] as a way of requesting a rebuild? 
But if the build fails, would that remove sqlalchemy, or leave the existing build there? (I'd hope the latter). Won't that say "sqlalchemy is already installed"? (I've never used extras with pip, so I don't know). Also what if there's been a newer version of sqlalchemy released? Won't it get that one? I specifically want to say here "just reinstall the exact version I have here, but try again to include optional stuff that I didn't get last time". (In practice I don't really care much if I upgrade, so --upgrade or --ignore-installed is probably fine in reality). Anyway, all of this requires people to implement it, in pip and build tools, as well as projects to adopt it. So it's not really important that the details get thrashed out right now, just that we establish whether it's a practical scenario to support, and get a feel for how much work it would be for projects to adopt it (if it's too much, they won't, and the feature will end up unused). So the fact that extras might be able to support this is the main point here, not the details of how it would work - so thanks. Paul From mmericke at gmail.com Tue Dec 15 13:40:42 2015 From: mmericke at gmail.com (Michael Merickel) Date: Tue, 15 Dec 2015 12:40:42 -0600 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: On Tue, Dec 15, 2015 at 12:30 PM, Paul Moore wrote: > I guess you're saying add [speedups] as a way of requesting a rebuild? > But if the build fails, would that remove sqlalchemy, or leave the > existing build there? (I'd hope the latter). > Well in this world of wheels we aren't necessarily building anything right.. so there's no "rebuild" or "build". We are just unzipping a bdist and some optional deps. I've seen in the past that a recommended way of accomplishing this would be to create an sqlalchemy-cext package on pypi and have sqlalchemy depend on it. This is what I meant with the "cext": "sqlalchemy-cext" extra. 
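Michael's suggestion, expressed as the extras mapping it implies — the sqlalchemy-cext companion distribution is hypothetical here, not SQLAlchemy's actual packaging:

```python
# Hypothetical extras_require mapping for setup.py: the "cext" extra
# pulls in a separately-distributed compiled-extension package.
extras_require = {
    "cext": ["sqlalchemy-cext"],
}

# This would be passed to setuptools.setup(), e.g.
#   setup(name="sqlalchemy", extras_require=extras_require, ...)
# and consumers opting in would then depend on "sqlalchemy[cext]".
print(sorted(extras_require["cext"]))  # -> ['sqlalchemy-cext']
```

The point of the split is that the base package stays pure Python, while the extra names a distribution that can ship compiled wheels on its own schedule.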
Then people who wanted speedups (most people obviously) would depend on sqlalchemy[cext]. This example works entirely within the current framework. Where I'm at is that it'd be nice if it were extended to support "recommends" or "default extras" but have some way for the extra to not be installed on systems that couldn't handle it (via metadata cross-checked with system/python info). -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Dec 15 13:52:50 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 15 Dec 2015 18:52:50 +0000 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: On 15 December 2015 at 18:40, Michael Merickel wrote: > > Well in this world of wheels we aren't necessarily building anything right.. > so there's no "rebuild" or "build". We are just unzipping a bdist and some > optional deps. I've seen in the past that a recommended way of accomplishing > this would be to create an sqlalchemy-cext package on pypi and have > sqlalchemy depend on it. This is what I meant with the "cext": > "sqlalchemy-cext" extra. Then people who wanted speedups (most people > obviously) would depend on sqlalchemy[cext]. This example works entirely > within the current framework. If sqlalchemy[1] provided binary wheels for Windows, none of this would be needed. The speedups would be in the wheel, end of story - no-one is likely to ever deliberately to want to *not* have the speedups if they had the choice. I keep trying to set up private indexes of binary wheels to get round issues like this, but maintaining such an index always ends up being more work than is justified by the benefits. It's worth keeping that in mind, though - any proposed solution probably has to be less work for the relevant projects to adopt than providing Windows binary builds would be. [1] And pyyaml, and any other projects with optional C speedups... 
Paul From opensource at ronnypfannschmidt.de Tue Dec 15 13:54:49 2015 From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt) Date: Tue, 15 Dec 2015 19:54:49 +0100 Subject: [Distutils] Idea: Positive/negative extras in general and as replacement for features Message-ID: <567061F9.1040500@ronnypfannschmidt.de> Hello everyone, the talk about the sqlalchemy feature extra got me thinking: what if I could specify extras that were installed by default, but users could opt out? A concrete example I have in mind is the default set of pytest plugins. I would like to be able to externalize legacy support details of pytest without needing people to suddenly depend on pytest[recommended] instead of pytest to have what they know function as is. Instead I would like to be able to do something like a dependency on pytest[-nose,-unittest,-cache] to subtract the items; the extras declaration in setup.py would just include a + sign at the beginning. An elaborate concrete example could be

extras = {
    +cache: pytest-cache
    +lastfailed: pytest[+cache], pytest-lastfailed
    +looponfail: pytest[+lastfailed], pytest-looponfail
    +unittest: pytest-unittest
    +nose: pytest[+unittest], pytest-nose
}

Also, a dependency declaration using the + sign in the extras should not imply the default extras of the package, while usage of the - sign should. So depending on pytest[+unittest] would imply only the unittest support, but depending on pytest[-nose] would include all positive extras except for nose. Please note in particular the dependencies on other positive extras; those are put in so that a negative for unittest can imply that nose can't be sensibly used as well. If +nose instead depended directly on pytest-unittest, then excluding it would require an unreasonably tricky resolving algorithm with potential for lots of mistakes; instead, spelling out the direct dependency on positives/negatives can resolve inside of a package and still leave room for more outside of it. This is in particular relevant if a package with extras
is depended on twice in different packages, because in that case each of the dependents' requirements should add up to the combined set. There is also room for fleshing out algorithms for combining the positive/negative dependency sets, but I'll leave that for later. As an addition to that, later on there could be support for partial wheels, so features could be materialized as wheel packages with a special name, and build tools could fall back to making them from an sdist. As an example, there would be a sqlalchemy package as a source wheel, a sqlalchemy-cext package as Windows wheels, and pip would have to find a source distribution to compile the wheel package. -- Ronny From robertc at robertcollins.net Tue Dec 15 13:59:00 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 16 Dec 2015 07:59:00 +1300 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: On 16 December 2015 at 07:30, Paul Moore wrote: > On 15 December 2015 at 16:37, Michael Merickel wrote: >> It seems to me this would be easily accomplished by declaring some extras >> like "cext" as default-included and if the install fails someone can depend on >> "sqlalchemy[-cext]". The UI isn't quite as nice as your proposal but reuses >> existing machinery. > > Hmm, so sqlalchemy says it provides an extra "speedups" (or "cext") by > default, checks for a compiler, and removes that extra from what's > installed if there's no compiler available? > > Not sure why anyone would depend on sqlalchemy[-cext] - they should > always depend on sqlalchemy, as there's no functional difference > between with speedups and without. And the user should never specify > the extra, all they do is install a compiler and rebuild. > >>> What I'd like to be able to do: >>> >>> 1. pip install sqlalchemy works, but shows a warning "optional feature >>> speedups not installed - no C compiler" >> >> Extras wouldn't give a nice message like this.
The install would fail and >> the user would have to guess as to why and then opt out of the default >> extra. Perhaps some better error message could be displayed if the package >> failed to install and had a default extra included to show how to opt out. > > But the install doesn't fail - it succeeds and works fine, just > without some speedups. That's exactly what I want. In my particular > case I was installing csvkit which depends on sqlalchemy. It doesn't > (nor should it) say that it doesn't need the speedups, nor should I > have to manually locate the specific dependency (from a list of many) > and install it by hand before my install works. The current behaviour > (pip install csvkit -> a working csvkit with no issues) is perfect. > > But if I later want to use SQLalchemy independently, or I find that a > particular usage of csvkit is too slow, I want to know that there's a > speedups module I can get by downloading a binary build or installing > my own compiler. And I want to be able to install it transparently. > >>> 3. A command to reinstall the currently installed version with new options >>> pip install --add-options sqlalchemy[speedups] >>> (Note that a plain pip install doesn't do this, as it won't >>> reinstall. And --upgrade or --ignore-installed will install newer >>> versions). >> >> This should be done by simply reinstalling the package via "pip install >> sqlalchemy[speedups]". I doubt you need an extra --add-options flag to >> compete with extras. > > I guess you're saying add [speedups] as a way of requesting a rebuild? > But if the build fails, would that remove sqlalchemy, or leave the > existing build there? (I'd hope the latter). > > Won't that say "sqlalchemy is already installed"? (I've never used > extras with pip, so I don't know). Also what if there's been a newer > version of sqlalchemy released? Won't it get that one? 
> > I specifically want to say here "just reinstall the exact version I > have here, but try again to include optional stuff that I didn't get > last time". (In practice I don't really care much if I upgrade, so > --upgrade or --ignore-installed is probably fine in reality). > > Anyway, all of this requires people to implement it, in pip and build > tools, as well as projects to adopt it. So it's not really important > that the details get thrashed out right now, just that we establish > whether it's a practical scenario to support, and get a feel for how > much work it would be for projects to adopt it (if it's too much, they > won't, and the feature will end up unused). So the fact that extras > might be able to support this is the main point here, not the details > of how it would work - so thanks.

I'm not sure that extras would support it cleanly. I agree that it is a common use case; in general I'd say

1) consumers (users and depending projects) shouldn't need to know about accelerators
2) some environments will need to be able to exclude them [known to build badly]
3) some consumers will need to be able to mandate them [either using an accelerator-only feature, or their thing is known to be infeasible without them]
4) there needs to be a means to get accelerators if they didn't install first time around
5) projects with accelerators shouldn't be forced to split the accelerators into separate projects

Some issues with reusing extras are:
- extras refer to things in the dependency graph, but as distributions are the installable things and the graph nodes are distributions, foo[fast] is - in widespread deployment - entirely and only a list of additional distributions.
- there's no concept of 'default extra', and there is no clear path for bringing it in compatibly, at least so far
- we haven't worked through the UI implications about which end of the relation this should be configured: should consumers be specifying them, or providers?
- negative operators on extras are as yet undefined, and due to the dependencies of an install being a graph, not a tree, a naive definition is likely very hard to use IMO

Recommends and suggests are an interesting way of modelling this, and it's possible we don't need an exclude relation - rather, users should blacklist them globally in the target environment somehow, which would contain that particular complexity. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From mmericke at gmail.com Tue Dec 15 15:18:18 2015 From: mmericke at gmail.com (Michael Merickel) Date: Tue, 15 Dec 2015 14:18:18 -0600 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: FWIW here is the original SQLAlchemy thread from last year that I was talking about when suggesting the external dependency on a sqlalchemy-cext package. http://thread.gmane.org/gmane.comp.python.distutils.devel/21020 On Tue, Dec 15, 2015 at 12:59 PM, Robert Collins wrote: > On 16 December 2015 at 07:30, Paul Moore wrote: > > On 15 December 2015 at 16:37, Michael Merickel > wrote: > >> It seems to me this would be easily accomplished by declaring some > extras > >> like "cext" as default-included and if the install fails someone can > depend on > >> "sqlalchemy[-cext]". The UI isn't quite as nice as your proposal but > reuses > >> existing machinery. > > > > Hmm, so sqlalchemy says it provides an extra "speedups" (or "cext") by > > default, checks for a compiler, and removes that extra from what's > > installed if there's no compiler available? > > > > Not sure why anyone would depend on sqlalchemy[-cext] - they should > > always depend on sqlalchemy, as there's no functional difference > > between with speedups and without. And the user should never specify > > the extra, all they do is install a compiler and rebuild. > > > >>> What I'd like to be able to do: > >>> > >>> 1.
pip install sqlalchemy works, but shows a warning "optional feature > >>> speedups not installed - no C compiler" > >> > >> Extras wouldn't give a nice message like this. The install would fail > and > >> the user would have to guess as to why and then opt out of the default > >> extra. Perhaps some better error message could be displayed if the > package > >> failed to install and had a default extra included to show how to opt > out. > > > > But the install doesn't fail - it succeeds and works fine, just > > without some speedups. That's exactly what I want. In my particular > > case I was installing csvkit which depends on sqlalchemy. It doesn't > > (nor should it) say that it doesn't need the speedups, nor should I > > have to manually locate the specific dependency (from a list of many) > > and install it by hand before my install works. The current behaviour > > (pip install csvkit -> a working csvkit with no issues) is perfect. > > > > But if I later want to use SQLalchemy independently, or I find that a > > particular usage of csvkit is too slow, I want to know that there's a > > speedups module I can get by downloading a binary build or installing > > my own compiler. And I want to be able to install it transparently. > > > >>> 3. A command to reinstall the currently installed version with new > options > >>> pip install --add-options sqlalchemy[speedups] > >>> (Note that a plain pip install doesn't do this, as it won't > >>> reinstall. And --upgrade or --ignore-installed will install newer > >>> versions). > >> > >> This should be done by simply reinstalling the package via "pip install > >> sqlalchemy[speedups]". I doubt you need an extra --add-options flag to > >> compete with extras. > > > > I guess you're saying add [speedups] as a way of requesting a rebuild? > > But if the build fails, would that remove sqlalchemy, or leave the > > existing build there? (I'd hope the latter). > > > > Won't that say "sqlalchemy is already installed"? 
(I've never used > > extras with pip, so I don't know). Also what if there's been a newer > > version of sqlalchemy released? Won't it get that one? > > > > I specifically want to say here "just reinstall the exact version I > > have here, but try again to include optional stuff that I didn't get > > last time". (In practice I don't really care much if I upgrade, so > > --upgrade or --ignore-installed is probably fine in reality). > > > > Anyway, all of this requires people to implement it, in pip and build > > tools, as well as projects to adopt it. So it's not really important > > that the details get thrashed out right now, just that we establish > > whether it's a practical scenario to support, and get a feel for how > > much work it would be for projects to adopt it (if it's too much, they > > won't, and the feature will end up unused). So the fact that extras > > might be able to support this is the main point here, not the details > > of how it would work - so thanks. > > I'm not sure that extras would support it cleanly. > > I agree that it is a common use case; in general I'd say > > 1) consumers (users and depending projects) shouldn't need to know > about accelerators > 2) some environments will need to be able to exclude them [known to build > badly] > 3) some consumers will need to be able to mandate them [either using > an acclerator only feature, or their thing is known to be infeasible > without them] > 4) there needs to be a means to get accelerators if they didn't > install first time around > 5) projects with accelerators shouldn't be forced to split the > accelerators into separate projects > > Some issues with reusing extras are: > - extras refer to things in the dependency graph, but as > distributions are the installable things and the graph nodes are > distributions, foo[fast] is - in widespread deployment - entirely and > only a list of additional distributions. 
> - there's no concept of 'default extra', and there is no clear path > for bringing it in compatibly, at least so far > - we haven't worked through the ui implications about which end of > the relation this should be configured: should consumers be specifying > them, or providers? > - negative operators on extras are as yet undefined, and due to the > dependencies of an install being a graph, not a tree, a naive > definition is likely very hard to use IMO > > Recommends and suggests are an interesting way of modelling this, and > its possible we don't need an exclude relation- rather users should > blacklist them globally in the target environment somehow, which would > contain that partcular complexity. > > -Rob > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Tue Dec 15 23:56:14 2015 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 15 Dec 2015 20:56:14 -0800 Subject: [Distutils] workflow recommendations to update requirements.txt Message-ID: Hi, I have a development workflow question I was wondering if people on this list had a recommended solution for. Say you're working on a web application that you deploy using a requirements.txt file. And say you have a set of "abstract dependencies" that your application depends on. What are some convenient ways of storing your abstract dependencies in source control and periodically generating an updated requirements file from that information (e.g. when your dependencies come out with new versions)? The main idea that occurs to me is making a setup.py for the purposes of representing your abstract dependencies (e.g. using "install_requires," etc), creating a new virtualenv, running "pip install .", and then "pip freeze." 
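A sketch of the filtering step that approach needs afterwards — dropping the application's own entry from the freeze output. The helper and the "myapp" name are illustrative; pip itself has no such option:

```python
# Keep only the dependency lines from `pip freeze` output, dropping the
# application's own "name==version" entry.
def dependencies_only(freeze_output, app_name):
    deps = []
    for line in freeze_output.strip().splitlines():
        # Freeze lines look like "requests==2.8.1".
        if line.split("==")[0].lower() != app_name.lower():
            deps.append(line)
    return deps

frozen = "myapp==0.1\nrequests==2.8.1\nsix==1.10.0\n"
print(dependencies_only(frozen, "MyApp"))  # -> ['requests==2.8.1', 'six==1.10.0']
```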
One problem with this approach is that the pip freeze output includes an entry for the setup.py application itself, when the output should only include the _dependencies_ of the application and not the application itself. It also seems clunky to me to create a virtualenv and install dependencies only for the purposes of computing dependencies. Thanks for any help or suggestions. --Chris From robertc at robertcollins.net Wed Dec 16 00:43:32 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 16 Dec 2015 18:43:32 +1300 Subject: [Distutils] workflow recommendations to update requirements.txt In-Reply-To: References: Message-ID: You might find https://github.com/openstack/requirements/blob/master/openstack_requirements/cmds/generate.py useful. Basically it takes an open-ended (or partly so) set of requirements, expressed as in requirements.txt format, and generates an exact-match requirements.txt file as output. So, have one file, say 'dependencies.txt', which is open ended, and your requirements.txt which is exact, then run generate from time to time. There are other such tools around, but I forget their names ;) -Rob On 16 December 2015 at 17:56, Chris Jerdonek wrote: > Hi, > > I have a development workflow question I was wondering if people on > this list had a recommended solution for. > > Say you're working on a web application that you deploy using a > requirements.txt file. And say you have a set of "abstract > dependencies" that your application depends on. > > What are some convenient ways of storing your abstract dependencies in > source control and periodically generating an updated requirements > file from that information (e.g. when your dependencies come out with > new versions)? > > The main idea that occurs to me is making a setup.py for the purposes > of representing your abstract dependencies (e.g. using > "install_requires," etc), creating a new virtualenv, running "pip > install .", and then "pip freeze."
> > One problem with this approach is that the pip freeze output includes > an entry for the setup.py application itself, when the output should > only include the _dependencies_ of the application and not the > application itself. It also seems clunky to me to create a virtualenv > and install dependencies only for the purposes of computing > dependencies. > > Thanks for any help or suggestions. > > --Chris > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -- Robert Collins Distinguished Technologist HP Converged Cloud From glyph at twistedmatrix.com Wed Dec 16 01:40:02 2015 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Tue, 15 Dec 2015 22:40:02 -0800 Subject: [Distutils] workflow recommendations to update requirements.txt In-Reply-To: References: Message-ID: > On Dec 15, 2015, at 8:56 PM, Chris Jerdonek wrote: > > Hi, > > I have a development workflow question I was wondering if people on > this list had a recommended solution for. > > Say you're working on a web application that you deploy using a > requirements.txt file. And say you have a set of "abstract > dependencies" that your application depends on. > > What are some convenient ways of storing your abstract dependencies in > source control and periodically generating an updated requirements > file from that information (e.g. when your dependencies come out with > new versions)? > > The main idea that occurs to me is making a setup.py for the purposes > of representing your abstract dependencies (e.g. using > "install_requires," etc), creating a new virtualenv, running "pip > install .", and then "pip freeze." > > One problem with this approach is that the pip freeze output includes > an entry for the setup.py application itself, when the output should > only include the _dependencies_ of the application and not the > application itself. 
It also seems clunky to me to create a virtualenv > and install dependencies only for the purposes of computing > dependencies. > > Thanks for any help or suggestions. This is what I'm doing right now (occasionally manually curating the output of `pip freeze`) but I have heard good things about https://github.com/nvie/pip-tools/ and I intend to investigate it. As I understand it, pip-compile is the tool you want. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Dec 16 05:59:38 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 16 Dec 2015 20:59:38 +1000 Subject: [Distutils] Metadata 2.0: Warning if optional features are missing In-Reply-To: References: Message-ID: On 16 December 2015 at 04:59, Robert Collins wrote: > Some issues with reusing extras are: > - extras refer to things in the dependency graph, but as > distributions are the installable things and the graph nodes are > distributions, foo[fast] is - in widespread deployment - entirely and > only a list of additional distributions. > - there's no concept of 'default extra', and there is no clear path > for bringing it in compatibly, at least so far > - we haven't worked through the ui implications about which end of > the relation this should be configured: should consumers be specifying > them, or providers? > - negative operators on extras are as yet undefined, and due to the > dependencies of an install being a graph, not a tree, a naive > definition is likely very hard to use IMO One of the other challenges posed by the current extras system is how best to map it to alternate packaging systems. It *can* be done, but it's not particularly clean (the main approaches I'm aware of involve defining a meta-package for each extra, which is rather horrible, or just translating them to Recommends or Suggests and accepting the loss of granularity).
> Recommends and suggests are an interesting way of modelling this, and > it's possible we don't need an exclude relation - rather users should > blacklist them globally in the target environment somehow, which would > contain that particular complexity. I doubt it will surprise anyone to learn I'd be a fan of aiming to learn from Linux distro experience with describing complex dependency graphs on this front :) The RPM folks recently decided the Debian design was essentially a good one, so the new(ish) weak dependency support in that ecosystem has a lot of similarities to Debian's approach: http://www.rpm.org/wiki/PackagerDocs/DependenciesOverview#Weakdependencies Cheers, Nick. P.S. I had to check the actual degree of similarity myself, so for reference, the relevant Debian docs link is at https://www.debian.org/doc/debian-policy/ch-relationships.html#s-binarydeps -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Dec 16 06:11:29 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 16 Dec 2015 21:11:29 +1000 Subject: [Distutils] workflow recommendations to update requirements.txt In-Reply-To: References: Message-ID: On 16 December 2015 at 16:40, Glyph Lefkowitz wrote: > On Dec 15, 2015, at 8:56 PM, Chris Jerdonek > wrote: > Thanks for any help or suggestions. > > > This is what I'm doing right now (occasionally manually curating the output > of `pip freeze`) but I have heard good things about > https://github.com/nvie/pip-tools/ and I intend to investigate it. As I > understand it, pip-compile is the tool you want.
I just ran across pip-tools recently myself, and while I haven't actually tried it out yet, I think the design makes a lot of sense for the VCS-based deployment case: * you write a requirements.in file with your direct dependencies (including any pinning required for API compatibility) * pip-compile turns that into a requirements.txt that pins all your dependencies to their latest versions * pip-sync makes a virtualenv *exactly* match a requirements.txt file (installing, uninstalling, upgrading and downgrading as needed) That makes "upgrade dependencies" (by rerunning pip-compile) a clearly distinct operation from "deploy current dependencies" (by using pip-sync in an existing environment, or by installing into a fresh environment based on the generated requirements.txt). I'm not sure it makes as much sense in the case where the thing you're working on is itself a distributable Python package with its own setup.py - it seems like you'd end up duplicating information between setup.py and requirements.in. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chris.jerdonek at gmail.com Wed Dec 16 16:44:11 2015 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Wed, 16 Dec 2015 13:44:11 -0800 Subject: [Distutils] workflow recommendations to update requirements.txt In-Reply-To: References: Message-ID: On Wed, Dec 16, 2015 at 3:11 AM, Nick Coghlan wrote: > On 16 December 2015 at 16:40, Glyph Lefkowitz wrote: >> On Dec 15, 2015, at 8:56 PM, Chris Jerdonek >> wrote: >> Thanks for any help or suggestions. >> >> >> This is what I'm doing right now (occasionally manually curating the output >> of `pip freeze`) but I have heard good things about >> https://github.com/nvie/pip-tools/ and I intend to investigate it. As I >> understand it, pip-compile is the tool you want.
> > I just ran across pip-tools recently myself, and while I haven't > actually tried it out yet, I think the design makes a lot of sense for > the VCS-based deployment case: Thanks Glyph and Nick for the suggestion to use pip-compile. It worked for me the first time with no issues. The only thing I haven't decided on with that workflow is how to manage potentially more than one requirements file for different combinations of optional dependencies. With the setup.py approach, you can use "extras." It looks like for pip-compile you can accomplish this by having multiple *.in files (an additional one for each "extra"), and invoking pip-compile with different combinations of these files. --Chris > > * you write a requirements.in file with your direct dependencies > (including any pinning required for API compatibility) > * pip-compile turns that into a requirements.txt that pins all your > dependencies to their latest versions > * pip-sync makes a virtualenv *exactly* match a requirements.txt file > (installing, uninstalling, upgrading and downgrading as needed) > > That makes "upgrade dependencies" (by rerunning pip-compile) a clearly > distinct operation from "deploy current dependencies" (by using > pip-sync in an existing environment, or by installing into a fresh > environment based on the generated requirements.txt). > > I'm not sure it makes as much sense in the case where the thing you're > working on is itself a distributable Python package with its own > setup.py - it seems like you'd end up duplicating information between > setup.py and requirements.in. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From carlos.barera at gmail.com Thu Dec 17 12:27:34 2015 From: carlos.barera at gmail.com (Carlos Barera) Date: Thu, 17 Dec 2015 19:27:34 +0200 Subject: [Distutils] pypi simple index Inbox x Message-ID: Hi, I'm using install_requires in setup.py to specify a specific package my project is dependent on.
When running python setup.py install, apparently the simple index is used as an older package is taken from pypi. While in https://pypi.python.org/pypi, there's a newer package. When installing directly using pip, the latest package is installed successfully. I noticed that the new package is only available as a wheel and older versions of setuptools won't install wheels for install_requires. However, upgrading setuptools didn't help. Several questions: 1. What's the difference between the pypi simple index and the general pypi index? 2. Why is setup.py defaulting to the simple index? 3. How can I make the setup.py triggered install use the main pypi index instead of simple Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Dec 17 14:52:15 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 18 Dec 2015 08:52:15 +1300 Subject: [Distutils] pypi simple index Inbox x In-Reply-To: References: Message-ID: As I answered to your python-dev post, this is due to easy-install not supporting wheel. The fix is easy - don't use easy-install. -Rob On 18 December 2015 at 06:27, Carlos Barera wrote: > Hi, > > I'm using install_requires in setup.py to specify a specific package my > project is dependent on. > When running python setup.py install, apparently the simple index is used > as an older package is taken from pypi. While in > https://pypi.python.org/pypi, there's a newer package. > When installing directly using pip, the latest package is installed > successfully. > I noticed that the new package is only available as a wheel and older > versions of setuptools won't install wheels for install_requires. > However, upgrading setuptools didn't help. > > Several questions: > 1. What's the difference between the pypi simple index and the general > pypi index? > 2. Why is setup.py defaulting to the simple index? > 3.
How can I make the setup.py triggered install use the main pypi index > instead of simple > > Thanks! > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- Robert Collins Distinguished Technologist HP Converged Cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlos.barera at gmail.com Fri Dec 18 04:09:50 2015 From: carlos.barera at gmail.com (Carlos Barera) Date: Fri, 18 Dec 2015 11:09:50 +0200 Subject: [Distutils] cached index url Message-ID: Hi, In the past I installed devpi locally on my laptop to serve as a local mirror of pypi and for testing of the distribution and install of packages. I stopped using it. But now, seems like the index url of my local devpi server is still cached somewhere. I thought that the index url should be found in pip.conf but I don't seem to have any pip.conf anywhere on my Mac. However, running python setup.py install fails as it's still trying to access my old devpi server for some reason. Questions: 1. I guess it must be cached some place else. Any idea where? 2. Why don't I have any pip.conf on my Mac Regards, Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From xav.fernandez at gmail.com Fri Dec 18 07:59:21 2015 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Fri, 18 Dec 2015 13:59:21 +0100 Subject: [Distutils] cached index url In-Reply-To: References: Message-ID: Hello, python -c "from pip.baseparser import ConfigOptionParser;print(ConfigOptionParser(name='fooooo').get_config_files())" could help you. Tested on version 7.1.2. Regards, Xavier On Fri, Dec 18, 2015 at 10:09 AM, Carlos Barera wrote: > Hi, > > In the past I installed devpi locally on my laptop to serve as a local > mirror of pypi and for testing of the distribution and install of packages. > > I stopped using it. 
> But now, seems like the index url of my local devpi server is still cached > somewhere. > I thought that the index url should be found in pip.conf but I don't seem > to have any pip.conf anywhere on my Mac. > > However, running python setup.py install fails as it's still trying to > access my old devpi server for some reason. > > Questions: > 1. I guess it must be cached some place else. Any idea where? > 2. Why don't I have any pip.conf on my Mac > > > Regards, > Carlos > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomnyberg at gmail.com Sat Dec 19 22:15:04 2015 From: tomnyberg at gmail.com (Thomas Nyberg) Date: Sat, 19 Dec 2015 19:15:04 -0800 Subject: [Distutils] Trouble using setuptools to build separate c++ extension Message-ID: <56761D38.5010409@gmail.com> Hello I'm having trouble understanding the right way to build a c++ module using setuptools. I've been reading the docs, but I'm confused where I should be putting my build options. Everything builds fine on its own. I have my sources in src/ and my headers in include/. My first problem is that I'm having trouble figuring out where to put my build flags. Here is the Makefile I'm currently using: -------- srcs=$(wildcard *.cpp) srcs+=$(wildcard src/*.cpp) objs=$(patsubst %.cpp,%.o,$(srcs)) cc=g++ ccflags=-std=c++11 -g -O3 -fPIC includes=-I. -I./include/ -I/usr/include/python2.7/ -I/usr/include/boost libflags=-L. -L/usr/lib/x86_64-linux-gnu ldflags= -shared -Wl,--export-dynamic patent_computer_cpp.so: $(objs) $(cc) $(libflags) $(ldflags) $(objs) -o patent_computer_cpp.so -lboost_python -lpython2.7 %.o:%.cpp $(cc) $(ccflags) $(includes) -c -o $@ $< -------- Unfortunately I can't post the sources, but they compile fine to produce the `patent_computer_cpp.so` file which can be imported as a module. 
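A hedged sketch of how the Makefile's flags might map onto setuptools.Extension (the module name, paths, flags, and libraries are all taken from the Makefile above; this is untested against the real sources):

```python
# Sketch only: the Makefile's compile/link options expressed as a
# setuptools Extension. distutils adds the Python include directory
# and -fPIC itself, so those are omitted here.
from glob import glob
from setuptools import Extension

ext = Extension(
    "patent_computer_cpp",
    sources=glob("*.cpp") + glob("src/*.cpp"),
    include_dirs=[".", "./include", "/usr/include/boost"],
    extra_compile_args=["-std=c++11", "-g", "-O3"],
    library_dirs=["/usr/lib/x86_64-linux-gnu"],
    libraries=["boost_python"],
)
```

Passing `ext_modules=[ext]` to setup() and running `python setup.py build_ext --inplace` should then reproduce the Makefile's build without touching the CFLAGS or OPT environment variables.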
Maybe I should also point out that I'm using boost-python (I don't think this is the issue though). I just can't figure out how to get setuptools.Extension to use these build flags. I've seen recommendations online saying that I should set CFLAGS as an environment variable and set OPT='' as an environment variable as well, but this just feels wrong given the simplicity of my setup. (Besides the shared object doesn't seem to compile correctly in this case.) I've tried using the extra_compile_args option in setup.py, but that fails. Is there a way to avoid setting environment variables like this or is this the accepted way to build this kind of software? Am I missing some obvious docs somewhere? Thanks for any help. Cheers, Thomas From erik.m.bray at gmail.com Wed Dec 23 12:35:54 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Wed, 23 Dec 2015 12:35:54 -0500 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: On Thu, Dec 10, 2015 at 5:12 PM, Robert Collins wrote: > On 8 December 2015 at 09:14, Erik Bray wrote: >> On Mon, Dec 7, 2015 at 2:40 PM, Paul Moore wrote: >>> On 7 December 2015 at 18:58, Erik Bray wrote: >>>> I wasn't able to produce this problem. Even with --no-binary >>>> specified pip installs (by default) with >>>> --single-version-externally-managed. My prototype implicitly disables >>>> the --pip flag if --single-version-externally-managed was specified >>>> (true to the purpose of that flag). >>> >>> Ah - that was the bit I was missing, the >>> --single-version-externally-managed flag can be used to trigger >>> ignoring --pip. >>> >>>> What *is* a problem is if --pip is in setup.cfg, and one invokes `pip >>>> install --egg .`. 
I wasn't quite able to make that go into an >>>> infinite loop, but it did invoke pip.main recursively, and stuff broke >>>> on the second invocation for reasons not clear to me. >>> >>> Yeah, but honestly I don't think pip install --egg is that important a >>> use case. I may be wrong (there's lots of ways people use pip that I >>> know nothing of :-)) but as a starting point it might be OK just to >>> say that at the same time as the --pip flag was introduced, "pip >>> install --egg" was deprecated (and we explicitly document that pip >>> install --egg is known to interact badly with setup.py --pip). >> >> I'd be fine with that too. IIRC pip install --egg was introduced in >> part to work around problems with namespace packages. This doesn't >> completely eliminate the need for that workaround, but it does reduce >> it. > > Huh? No, my understanding was that it was introduced solely to support > interop with folk using 'easy-install', and its considered deprecated > and delete-as-soon-as-practical. The original issue that motivated it did have to do with (lack of) interoperability of different ways namespace packages are implemented: https://github.com/pypa/pip/issues/3 The fact that it introduced general backwards-compat for easy-install-like installation was a side "benefit", useful I'm sure to a few people. But otherwise as you say, was intended to be deleted as soon as practical. Erik From robertc at robertcollins.net Wed Dec 23 14:21:59 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 24 Dec 2015 08:21:59 +1300 Subject: [Distutils] setup.py install using pip In-Reply-To: References: <5E048C95-9CC3-4C1A-88BA-428126EE5EBD@ronnypfannschmidt.de> <3CEDD7D1-9678-4373-BC37-E2557C195945@ronnypfannschmidt.de> Message-ID: Thanks for digging that up. 
-Rob On 24 December 2015 at 06:35, Erik Bray wrote: > On Thu, Dec 10, 2015 at 5:12 PM, Robert Collins > wrote: >> On 8 December 2015 at 09:14, Erik Bray wrote: >>> On Mon, Dec 7, 2015 at 2:40 PM, Paul Moore wrote: >>>> On 7 December 2015 at 18:58, Erik Bray wrote: >>>>> I wasn't able to produce this problem. Even with --no-binary >>>>> specified pip installs (by default) with >>>>> --single-version-externally-managed. My prototype implicitly disables >>>>> the --pip flag if --single-version-externally-managed was specified >>>>> (true to the purpose of that flag). >>>> >>>> Ah - that was the bit I was missing, the >>>> --single-version-externally-managed flag can be used to trigger >>>> ignoring --pip. >>>> >>>>> What *is* a problem is if --pip is in setup.cfg, and one invokes `pip >>>>> install --egg .`. I wasn't quite able to make that go into an >>>>> infinite loop, but it did invoke pip.main recursively, and stuff broke >>>>> on the second invocation for reasons not clear to me. >>>> >>>> Yeah, but honestly I don't think pip install --egg is that important a >>>> use case. I may be wrong (there's lots of ways people use pip that I >>>> know nothing of :-)) but as a starting point it might be OK just to >>>> say that at the same time as the --pip flag was introduced, "pip >>>> install --egg" was deprecated (and we explicitly document that pip >>>> install --egg is known to interact badly with setup.py --pip). >>> >>> I'd be fine with that too. IIRC pip install --egg was introduced in >>> part to work around problems with namespace packages. This doesn't >>> completely eliminate the need for that workaround, but it does reduce >>> it. >> >> Huh? No, my understanding was that it was introduced solely to support >> interop with folk using 'easy-install', and its considered deprecated >> and delete-as-soon-as-practical. 
> > The original issue that motivated it did have to do with (lack of) > interoperability of different ways namespace packages are implemented: > > https://github.com/pypa/pip/issues/3 > > The fact that it introduced general backwards-compat for > easy-install-like installation was a side "benefit", useful I'm sure > to a few people. But otherwise as you say, was intended to be deleted > as soon as practical. > > Erik -- Robert Collins Distinguished Technologist HP Converged Cloud From nalani at fea.st Tue Dec 22 08:51:04 2015 From: nalani at fea.st (KM) Date: Tue, 22 Dec 2015 14:51:04 +0100 Subject: [Distutils] Building and installing packages on a (unix) system lacking network accesss Message-ID: <56795548.4090202@fea.st> Greetings distutils-sig, I have a project with an autogenerated structure - that is, I ran a "helper" application which creates a directory structure and a setup.py for me. I am trying to build this package in a virtualenv on an isolated machine, necessitating the step of downloading all the prerequisite packages and making them available. I have done this and I can successfully install most of the prerequisites by passing "--no-index" and "-f " to pip, making my full command-line: pip install -e . --no-index -f This works up until a certain point, where pip (or something launched by pip) tries to download from the internet. I tried adding ''--global-option '--no-index -f '" to my options, but this failed. I get the error: option --no-index -f not recognized. This occurs right after "Running setup.py install for TurboGears2" in the pip output and after it has installed a number of other packages. [Using --global-option was a shot in the dark, after all]. The package it is looking for is pytz. It appears to not be finding the package when it looks for it the first time - i.e., in the build log, it doesn't announce that it has found the package, it just moves on to the next package.
The system is a FreeBSD 10.2 system, vanilla except for some packages like bash, python2.7 and well openjdk which installs a bunch of X stuff, unfortunately. It runs as a VM under VirtualBox (latest version) on Windows 8 (kept "up-to-date" but on version 8 not 8.1). The pip in use is 7.0.3 under python 2.7 (because that is the latest version of pip available in the FreeBSD ports). I am hoping for some feedback or information that will allow me to install my application with all of its dependencies downloaded from a locally-available url or directory. Does anyone have any suggestions for me? KM From carlos.barera at gmail.com Wed Dec 23 17:23:07 2015 From: carlos.barera at gmail.com (Carlos Barera) Date: Thu, 24 Dec 2015 00:23:07 +0200 Subject: [Distutils] Reinstall with pip Message-ID: Hi, I have an automation task that installs a python package from within the project dir using: pip install . What's the right way to reinstall the same package (same version)? --upgrade + --force-reinstall ? Is --no-deps recommended ? How about uninstalling and reinstalling ? Cheers, Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Wed Dec 23 17:59:07 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Thu, 24 Dec 2015 09:59:07 +1100 Subject: [Distutils] Building and installing packages on a (unix) system lacking network accesss References: <56795548.4090202@fea.st> Message-ID: <85lh8kzs90.fsf@benfinney.id.au> KM writes: > This works up until a certain point, where pip (or something launched > by pip) tries to download from the internet. I tried adding > ''--global-option '--no-index -f '" to my options That specifies a *single* option to be passed to Distutils, containing spaces: ‘--no-index -f ’. > but this failed. I get the error: option --no-index -f not > recognized. Right, there is no ‘--no-index -f ’ option recognised by Distutils. The ‘--global-option’
option takes a single argument, which it passes to the ‘setup.py’ invocation. If you want multiple options passed that way, I think you need to use ‘--global-option’ several times. Once you use that, though, you'll find that Distutils also doesn't recognise ‘--no-index’ nor ‘--find-links’ options. What are you hoping those will do? -- \ “Pinky, are you pondering what I'm pondering?” “I think so, | `\ Brain, but if they called them ‘Sad Meals’, kids wouldn't buy | _o__) them!” -- _Pinky and The Brain_ | Ben Finney From glyph at twistedmatrix.com Wed Dec 23 23:23:42 2015 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 23 Dec 2015 20:23:42 -0800 Subject: [Distutils] Building and installing packages on a (unix) system lacking network accesss In-Reply-To: <56795548.4090202@fea.st> References: <56795548.4090202@fea.st> Message-ID: > On Dec 22, 2015, at 5:51 AM, KM wrote: > > Greetings distutils-sig, > > I have a project with an autogenerated structure - that is, I ran a "helper" application which creates a directory structure and a setup.py for me. I am trying to build this package in a virtualenv on an isolated machine, necessiating the step of downloading all the prerequisite packages and making them available. I have done this and I can successfully install most of the prerequisites by passing "--no-index" and "-f " to pip, making my full command-line: > > pip install -e . --no-index -f > > This works up until a certain point, where pip (or something launched by pip) tries to download from the internet. [snip] This is in fact the entire point of saying --no-index; "don't download stuff from the internet". So it's working as designed. You need to make sure all your dependencies are available before you run that `pip install` command. Have you? > I am hoping for some feedback or information that will allow me to install my application with all of its dependencies downloaded from a locally-available url or directory. Does anyone have any suggestions for me?
You're doing this in vaguely the right way with `pip install -f --no-index`. As Ben Finney already pointed out, these are not distutils options so the whole thing with --global-option is a complete red herring, give up on that :-). In fact, as stated, your example works just fine: $ mkdir offline_stuff $ pip wheel --wheel-dir offline_stuff pytz Collecting pytz Saved ./offline_stuff/pytz-2015.7-py2.py3-none-any.whl Skipping pytz, due to already being wheel. $ mktmpenv ; cd - New python executable in tmp-a3e6ab08e84f351d/bin/python2.7 Also creating executable in tmp-a3e6ab08e84f351d/bin/python Installing setuptools, pip, wheel...done. virtualenvwrapper.user_scripts creating /Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/predeactivate virtualenvwrapper.user_scripts creating /Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/postdeactivate virtualenvwrapper.user_scripts creating /Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/preactivate virtualenvwrapper.user_scripts creating /Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/postactivate virtualenvwrapper.user_scripts creating /Users/glyph/.virtualenvs/tmp-a3e6ab08e84f351d/bin/get_env_details This is a temporary environment. It will be deleted when you run 'deactivate'. /Users/glyph (tmp-a3e6ab08e84f351d)$ pip install -f offline_stuff --no-index pytz Ignoring indexes: https://pypi.python.org/simple Collecting pytz Installing collected packages: pytz Successfully installed pytz-2015.7 (tmp-a3e6ab08e84f351d)$ python -c 'import pytz; print(pytz)' (tmp-a3e6ab08e84f351d)$ So if you could please provide a sample `setup.py` or sample `requirements.txt`, along with exactly what unexpected output you got, it would be helpful in understanding what went wrong. This should be a minimal example demonstrating just the problem you're seeing, not your whole project; see http://sscce.org for a longer explanation. -glyph -------------- next part -------------- An HTML attachment was scrubbed...
URL: From donald at stufft.io Thu Dec 24 08:48:03 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 24 Dec 2015 08:48:03 -0500 Subject: [Distutils] PEP 470 - Repository Side Complete Message-ID: <2FFDBD85-5D3C-4A2D-AA6A-4E3808D6A100@stufft.io> Just as an FYI, as the specified time has passed since the warnings specified in PEP 470, I went ahead and finished the implementation of PEP 470 on PyPI and Test PyPI. I hope to have a new version of pip out that has its side of that as well soon. Hope everyone has a Merry Christmas! ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From erik.m.bray at gmail.com Thu Dec 24 14:13:59 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Thu, 24 Dec 2015 14:13:59 -0500 Subject: [Distutils] Reinstall with pip In-Reply-To: References: Message-ID: On Wed, Dec 23, 2015 at 5:23 PM, Carlos Barera wrote: > Hi, > > I have an automation task that installs a python package from within the > project dir using: pip install . > > What's the right way to reinstall the same package (same version)? > --upgrade + --force-reinstall ? > Is --no-deps recommended ? > How about uninstalling and reinstalling ? I've generally had luck with pip install --upgrade --force-reinstall --no-deps This should automatically take care of uninstalling before reinstalling too, so you shouldn't wind up with a broken package or outdated files hanging around. Erik From carlos.barera at gmail.com Sun Dec 27 08:24:28 2015 From: carlos.barera at gmail.com (Carlos Barera) Date: Sun, 27 Dec 2015 15:24:28 +0200 Subject: [Distutils] Reinstall with pip In-Reply-To: References: Message-ID: Thanks! I'll give it a try.
On Thu, Dec 24, 2015 at 9:13 PM, Erik Bray wrote: > On Wed, Dec 23, 2015 at 5:23 PM, Carlos Barera > wrote: > > Hi, > > > > I have an automation task that installs a python package from within the > > project dir using: pip install . > > > > What's the right way to reinstall the same package (same version)? > > --upgrade + --force-reinstall ? > > Is --no-deps recommended ? > > How about uninstalling and reinstalling ? > > I've generally had luck with > > pip install --upgrade --force-reinstall --no-deps > > This should automatically take care of uninstalling before > reinstalling too, so you shouldn't wind up with a broken package or > outdated files hanging around. > > Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: