From dalke@bioreason.com Sun Mar 7 03:00:20 1999
From: dalke@bioreason.com (Andrew Dalke)
Date: Sat, 06 Mar 1999 19:00:20 -0800
Subject: [Distutils] working on a build system
Message-ID: <36E1EBC4.8D5B6A18@bioreason.com>

My current task here at Bioreason is to set up a build/distrib system for our projects. We're developing pure Python modules, Python extensions in C/C++, and stand-alone C++ executables. I'm trying to get a basic system that lets us work with these in a standard framework and supports automatic regression tests with the results emailed or sent to a status web page. Also, in order to keep the web pages up to date with the source code, I need to be able to construct some of the pages automatically from the source and README.

This is not quite what the distutils-sig does, but I figured there's a reasonably close fit, so I can both contribute and get some ideas. The overview of our system (still in the design phase) is something like this:

setup -- sets up the original directory structure for a project. This is an interactive/console Python script which asks for the following information (with defaults as needed):

  project name -- used for the directory and module name
  product name -- can be different since this is the marketing name
  alternate product name -- I've found that having a variant of the
        product name is useful, eg, the abbrev. or acronym.
  version number -- in the form \d+(\.\d+)*
  status -- a string of letters, like "alpha", "p", "RC"
  cvs root -- we use CVS for our version control, but using "none"
        specifies not to put this into CVS
  contact name -- who's in charge of this thing?
  contact email -- how to contact that person (and where regression
        test results will be sent)
  contact url -- where to find more information about this project
  language -- python, python extension, c++ (eventually more?)

(this seems to be the core set of information I need, but the framework will be set up so more can be added as needed.)
After this information is gathered, it (will) create a subdirectory with the given project name and add the following files:

  info.py -- contains the configuration information
  configure -- used to generate the Makefile and maybe other files;
        like a "make.dat" file which is include'd by all Makefiles.
  buildno -- a number starting at 1 which is incremented for every new
        build. In this case, a build is not the C level build like
        Python's C code, but the number passed to QA/testing.
  test/ -- a subdirectory for the regression tests
  README -- a pod (?) file containing the template for the standard README.

Let me explain that last one some more. It seems that Perl's "pod" format is the easiest to learn and use. I want to put one level on top of that. The hardest thing in my ASCII documentation is to keep version numbers, URLs, etc. in sync with the main code base, since I have to change those by hand. Instead, I propose that during the process of making a source distribution or doing an install, the pod files are read into a Python string and %'ed with the info.py data. Thus, I want a file which looks like:

######
=head1 NAME

%(PRODUCT_NAME)s

=head1 SYNOPSIS

This software does something.

=head1 INFORMATION

For more information about %(ALT_PRODUCT_NAME)s see:

%(CONTACT_URL)s
######

and which will be filled out with the appropriate information as needed for an install. Is there a Python way to read/parse pod files? If not, using perl for this is fine with us.

BTW, to better support names, I've also introduced a few other variables from those given in info.py, like:

  FULL_NAME = PROJECT_NAME + "-" + VERSION
  if STATUS:
      FULL_NAME = FULL_NAME + "." + STATUS + BUILDNO

so we can have names like:

  daylight-1.0.alpha9
  daylight-1.0.beta3
  daylight-1.0.rc1    (for us, "rc" == "release candidate")
  daylight-1.0        -- build number not included in final release

Okay, so once those files are made, if CVS is being used, the whole subdirectory is added to CVS, the directory renamed, and the project pulled out from CVS. Then cd into the directory and run "configure". This produces a Makefile. The Makefile is not put into CVS because supposedly it can always be made by running configure; so edit "configure" instead.

The system is now ready for development. I'll assume it is using straight Python scripts with no submodules. In that case, the Makefile supports the following targets:

  buildversion: increment the "buildno" file by one
  buildtag: tag everything in CVS with the FULL_NAME
  build: buildversion buildtag
          $(MAKE) src.dist
  tests:
          cd test; ./testall
  clean: probably remove the .pyc and .pyo files
  veryclean: clean
          probably remove emacs *~ files
  install: do the "standard" install

and probably a few more. Could someone tell me some other standard target names; eg, those expected from a Python or GNU project?

I'm playing around with Makefile suffix rules. What I want to do is make the ".dist" targets get forwarded to Python. Under GNU make I can do something like:

  %.dist:
          $(PYTHON) -c "import build; build.distrib('$*')"

so "src.dist" becomes:

  /usr/local/bin/python -c "import build; build.distrib('src')"

but I don't know how to do this sort of trick under non-GNU make. Any advice?

I mentioned the phrase "standard" install. That's a tricky one. What I envision is that the build module reads the "info.py" file to determine the project language settings then imports the module which handles installs for it. This will probably find all the files with the extension ".py" and pass those to the routine which does the actual installs (eg, copy the files and generate .pyc and .pyo files).
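The %-substitution and the FULL_NAME convention described above take only a few lines of Python. This is a sketch; the values in INFO are made-up placeholders standing in for the data kept in info.py, and str() is applied because the build number is stored as a number:

```python
# Sketch of the proposed README-template substitution. INFO stands in
# for the configuration values that info.py would provide.
INFO = {
    "PRODUCT_NAME": "Daylight Tools",
    "ALT_PRODUCT_NAME": "daylight",
    "CONTACT_URL": "http://www.example.com/daylight/",
}

template = """=head1 NAME

%(PRODUCT_NAME)s

=head1 INFORMATION

For more information about %(ALT_PRODUCT_NAME)s see:

%(CONTACT_URL)s
"""

# Fill in the pod template with the project configuration.
readme = template % INFO

# The FULL_NAME convention, matching names like "daylight-1.0.alpha9".
PROJECT_NAME, VERSION, STATUS, BUILDNO = "daylight", "1.0", "alpha", 9
FULL_NAME = PROJECT_NAME + "-" + VERSION
if STATUS:
    FULL_NAME = FULL_NAME + "." + STATUS + str(BUILDNO)
print(FULL_NAME)  # prints "daylight-1.0.alpha9"
```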
However, this must also be able to import project level extensions, for example, to also install some needed data files into the install directory. I'm not sure about the right way to go about doing this. Probably the "build.install.python" will try to import some code from the package, and this will return a list of file names mapped to install methods, like:

  ('__init__.py', build.install.python_code),  # normal python installer
  ('file.dat', build.install.data_file),       # just copies the file
  ('special.zz', localbuild.install_zz),       # something package specific

then iterate over the list and apply the function on the file. (And yes, these should likely be class instances and not a 2-tuple.) Again, haven't figured this out.

Then there's the question of how to deal with submodules. Most likely the configure script will check each subdirectory for an __init__.py file and make the Makefile accordingly. Of course, it will have to ignore certain "well known" ones, like whichever directory contains the project specific build information. The new Makefile will change a few things, like make some targets recurse, as "clean:"

  clean: probably remove the .pyc and .pyo files
          cd submodule1; $(MAKE) clean
          cd submodule2; $(MAKE) clean

and add the appropriate Makefiles to those subdirectories. My, this is getting more complicated than I thought it would.

Okay, so the final step is the "src.dist" (and "bin.dist" and "rpm-src.dist" and ... targets). I figure that the raw source can always be made available from a "cvs export" followed by a manual tar/gz, so that doesn't need to be automated. What does need to be done is the ability to make a source distribution "for others." For example, at one place I worked we stripped out all the RCS log comments when we did a source build. Or perhaps some of the modules cannot be distributed (eg, they are proprietary).
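The iterate-and-apply idea above is simple to prototype. In this sketch the installer functions and the install() driver are hypothetical stand-ins for the proposed build.install machinery, not anything that exists today:

```python
import os
import shutil
import tempfile

# Hypothetical installer callables standing in for build.install.python_code
# and build.install.data_file; each takes a source path and a destination
# directory.
def python_code(src, destdir):
    shutil.copy(src, destdir)   # real version would also byte-compile

def data_file(src, destdir):
    shutil.copy(src, destdir)   # just copies the file

# The package-supplied manifest: (filename, installer) pairs.
manifest = [
    ("__init__.py", python_code),
    ("file.dat", data_file),
]

def install(manifest, srcdir, destdir):
    # Iterate over the list and apply each installer to its file.
    for filename, installer in manifest:
        installer(os.path.join(srcdir, filename), destdir)

# Demo: install from a scratch source tree into a scratch destination.
srcdir = tempfile.mkdtemp()
destdir = tempfile.mkdtemp()
for name, _ in manifest:
    open(os.path.join(srcdir, name), "w").close()
install(manifest, srcdir, destdir)
installed = sorted(os.listdir(destdir))
print(installed)
```

Making the entries class instances instead of tuples, as suggested, would let a package attach extra state (permissions, destination overrides) to each file.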
So a basic source distribution must be able to take the existing files, apply any transformations as needed (eg, convert from README in pod form to README in straight text) and tar/gzip the result. We'll be saving the resulting tarball for archival purposes, and our installs will likely be done from this distribution, which means it needs its own Makefile. In all likelihood, there will be no difference between this Makefile and the normal one. If there is, I guess it would be generated from the configure script although with some special command line option.

Then there's the question of how to handle documentation (eg, some of my documentation is in LaTeX). For now, I'll just put stuff under "doc/" and let it lie. Though the configure script should be able to build a top-level Makefile which includes targets for building the documentation and converting it into an appropriate form, such as HTML pages for automated updates to our internal web servers. Eh, probably something which forwards certain targets to "doc/", like:

  docs:
          cd doc; $(MAKE) doc

or even do the make suffix forwarding trick, so I can have targets like:

  user.doc
  prog.doc
  install.doc

Ugg, to get even more customizable I suppose you would need to tell LaTeX that certain sections should/should not be used when making a distribution, in order to reflect the code. I suppose the configure script could be made to generate a tex file for inclusion, but again, I'm not going to worry about that for now. Most likely the configure script will have the ability for someone to add:

  make = make + build.makefile.latex_doc()

(Thinking about it some, it would be best if "make" were a list of terms, like:

  make = (
      MakeData("tests", "", ("cd test; ./testall",)),
      MakeData(target, dependencies, (list, of, actions)),
  )

then the conversion to Makefile routine could double check that there are no duplicate targets, and the package author can fiddle around with the list before emitting the file.)

Of course, all of this is unix centric.
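The MakeData idea is also easy to prototype. This is a hedged sketch: the MakeData class and emit_makefile() function are invented here for illustration, including the duplicate-target check mentioned above:

```python
class MakeData:
    """One Makefile rule: a target, a dependency string, and action lines."""
    def __init__(self, target, dependencies, actions):
        self.target = target
        self.dependencies = dependencies
        self.actions = actions

def emit_makefile(rules):
    """Render the rule list to Makefile text, rejecting duplicate targets."""
    seen = {}
    lines = []
    for rule in rules:
        if rule.target in seen:
            raise ValueError("duplicate target: " + rule.target)
        seen[rule.target] = rule
        lines.append("%s: %s" % (rule.target, rule.dependencies))
        for action in rule.actions:
            lines.append("\t" + action)   # make requires a leading tab
        lines.append("")
    return "\n".join(lines)

make = [
    MakeData("tests", "", ("cd test; ./testall",)),
    MakeData("clean", "", ("rm -f *.pyc *.pyo",)),
]
print(emit_makefile(make))
```

Because the rules are plain Python objects, a package author can append, remove, or rewrite entries before the Makefile is emitted, which is the whole point of keeping "make" as a list.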
I know nothing about how to build these types of systems for MS Windows platforms. But then, we don't develop on those platforms, though we will likely distribute python applications for them. That's where the ability to have special ".dist" targets comes in handy.

I've considered less what's needed for a shared library Python extension or for pure C++ code. I envision similar mechanisms, but there will need to be support for things like generating dependencies for make, which isn't needed for straight Python scripts. Of course, this also isn't needed for the distutils-sig so I'll not go into them here.

I also don't know much about GNU's configure system, which will be more useful for this sort of environment. Thus, any solution I give for that problem will likely only be useful for our environment.

As I said, this is still in the planning stages for us, so I would like input on these ideas. Of course, I plan to make this framework available, and think part of it -- at least some of the ideas -- will be useful for distutil.

Andrew
dalke@bioreason.com

From "Fred L. Drake, Jr."
References: <36E1EBC4.8D5B6A18@bioreason.com>
Message-ID: <14051.59579.675513.829027@weyr.cnri.reston.va.us>

Andrew Dalke writes:
 > configure -- used to generate the Makefile and maybe other files;
 > like a "make.dat" file which is include'd by all Makefiles.

I'd avoid using this name if it isn't an autoconf-generated configure file. If it is, what you want to save is the configure.in file.

 > Is there a Python way to read/parse pod files? If not, using perl for
 > this is fine with us.

I suspect it would be trivial, but haven't written any POD documentation myself, so I'm probably not the one to do it. ;-)

 > %.dist:
 >         $(PYTHON) -c "import build; build.distrib('$*')"
...
 > but I don't know how to do this sort of trick under non-GNU make. Any
 > advice?

I don't either. Most makes can't do nearly as much as GNU make; the most portable solution is "don't do that".
If there are only a few targets that need to be phrased like this (meaning: if the set doesn't change often), just write them out.

 > As I said, this is still in the planning stages for us, so I would
 > like input on these ideas. Of course, I plan to make this framework
 > available, and think part of it -- at least some of the ideas -- will
 > be useful for distutil.

This sounds like an impressive system. Discussions and collaboration are definitely in order.

-Fred

--
Fred L. Drake, Jr.
Corporation for National Research Initiatives
1895 Preston White Dr.
Reston, VA 20191

From dalke@bioreason.com Mon Mar 8 20:25:59 1999
From: dalke@bioreason.com (Andrew Dalke)
Date: Mon, 08 Mar 1999 12:25:59 -0800
Subject: [Distutils] working on a build system
References: <36E1EBC4.8D5B6A18@bioreason.com> <14051.59579.675513.829027@weyr.cnri.reston.va.us>
Message-ID: <36E43257.D3F99D40@bioreason.com>

 > > configure -- used to generate the Makefile and maybe other files;
 > > like a "make.dat" file which is include'd by all Makefiles.
 >
 > I'd avoid using this name if it isn't an autoconf-generated
 > configure file.

I disagree. Unix people are used to

  ./configure
  make
  make tests
  make install

(sometimes leaving out the "make tests" :) All that file is, is an entry into the configuration system, regardless of being generated from autoconf or even hand written. I don't expect that anything I can generate will be useful by everyone for everything, and someday an autoconf type configure script will be needed (esp. for C/C++ code). When that happens, I expect that the new configure should be a drop-in replacement for the existing one, and end users should not notice the change.

 > I suspect it would be trivial, but haven't written any POD
 > documentation myself, so I'm probably not the one to do it. ;-)

I've not used it either, but it seems to be the best solution available. I'll probably just use pod2text since we know that tool exists on our systems.
 > Most makes can't do nearly as much as GNU make; the most portable
 > solution is "don't do that".

True enough. On one product I worked on we just shipped the gmake binary (and source) for the different platforms, since we couldn't get the Makefiles working everywhere. That's what taught me to start using $(MAKE) for recursive Makefiles.

Still, according to the make documentation, I should be able to have a single suffix rule that works the way I want it to, as in:

  SUFFIXES: .dist
  .dist:
          $(ECHO) Do something

but that doesn't work. Luckily, I again get the luxury of designing this for our in-house use, where I can mandate "we will use GNU make for our Makefiles".

Andrew
dalke@bioreason.com

From sanner@scripps.edu Tue Mar 9 02:56:02 1999
From: sanner@scripps.edu (Michel Sanner)
Date: Mon, 8 Mar 1999 18:56:02 -0800
Subject: [Distutils] Python environment variables
In-Reply-To: Andrew Dalke "Re: [Distutils] working on a build system" (Mar 8, 12:25pm)
References: <36E1EBC4.8D5B6A18@bioreason.com> <14051.59579.675513.829027@weyr.cnri.reston.va.us> <36E43257.D3F99D40@bioreason.com>
Message-ID: <9903081856.ZM13047@cain.scripps.edu>

Hi,

Has the following idea ever been discussed? Is it possible/desirable to have the Python environment variables with version number?

I have run into problems with that because I have version 1.5 embedded in an application and I use PYTHONHOME to tell this interpreter where to find its common .py files, and I run 1.5.2b1 as my current python version. The PYTHONHOME environment variable set for the embedded interpreter created quite a confusion for 1.5.2b1.
-Michel

From gstein@lyra.org Tue Mar 9 02:58:39 1999
From: gstein@lyra.org (Greg Stein)
Date: Mon, 08 Mar 1999 18:58:39 -0800
Subject: [Distutils] Python environment variables
References: <36E1EBC4.8D5B6A18@bioreason.com> <14051.59579.675513.829027@weyr.cnri.reston.va.us> <36E43257.D3F99D40@bioreason.com> <9903081856.ZM13047@cain.scripps.edu>
Message-ID: <36E48E5F.351A22C@lyra.org>

Michel Sanner wrote:
> Hi,
>
> Has the following idea ever been discussed?
>
> Is it possible/desirable to have the Python environment variables with
> version number?
>
> I have run into problems with that because I have version 1.5 embedded
> in an application and I use PYTHONHOME to tell this interpreter where
> to find its common .py files and I run 1.5.2b1 as my current python
> version. The PYTHONHOME environment variable set for the embedded
> interpreter created quite a confusion for 1.5.2b1.

The best thing to do is to avoid using environment variables, *especially* for embedded interpreters. On Windows, I would always modify the "Version" (which is now in a resource) to effectively create a private distribution. In the registry, I'd set up PythonCore/MyVersion/... keys. At that point, it just always used the keys that I set. Since you're building an embedded version, then you may as well set up that version to have the proper paths within it, rather than relying on external factors which can change...

Cheers,
-g

--
Greg Stein, http://www.lyra.org/

From gward@cnri.reston.va.us Tue Mar 9 15:33:15 1999
From: gward@cnri.reston.va.us (Greg Ward)
Date: Tue, 9 Mar 1999 10:33:15 -0500
Subject: [Distutils] Tcl extension architecture
Message-ID: <19990309103314.C394@cnri.reston.va.us>

Thanks to Jeremy Hylton for sending this my way: looks like the Tcl crowd have also realized that a standard way to build and distribute extensions is a Good Thing.
See: http://www.scriptics.com/tea-summit/index.html One more fire under the collective bottom of the Distutils SIG (which is NOT dead, it just smells funny). Stay tuned for impending developments. Or subscribe to distutils-checkins; with luck there will be a flurry of activity there in the next week or so: http://www.python.org/mailman/listinfo/distutils-checkins Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Tue Mar 9 19:44:34 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Tue, 9 Mar 1999 14:44:34 -0500 Subject: [Distutils] working on a build system In-Reply-To: <36E1EBC4.8D5B6A18@bioreason.com>; from Andrew Dalke on Sat, Mar 06, 1999 at 07:00:20PM -0800 References: <36E1EBC4.8D5B6A18@bioreason.com> Message-ID: <19990309144433.A1464@cnri.reston.va.us> Quoth Andrew Dalke, on 06 March 1999: > My current task here at Bioreason is to set up a build/distrib > system for our projects. We're developing pure Python modules, > Python extensions in C/C++, and stand-alone C++ executables. > I'm trying to get a basic system that lets use work with these > in a standard framework and supports automatic regression tests > with the results emailed or sent to a status web page. Hmmm, from the first paragraph, it sounds like there's the potential for a lot of overlap with Distutils -- or, to look at it more positively, a lot of potential for you to use Distutils! The rest of your post diverges a bit from this; certainly, stock, out-of-the-box Distutils won't be able to handle all the neat stuff you want to do. However, the architecture is quite flexible: module developers will easily be able to add new commands to the system just by defining a "command class" that follows a few easy rules. I posted a design proposal to this list back in January; you might want to give that a look. 
I finally got around to HTMLizing it and putting it on the Distutils web page a week or so ago, but hadn't announced it yet because I wanted to tweak it a bit more. So much for that -- consider this the announcement. You'll find the design proposal at http://www.python.org/sigs/distutils-sig/design.html By no means is it a comprehensive design; since I've finally started implementing the Distutils (shh! it's still a secret!), I've found all sorts of holes. However, it gives a good idea of the level of flexibility I'm aiming for. Note that one big difference between your idea and Distutils is that we're avoiding dependence on Makefiles: for the most part, they're not needed, and they're unportable as hell (as you're finding out). Make should be used to carry out timestamp-based file dependency analysis, which means it's just great for building C programs. When you use it mainly to bundle up little sequences of shell commands, you're misusing it. (For the ultimate example of Makefile abuse, see any Makefile created by Perl's MakeMaker module. The most astonishing thing is that it all *works*...) > Let me explain that last one some more. It seems that Perl's "pod" > format is the easiest to learn and use. Yes. XML is waaay cool, but pulls in a hell of a lot of baggage. Until that baggage is standard everywhere (hello, Python 1.6 and Perl 5.006! ;-), low-overhead light-weight solutions like pod are probably preferable. Heck, even when all the world is Unicode and XML for everything, low-overhead and light-weight will still be nice (if only to remember how things were, back in the good ol' days). > Is there a Python way to read/parse pod files? If not, using perl for > this is fine with us. As Fred said, it *should* be trivial. It's a bit trickier if you want a really flexible framework for processing pod; see Brad Appleton's Pod::Parser module (available on CPAN) for that. 
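For the truly simple cases, a pod reader in Python really is close to trivial. This is an illustration only, not a real pod parser: it assumes a document uses nothing but =head1 headings and plain paragraphs, and ignores every other directive:

```python
def pod2text(pod):
    """Toy pod-to-text converter: handles only =head1 headings and
    plain paragraphs; skips every other pod directive (=cut, =over, ...)."""
    out = []
    for para in pod.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if para.startswith("=head1"):
            title = para[len("=head1"):].strip()
            out.append(title)
            out.append("=" * len(title))   # underline the heading
        elif para.startswith("="):
            pass   # ignore directives this toy doesn't know about
        else:
            out.append(para)
        out.append("")
    return "\n".join(out)

print(pod2text("=head1 NAME\n\nMyTool\n\n=head1 SYNOPSIS\n\n"
               "This software does something.\n"))
```

Anything beyond this (formatting codes like C<...>, =over/=item lists) is where a shared parser such as Pod::Parser starts to pay off.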
The current generation of pod tools that ship with Perl are getting a bit long-in-the-tooth; they all have their own parsers, and unsurprisingly have diverged somewhat over the years. Hopefully Pod::Parser will start to fix this. My recommendation: spend a few hours learning to use Pod::Parser, and then write your own custom pod tools (yes, in Perl ;-). Slightly less trivial than writing your own custom, limited parser (in either language), or using pod2text, but should be more robust and scalable. Anyways, with any luck the Distutils CVS archive will be a lot busier within a week or so -- so there might actually be something that you (and everyone else on this SIG!) can muck around with. Keeping my fingers crossed (except while writing code) -- Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Wed Mar 10 16:28:01 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Wed, 10 Mar 1999 11:28:01 -0500 Subject: [Distutils] Backwards compatability: opinions? Message-ID: <19990310112800.A2024@cnri.reston.va.us> Hi folks -- I wonder if anyone has strong opinions as to which version(s) of Python the Distutils should support/require. I've found a number of silly little bugs in the 1.5.1 library (I'm coding on my home PC, which isn't running the latest beta... yet) that have been fixed in 1.5.2. The temptation is to assume that everyone is running 1.5.2, and if they're not... too bad. The alternative is to allow (at least) 1.5.1, and provide reimplementations of the buggy functions so that Distutils can work on pre-1.5.2 Pythons. I'm not super keen about either option; anyone have strong opinions one way or the other, or a "middle road" solution? 
Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From Fred L. Drake, Jr." References: <19990310112800.A2024@cnri.reston.va.us> Message-ID: <14054.40783.38261.263522@weyr.cnri.reston.va.us> Greg, My inclination is that 1.5.1 should be supported. Heck, it's still the current stable release! Even though 1.5.2 will probably be stable before distutils, 1.5.1 just isn't old enough to ignore. I think any compatibility hacks that are needed can go in a module "distutils.compat" (or some such) that gets imported by the "main" distutils module. The compat module can deal with any version checking and update installation it needs to; this makes the updates as transparent as possible to the rest of the code, and allows really old things to be taken out later. -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From petrilli@amber.org Wed Mar 10 16:44:11 1999 From: petrilli@amber.org (Christopher Petrilli) Date: Wed, 10 Mar 1999 11:44:11 -0500 Subject: [Distutils] Backwards compatability: opinions? In-Reply-To: <19990310112800.A2024@cnri.reston.va.us>; from Greg Ward on Wed, Mar 10, 1999 at 11:28:01AM -0500 References: <19990310112800.A2024@cnri.reston.va.us> Message-ID: <19990310114411.D28029@amber.org> On Wed, Mar 10, 1999 at 11:28:01AM -0500, Greg Ward wrote: > Hi folks -- > > I wonder if anyone has strong opinions as to which version(s) of Python > the Distutils should support/require. I've found a number of silly > little bugs in the 1.5.1 library (I'm coding on my home PC, which isn't > running the latest beta... yet) that have been fixed in 1.5.2. The > temptation is to assume that everyone is running 1.5.2, and if they're > not... too bad. 
 >
 > The alternative is to allow (at least) 1.5.1, and provide
 > reimplementations of the buggy functions so that Distutils can work on
 > pre-1.5.2 Pythons.

Well, how about grabbing the fixed versions out of 1.5.2, shoving them into a sub-package (Distutils.Fixed), and then just loading the correct ones on startup? I don't think there's anything in the 1.5.2 library that can't be used under 1.5.1.

Chris
--
| Christopher Petrilli       ``Television is bubble-gum for
| petrilli@amber.org           the mind.'' -Frank Lloyd Wright

From "Fred L. Drake, Jr."
References: <19990310112800.A2024@cnri.reston.va.us> <19990310114411.D28029@amber.org>
Message-ID: <14054.41512.252708.198838@weyr.cnri.reston.va.us>

Christopher Petrilli writes:
 > Well, how about grabbing the fixed versions out of 1.5.2, shoving them
 > into a sub-package (Distutils.Fixed), and then just loading the correct
 > ones on startup? I don't think there's anything in the 1.5.2 library

Christopher,
Good idea, but not quite there yet. It can't be a sub-package; they have to actually be on the standard sys.path. In Grail, some versions have included a directory "pythonlib" that included updates to standard modules. pythonlib was then added to sys.path before everything else (along with the other Grail-specific directories). Perhaps the best approach is to add a directory pythonlib within distutils, and distutils/__init__ can add it to sys.path:

  import os, sys

  pythonlib = os.path.join(os.path.dirname(__file__), "pythonlib")
  if os.path.isdir(pythonlib):
      sys.path.insert(0, pythonlib)

This ensures that other standard modules that use the modules we're providing updates for get the fixed versions.

-Fred

--
Fred L. Drake, Jr.
Corporation for National Research Initiatives

From hinsen@cnrs-orleans.fr Wed Mar 10 17:44:17 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Wed, 10 Mar 1999 18:44:17 +0100
Subject: [Distutils] Backwards compatability: opinions?
In-Reply-To: <19990310112800.A2024@cnri.reston.va.us> (message from Greg Ward on Wed, 10 Mar 1999 11:28:01 -0500) References: <19990310112800.A2024@cnri.reston.va.us> Message-ID: <199903101744.SAA14180@dirac.cnrs-orleans.fr> > The alternative is to allow (at least) 1.5.1, and provide > reimplementations of the buggy functions so that Distutils can work on > pre-1.5.2 Pythons. Maybe I am overly conservative, but I have hesitated for quite a while before making my code require 1.5! Keep in mind that our target group includes users who don't care about Python. They may simply want to use an application written in Python. Most people I know (computational scientists) use Unix systems that were set up two to four years ago and then never updated. I am sure there are still many systems with Python 1.4 out there. So my vote is for supporting at least 1.5, and if possible even 1.4. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen@cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From Fred L. Drake, Jr." References: <19990310112800.A2024@cnri.reston.va.us> <199903101744.SAA14180@dirac.cnrs-orleans.fr> Message-ID: <14054.45369.773554.269290@weyr.cnri.reston.va.us> Konrad Hinsen writes: > So my vote is for supporting at least 1.5, and if possible even 1.4. I think the approaches we've discussed allow supporting versions quite a ways back, as long as the bugs we need to work around are in Python modules, and not in C modules. The biggest issues for older versions will probably show up in testing -- this becomes even more important. -Fred -- Fred L. Drake, Jr. 
Corporation for National Research Initiatives From gward@cnri.reston.va.us Wed Mar 10 19:24:24 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Wed, 10 Mar 1999 14:24:24 -0500 Subject: [Distutils] Backwards compatability: opinions? In-Reply-To: <199903101744.SAA14180@dirac.cnrs-orleans.fr>; from Konrad Hinsen on Wed, Mar 10, 1999 at 06:44:17PM +0100 References: <19990310112800.A2024@cnri.reston.va.us> <199903101744.SAA14180@dirac.cnrs-orleans.fr> Message-ID: <19990310142424.B2024@cnri.reston.va.us> Quoth Konrad Hinsen, on 10 March 1999: > Keep in mind that our target group includes users who don't care about > Python. They may simply want to use an application written in Python. > Most people I know (computational scientists) use Unix systems that > were set up two to four years ago and then never updated. I am sure > there are still many systems with Python 1.4 out there. Absolutely. For example, my old job was at a scientific lab with a lot of interest in scripting. We used MATLAB, relied heavily on Perl, and dabbled (once, long ago) in Python. If I login there today: % python Python 1.0.3 (Aug 14 1994) Copyright 1991-1994 Stichting Mathematisch Centrum, Amsterdam >>> Blast from the past, anyone? ;-) > So my vote is for supporting at least 1.5, and if possible even 1.4. Seems to me that the three most important language/library features in 1.5 were: * packages * re and r'' * class-based exceptions Of course, those are the three that I use a lot, so I may be biased. And Distutils relies on all of them. I could probably hack it so that re and class-based exceptions are only used under 1.5, but I'm not sure if I could sweep the heavily "packagized" nature of the Distutils under the rug. Would we have to resurrect use of the 'ni' module in the 1.4 case? Would that be painful? I think for the initial versions, I'll strive for compatibility with 1.5, 1.5.1, and 1.5.2. 
Switching the exception model and how the Distutils are organized on a high-level (1.5-style packages vs. 'ni') should be doable without extensive, low-level changes to the code. That leaves regular expressions; I'll try to minimize their use with a view to making them completely optional. (Eg. if version is 1.4, don't even do this regex-based sanity check. If a regex is required to parse some string, then I'll have to figure out how to do it otherwise. So far that's what I've done.)

Any other language features I should be wary of avoiding? I am willing to consider supporting 1.4 if it won't be too much trouble, but not in the initial phases -- we should probably worry about that when it comes time to make a public release in a few months.

Greg
--
Greg Ward - software developer                    gward@cnri.reston.va.us
Corporation for National Research Initiatives    1895 Preston White Drive
voice: +1-703-620-8990 x287              Reston, Virginia, USA 20191-5434
fax: +1-703-620-0913

From "Fred L. Drake, Jr."
References: <19990310112800.A2024@cnri.reston.va.us> <199903101744.SAA14180@dirac.cnrs-orleans.fr> <19990310142424.B2024@cnri.reston.va.us>
Message-ID: <14054.60660.318149.906481@weyr.cnri.reston.va.us>

Greg Ward writes:
 > Seems to me that the three most important language/library features in
 > 1.5 were:
 > * packages

Try "import ni" in the script for older Pythons. This worked fine for Grail until other aspects of the code required 1.5. Make sure the __init__.py files contain nothing but comments and docstrings.

 > * re and r''

Use regex and more backslashes. These are convenient, but not required for functionality.

 > * class-based exceptions

Not required for functionality.

 > re and class-based exceptions are only used under 1.5, but I'm not sure
 > if I could sweep the heavily "packagized" nature of the Distutils under
 > the rug. Would we have to resurrect use of the 'ni' module in the 1.4
 > case? Would that be painful?
import sys
if sys.version[:3] < "1.5": import ni

> Any other language features I should be wary of? I am willing > to consider supporting 1.4 if it won't be too much trouble, but not in > the initial phases -- we should probably worry about that when it comes > time to make a public release in a few months. Don't use assert; write a function that does the same thing. This is trivial, but you're welcome to steal the Assert module from Grail if it's too much typing. ;-) -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From gward@cnri.reston.va.us Mon Mar 22 15:10:22 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Mon, 22 Mar 1999 10:10:22 -0500 Subject: [Distutils] Some code to play with Message-ID: <19990322101021.A489@cnri.reston.va.us> Hi all -- as promised, I have finally checked in some real Distutils code. There's just enough functionality for the Distutils to build and install itself. To try it out, you'll need to download the code from the anonymous CVS archive at cvs.python.org; see http://www.python.org/sigs/distutils-sig/cvs.html for instructions. If you just want to try it out, then from the top-level Distutils directory (the one that has setup.py), run:

    ./setup.py build install

This will copy all the .py files to a mockup installation tree in 'build', compile them (to .pyc -- don't run setup.py with "python -O", as I don't handle that yet and it will get confused), and then copy the whole mockup tree to the 'site-packages' directory under your Python library directory. Figuring out the installation directory relies on Fred Drake's 'sysconfig' module -- I don't recall what the status of sysconfig was on non-Unix systems; hope someone out there can try it and let us know! Come to think of it, I'd like to hear how it works on *any* system apart from my Red Hat Linux 5.2, stock Python 1.5.1, home PC.
;-) If you want to build in a different directory, use the '--basedir' option to the 'build' command; to install in a different place, you can use either the '--prefix' or '--install-lib' options to the 'install' command. Here are a couple of samples to demonstrate:

    ./setup.py build --basedir=/tmp/build install --prefix=/tmp/usr/local

or

    ./setup.py build --basedir=/tmp/build
    ./setup.py install --build-base=/tmp/build --prefix=/tmp/usr/local

Go crazy. No documentation yet -- please read the code! Start with distutils/core.py which, as its name implies, is the start of everything. I went nuts with docstrings in that module last night, so hopefully there's just enough there that you can figure things out in the absence of the "Distutils Implementation Notes" document that I'd like to write. Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From dalke@bioreason.com Wed Mar 24 04:09:12 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Tue, 23 Mar 1999 20:09:12 -0800 Subject: [Distutils] Re: chmod symbolic mode parser References: <36F49684.36386894@bioreason.com> Message-ID: <36F86568.E957DC5F@bioreason.com> If anyone is interested, I have implemented an "install" program compatible with the GNU version, excepting internationalization and a bug I think is in their code. I placed a copy at ftp://starship.python.net/pub/crew/dalke/install.py for your enjoyment. It uses the Python license, which is doable since I never looked at the GNU source code. Its reason for being is that I think it could be useful for the distutils-sig, and because we (my company) need something like it for distributing our own software on different platforms, not all of which will have "install", and because, well, because it was there.
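[The symbolic-mode parsing the subject line refers to can be sketched roughly as follows. This is a toy parser for specs like "u+x" or "go-w"; Andrew's actual install.py is far more complete, and all names here are invented for illustration:]

```python
# Toy chmod-style symbolic mode parser; handles only simple clauses
# like "u+rwx", "go-w", "a=r" (no setuid/sticky bits, no "X", no "u=g").
WHO = {"u": 0o700, "g": 0o070, "o": 0o007, "a": 0o777}
PERM = {"r": 0o444, "w": 0o222, "x": 0o111}

def apply_symbolic_mode(mode, spec):
    for clause in spec.split(","):
        i = 0
        while i < len(clause) and clause[i] in WHO:
            i += 1
        who = clause[:i] or "a"          # no "who" letters means "all"
        op, perms = clause[i], clause[i + 1:]
        mask = 0
        for w in who:
            for p in perms:
                mask |= PERM[p] & WHO[w]
        if op == "+":
            mode |= mask
        elif op == "-":
            mode &= ~mask
        elif op == "=":
            whomask = 0
            for w in who:
                whomask |= WHO[w]
            mode = (mode & ~whomask) | mask
    return mode
```

[For example, apply_symbolic_mode(0o644, "u+x") yields 0o744; the real program also has to handle option parsing, copying, and ownership changes.]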
I've tested it pretty well; not fully, but enough that I'm having problems coming up with new test cases. 'Course, now that I've finished that project I started looking at Greg Ward's initial distutils CVS distribution, and I see we have some overlap, though I like my "_mkdir" over his "mkpath", and I now understand why he copied and modified shutil.copy :) Andrew dalke@bioreason.com From gward@cnri.reston.va.us Wed Mar 24 21:17:57 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Wed, 24 Mar 1999 16:17:57 -0500 Subject: [Distutils] Re: chmod symbolic mode parser In-Reply-To: <36F86568.E957DC5F@bioreason.com>; from Andrew Dalke on Tue, Mar 23, 1999 at 08:09:12PM -0800 References: <36F49684.36386894@bioreason.com> <36F86568.E957DC5F@bioreason.com> Message-ID: <19990324161757.A3982@cnri.reston.va.us> Quoth Andrew Dalke, on 23 March 1999: > If anyone is interested, I have implemented an "install" program > compatible with the GNU version, excepting internationalization and > a bug I think is in their code. I placed a copy at > ftp://starship.python.net/pub/crew/dalke/install.py for your > enjoyment. It uses the Python license, which is doable since > I never looked at the GNU source code. Cool! I don't *think* it's directly applicable to Distutils; a quick look at the code reveals that my suspicions are true: the bulk of the work is in parsing the command-line options and symbolic mode. It's good to know this code is "out there" in Python, but I don't *think* it'll be needed for the Distutils. Andrew, since you've looked at my code you can probably see that my general approach is to put this sort of functionality into smallish functions in the distutils.util module. (Eventually this will probably have to be split into multiple modules, but for now it's manageable.) When something can be easily implemented in Python -- copying files or making directory trees -- then I say, "Do it in Python!".
Don't screw around with running external utilities whose behaviour (or even presence) is unlikely to be consistent across platforms, especially when you consider the Mac. That said, I consider the fact that you announced your utility to this list as licence to rip off bits of your code and put them into distutils.util. There's more than one way to reuse code... Aside: I bet your install.py is a damn sight faster than the install-sh distributed with autoconf, and thus with Python. Might be nice to use it instead of install-sh when installing Python. (Of course, it would be even nicer if Python could install itself with the functions in distutils.util, but that's a ways down the road yet... ;-) Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From dalke@bioreason.com Mon Mar 29 01:57:26 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Sun, 28 Mar 1999 17:57:26 -0800 Subject: [Distutils] generating pyc and pyo files Message-ID: <36FEDE06.9A77C2BF@bioreason.com> This is a multi-part message in MIME format. --------------E888D53BE2DBB90B8E16359A Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit When should the .pyc and .pyo files be generated during the install process, in the "build" directory or the "install" one? I ask because I've been looking into GNU autoconf and automake. They compile emacs lisp code (.el->.elc) in the build dir before the actual installation, and it might be nice to follow their lead on that, and it would ensure that the downloaded python files are compileable before they are installed. OTOH, I know the normal Python install does a compileall after the .py files have been transferred to the install directory. Also, looking at compileall, it doesn't give any sort of error status on exit if a file could not be compiled.
I would prefer the make process stop if that occurs, so compileall needs to exit with a non-zero value. Attached is a context diff patch to "compileall" from 1.5.1 which supports this ability. I can send the full file as well if someone wants it. Andrew dalke@bioreason.com --------------E888D53BE2DBB90B8E16359A Content-Type: text/plain; charset=us-ascii; name="diff-compileall.py" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="diff-compileall.py"

*** /usr/local/lib/python1.5/compileall.py	Mon Jul 20 12:33:01 1998
--- compileall.py	Sun Mar 28 17:50:51 1999
***************
*** 34,39 ****
--- 34,40 ----
          print "Can't list", dir
          names = []
      names.sort()
+     success = 1
      for name in names:
          fullname = os.path.join(dir, name)
          if ddir:
***************
*** 54,64 ****
--- 55,67 ----
              else:
                  exc_type_name = sys.exc_type.__name__
                  print 'Sorry:', exc_type_name + ':',
                  print sys.exc_value
+                 success = 0
          elif maxlevels > 0 and \
               name != os.curdir and name != os.pardir and \
               os.path.isdir(fullname) and \
               not os.path.islink(fullname):
              compile_dir(fullname, maxlevels - 1, dfile)
+     return success

  def compile_path(skip_curdir=1, maxlevels=0):
      """Byte-compile all module on sys.path.
***************
*** 69,79 ****
      maxlevels: max recursion level (default 0)
      """
      for dir in sys.path:
          if (not dir or dir == os.curdir) and skip_curdir:
              print 'Skipping current directory'
          else:
!             compile_dir(dir, maxlevels)

  def main():
      """Script main program."""
--- 72,84 ----
      maxlevels: max recursion level (default 0)
      """
+     success = 1
      for dir in sys.path:
          if (not dir or dir == os.curdir) and skip_curdir:
              print 'Skipping current directory'
          else:
!             success = success and compile_dir(dir, maxlevels)
!     return success

  def main():
      """Script main program."""
***************
*** 98,109 ****
          sys.exit(2)
      try:
          if args:
              for dir in args:
!                 compile_dir(dir, maxlevels, ddir)
          else:
!             compile_path()
      except KeyboardInterrupt:
          print "\n[interrupt]"

  if __name__ == '__main__':
!     main()
--- 103,118 ----
          sys.exit(2)
      try:
          if args:
+             success = 1
              for dir in args:
!                 success = success and compile_dir(dir, maxlevels, ddir)
          else:
!             success = compile_path()
      except KeyboardInterrupt:
          print "\n[interrupt]"
+     return success

  if __name__ == '__main__':
!     if not main():
!         sys.exit(1)

--------------E888D53BE2DBB90B8E16359A-- From gstein@lyra.org Mon Mar 29 02:01:37 1999 From: gstein@lyra.org (Greg Stein) Date: Sun, 28 Mar 1999 18:01:37 -0800 Subject: [Distutils] generating pyc and pyo files References: <36FEDE06.9A77C2BF@bioreason.com> Message-ID: <36FEDF01.3DF22FCB@lyra.org> Andrew Dalke wrote: > > When should the .pyc and .pyo files be generated during the > install process, in the "build" directory or the "install" one? > > I ask because I've been looking into GNU autoconf and automake. > They compile emacs lisp code (.el->.elc) in the build dir before > the actual installation, and it might be nice to follow their > lead on that, and it would ensure that the downloaded python files > are compileable before they are installed. > > OTOH, I know the normal Python install does a compileall after > the .py files have been transferred to the install directory. Compiling in the build area before installation means that you can install them with arbitrary privileges, owner, and group. That gets trickier using compileall.py. For example, let's say that "root" is doing an installation and the target should be owned by bin:bin. Can't do that with compileall.py, AFAIK. Cheers, -g -- Greg Stein, http://www.lyra.org/ From dalke@bioreason.com Mon Mar 29 05:02:51 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Sun, 28 Mar 1999 21:02:51 -0800 Subject: [Distutils] generating pyc and pyo files References: <36FEDE06.9A77C2BF@bioreason.com> <36FEDF01.3DF22FCB@lyra.org> Message-ID: <36FF097B.F7FA45FA@bioreason.com> Greg Ward says: > Compiling in the build area before installation means that you can > install them with arbitrary privileges, owner, and group.
That gets > trickier using compileall.py. > > For example, let's say that "root" is doing an installation and the > target should be owned by bin:bin. Can't do that with compileall.py, > AFAIK. Sure, you cannot do that with compileall, but you can in the Makefile. After all, this is pretty much identical to setting the right permissions for compiled emacs files. I'm not sure I see the problem. The possibility I see is a Makefile like (though there is no way to tell compileall to compile a given list of files, it's just an example):

MODULE = SpamSkit
PYTHON_FILES = __init__.py spam.py eggs.py vikings.py
PYTHON_CLEAN = $(PYTHON_FILES:.py=.pyc) $(PYTHON_FILES:.py=.pyo)
PYTHON_INSTALL = $(PYTHON_FILES) $(PYTHON_CLEAN)
PYTHON_SITE = /usr/local/lib/python1.5/site-packages
INSTALL = /usr/bin/install -c
INSTALL_DATA = ${INSTALL} -m 644

all:
	$(compileall) -f $(PYTHON_FILES)

install:
	$(INSTALL_DATA) $(PYTHON_INSTALL) $(PYTHON_SITE)/$(MODULE)

clean:
	rm -f $(PYTHON_CLEAN)

so the .pyc and .pyo files would be installed with the same permissions as the .py files. And if you wanted to be more specific, you could tweak install (or the install-hook for automake) or INSTALL_DATA as needed. Ummm, after rereading your message I realized I don't understand it as well as I thought I did. It looks like you're suggesting to do the byte compiles locally before the install (which I'm now leaning towards) because you can modify the permissions better that way. If so, I disagree with that. With a standard unix user account I cannot modify all the permission bits (like setuid) or the owner/group. I need to be root for that, and normally that only happens before the "make install". The problem there is that some (many?) places don't NFS export with root write privs to their clients. My home directory is not writeable by root except on the file server.
So if I am root and root tries to modify the permissions before the copy, it will fail, while if it copies first then modifies the permissions, it will work. So the options I see are:

1) copy .py files to the install directory
   run compileall on that directory (or the equivalent)
   change permissions on the files in the install directory as needed

2) run compileall in the build directory
   copy the .py{,c,o} files to the install directory
   change permissions on the files in the install directory as needed

and #2 is my lean-to, as it were. It seems that having a "compileall" which takes a list of files rather than directories will be useful. Otherwise for option (1) installing a single file (not module) to /usr/local/lib/python1.5/site-packages (or site-python) and then doing compileall on that directory may cause all modules to be (re)compiled. For option (2) you will run into problems with test/development .py files in the developer's build directory which aren't valid python files and hence will cause compileall to fail. This isn't a direct problem for distutils since we can assume that all .py files will be installed, but it is important to bear in mind. Oh! Plus, what's the policy for installing python files in some place like /usr/local/bin? Must all such scripts be compiled during the distutils install process? If not, then compileall on /usr/local/bin could cause problems. How about adding an option, like "-f", to compileall to support a list of files which should be compiled?
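[The proposed "-f takes a list of files" behaviour can be sketched with the standard py_compile module. The compile_files helper below is hypothetical, and it uses py_compile's doraise option as it exists in present-day Python, not in 1.5:]

```python
import os
import py_compile
import tempfile

def compile_files(files):
    """Byte-compile an explicit list of .py files.

    Sketch of the proposed "-f" behaviour: returns 1 on success and 0 if
    any file failed to compile, matching the patch's convention.
    """
    success = 1
    for name in files:
        try:
            py_compile.compile(name, doraise=True)
        except py_compile.PyCompileError as err:
            print("Sorry:", err.msg)
            success = 0
    return success

# Tiny demonstration: one valid module, one with a syntax error.
workdir = tempfile.mkdtemp()
good = os.path.join(workdir, "good.py")
bad = os.path.join(workdir, "bad.py")
with open(good, "w") as f:
    f.write("x = 1\n")
with open(bad, "w") as f:
    f.write("def (:\n")
ok = compile_files([good])
notok = compile_files([bad])
```

[Because the caller names the files explicitly, nothing else in the target directory gets (re)compiled, which addresses the site-packages and /usr/local/bin concerns above.]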
Andrew dalke@bioreason.com From gstein@lyra.org Mon Mar 29 06:03:45 1999 From: gstein@lyra.org (Greg Stein) Date: Sun, 28 Mar 1999 22:03:45 -0800 Subject: [Distutils] generating pyc and pyo files References: <36FEDE06.9A77C2BF@bioreason.com> <36FEDF01.3DF22FCB@lyra.org> <36FF097B.F7FA45FA@bioreason.com> Message-ID: <36FF17C1.3C299857@lyra.org> Andrew Dalke wrote: > > Greg Ward says: Actually, it was "Greg Stein" :-) > > Compiling in the build area before installation means that you can > > install them with arbitrary privileges, owner, and group. That gets > > trickier using compileall.py. > > > > For example, let's say that "root" is doing an installation and the > > target should be owned by bin:bin. Can't do that with compileall.py, > > AFAIK. > > Sure, you cannot do that with compileall, but you can in the > Makefile. After all, this is pretty much identical to setting > the right permissions for compiled emacs files. That was my point. Nothing complicated or devious. Simply that if you do the compile into the local "build" area, then you can use "install" to copy them to the install area with the right permissions (and if you're root, with the right owner/group). You asked which would be best. I suggested "build" area with the above point as a reason. Cheers, -g -- Greg Stein, http://www.lyra.org/ From Fred L. Drake, Jr." References: <36FEDE06.9A77C2BF@bioreason.com> Message-ID: <14079.50216.744470.510455@weyr.cnri.reston.va.us> Andrew Dalke writes: > When should the .pyc and .pyo files be generated during the > install process, in the "build" directory or the "install" one? ... > OTOH, I know the normal Python install does a compileall after > the .py files have been transferred to the install directory. I think the structure of the Python build process may be due to an older behavior in Python which I think has been fixed. Originally, the __file__ name in a module was initialized at compile time, not at import time.
I think it is now set in the .pyc/.pyo at compile time and re-set in the module at import time. If you load the .pyc/.pyo without going through the import machinery you should see the name of the file as it was accessed at compile time (which might be relative). In general, the compile-time __file__ value may be invalid in the importing process. I think the .pyc/.pyo files can be built in the work area and then installed using the normal file installation mechanisms. This provides a little more flexibility in the installation machinery as well. > Also, looking at buildall, it doesn't give any sort of error > status on exit if a file could not be compiled. I would prefer > the make process stop if that occurs, so compileall needs to exit > with a non-zero value. Attached is a context diff patch to > "compileall" from 1.5.1 which supports this ability. I can send I will integrate this patch; thanks! -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From dalke@bioreason.com Mon Mar 29 19:26:36 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Mon, 29 Mar 1999 11:26:36 -0800 Subject: [Distutils] generating pyc and pyo files References: <36FEDE06.9A77C2BF@bioreason.com> <14079.50216.744470.510455@weyr.cnri.reston.va.us> Message-ID: <36FFD3EC.FEF479C8@bioreason.com> Fred Drake pointed out: > I think the .pyc/.pyo files can be built in the work area and then > installed using the normal file installation mechanisms. This > provides a little more flexibility in the installation machinery as > well. Thanks for the pointer on __file__ changes. I'll verify that things work as part of my testing. Andrew From dalke@bioreason.com Mon Mar 29 22:16:49 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Mon, 29 Mar 1999 14:16:49 -0800 Subject: [Distutils] install location(s) Message-ID: <36FFFBD1.D7944116@bioreason.com> This is an easy couple of questions (I hope). 1) What is the correct location for "pure" python module installations? 
$(prefix)/lib/python$VERSION/site-packages
or
$(prefix)/lib/site-python

I believe most packages tend to install in site-packages, but site-python makes more sense since I usually write my modules to deal with differences in python versions. 2) Where should python shared libraries be placed if you want to have one install of python per site which includes different architectures? (For example, we may distribute our software on both SGI and Linux boxes.) As I understand it now, the python-specific .so files need to be on the PYTHONPATH, which currently contains no information about the specific platform. The only information available is from sys.platform (along with os.uname) and that doesn't appear sufficient to distinguish between different SGI binary interfaces. Specifically, SGIs have 3 interfaces: "old" 32, "new" 32 and 64. These are specified during compilation by the environment variable SGI_ABI or by the command-line options (-32/-o32, -n32, -64). In theory we could compile libraries for each of the different interfaces, though we only support o32 at present. To do it fully we would have to write a wrapper script which runs the specified ABI version of Python ... oh, but then we could modify the PYTHONPATH to reflect the differences. Has there been a proposal for how to distribute/manage/install distributions with multiple architectures? The best I can think of for now is to modify site.py to add os.path.join(prefix, "lib", "python" + sys.version[:3], "site-packages", sys.platform) to the sitedirs list. Does this make sense, and should it be added to 1.5.2 (or discussed more on c.l.py)? Andrew dalke@bioreason.com BTW, here's how SGI's java, which is only for 32 bits, manages things.
"java" is actuall a shell script containing the following three snippets: # use -n32 binaries by default if [[ $SGI_ABI = -32 ]] then export JAVA_N32=0 elif [[ $SGI_ABI = -o32 ]] then export JAVA_N32=0 else export JAVA_N32=1 fi case $a in -32) JAVA_N32=0 shift ;; -o32) JAVA_N32=0 shift ;; -n32) JAVA_N32=1 shift ;; if [ $JAVA_N32 = 1 ] then check_path $LD_LIBRARYN32_PATH "LD_LIBRARYN32_PATH" if [ -z "$LD_LIBRARYN32_PATH" ] then if [ -z "$LD_LIBRARY_PATH" ] then LD_LIBRARYN32_PATH=$JAVA_HOME/lib32/sgi/$THREADS_TYPE else check_path $LD_LIBRARY_PATH "LD_LIBRARY_PATH" LD_LIBRARYN32_PATH="$JAVA_HOME/lib32/sgi/$THREADS_TYPE:$LD_LIBRARY_PATH" fi else LD_LIBRARYN32_PATH="$JAVA_HOME/lib32/sgi/$THREADS_TYPE:$LD_LIBRARYN32_PATH" fi export LD_LIBRARYN32_PATH prog=$JAVA_HOME/bin32/sgi/${THREADS_TYPE}/${progname} else check_path $LD_LIBRARY_PATH "LD_LIBRARY_PATH" if [ -z "$LD_LIBRARY_PATH" ] then LD_LIBRARY_PATH=$JAVA_HOME/lib/sgi/$THREADS_TYPE else LD_LIBRARY_PATH="$JAVA_HOME/lib/sgi/$THREADS_TYPE:$LD_LIBRARY_PATH" fi export LD_LIBRARY_PATH prog=$JAVA_HOME/bin/sgi/${THREADS_TYPE}/${progname} fi From Fred L. Drake, Jr." References: <36FFFBD1.D7944116@bioreason.com> Message-ID: <14079.65438.525050.414004@weyr.cnri.reston.va.us> Andrew Dalke writes: > 1) > What is the correct location for "pure" python module installations? > > $(prefix)/lib/python$VERSION/site-packages > or > $(prefix)/lib/site-python site-python/ is the original location, but never had a platform-specific counterpart. site-packages/ was intended to solve this, which is why it comes in both flavors, and deals with version- related issues. In general, 100% Python packages should be installed in site-packages/, because not all packages are sure to work across major version changes: it's too easy to rely on the bahavior of bugs in the library. 
;-( This results in the safest (most conservative) installation and allows for faster detection on libraries which are not available for the Python version: ImportError is harder to misinterpret than buggy behavior. Only use site-python/ if you really are confident that your package will stand the test of Python interpreter updates. Unless there is a major problem with diskspace, just don't do this! Packages that contain native code should install in $(exec_prefix)/lib/python$VERSION/site-packages/. > As I understand it now, the python specific .so files need to be > on the PYTHONPATH, which currently contains no information about This varies by platform, but I don't think the various binary flavors for SGIs are indicated; the directories on sys.path which are platform-specific are either computed from $exec_prefix or contain Python modules under $prefix that are only meaningful for the platform. I presume each of these platform variations would get a different $exec_prefix, or the distinction would be hidden at a lower level (such as mounting a filesystem at a common point based on binary flavor; not hard with NFS). -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From dalke@bioreason.com Mon Mar 29 22:54:17 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Mon, 29 Mar 1999 14:54:17 -0800 Subject: [Distutils] install location(s) References: <36FFFBD1.D7944116@bioreason.com> <14079.65438.525050.414004@weyr.cnri.reston.va.us> Message-ID: <37000499.8F2D22B8@bioreason.com> Fred Drake said: > In general, 100% Python packages should be installed in > site-packages/, Okay, I'll follow that guideline. > I presume each of these platform variations would get a different > $exec_prefix, or the distinction would be hidden at a lower level > (such as mounting a filesystem at a common point based on binary > flavor; not hard with NFS). Thinking about your comment some more, my question is much less of an issue than I had thought. 
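[As an aside, in a present-day interpreter the prefix/exec_prefix split Fred describes is visible through the sysconfig module; the module and these path-scheme names postdate this thread, so this is strictly a modern sketch:]

```python
import sysconfig

# 'purelib' is where 100% Python packages land (under prefix);
# 'platlib' is for packages containing native code (under exec_prefix).
print("pure:", sysconfig.get_path("purelib"))
print("plat:", sysconfig.get_path("platlib"))
```

[On many single-architecture installations the two paths coincide, but the distinction is exactly the one being discussed here.]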
If we (Bioreason) distribute binary libraries and want to be specific to the given architecture, we can install under the existing python tree, or somewhere else. If we install under the existing "site-packages" directory, then the sysadmin has already figured out how to handle the differences between OSes (eg, by having distinct installs or with the NFS mount trick). And there's no way we can predict that method. If we distribute as our own directory tree, then the PYTHONPATH will already have to be modified, so we can say: Add "source /blah/bling/blang/blang.csh" to your ".cshrc" and then define "blang.csh" as something like:

setenv OUR_PACKAGE /blah/bling/blang
set arch=`$OUR_PACKAGE/getarch.sh`
set libdir=$OUR_PACKAGE/lib/$arch
if ${?PYTHONPATH} then
    setenv PYTHONPATH $PYTHONPATH:$OUR_PACKAGE/python:$libdir
else
    setenv PYTHONPATH $OUR_PACKAGE/python:$libdir
endif

The only time this is an issue is if the sysadmin doesn't already have a mechanism for installing multiple architectures, and there's no way that can be mandated. (For example, I think we'll end up internally choosing libraries by setting up our own PYTHONPATH as needed for each architecture, and we'll be able to specify our own criterion to distinguish between them in ways that the normal Python install *cannot* discern.) Andrew dalke@bioreason.com From gward@cnri.reston.va.us Tue Mar 30 01:53:13 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Mon, 29 Mar 1999 20:53:13 -0500 Subject: [Distutils] generating pyc and pyo files In-Reply-To: <36FEDE06.9A77C2BF@bioreason.com>; from Andrew Dalke on Sun, Mar 28, 1999 at 05:57:26PM -0800 References: <36FEDE06.9A77C2BF@bioreason.com> Message-ID: <19990329205312.A7865@cnri.reston.va.us> Quoth Andrew Dalke, on 28 March 1999: > When should the .pyc and .pyo files be generated during the > install process, in the "build" directory or the "install" one? Sounds like everyone is in favour of compiling at build time: good.
Nobody mentioned my reason for favouring this, which is simple: installation should consist of nothing more than copying files and (possibly) changing modes and ownerships. All files that will be installed should be generated at build time. This makes lots of things easier, notably: installation itself; updating the mythical database of installed files; and creating "built distributions" such as RPM. Also, if you check the code, you'll note that I don't use the 'compileall' module, but rather explicitly follow the list of modules to build. Being able to catch errors didn't occur to me, but it's one good reason. (And Andrew's patch probably won't make it into versions 1.4 through 1.5.1, which I would still like to support.) I think I just did it that way because I don't like ceding control over which files are processed to an external entity. (You'll note that distutils supplies its own 'copy_tree()' function, for basically the same reason.) Would anyone interested in error handling care to look into what happens when 'compile' fails? Doesn't look like I've done anything in particular to handle it (see distutils/command/build_py.py, towards the bottom of the 'run()' method) -- I probably blithely assumed that it would raise an exception like most IO routines do. Wow, a thread where everybody agrees... we must be on to something. Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Tue Mar 30 02:02:14 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Mon, 29 Mar 1999 21:02:14 -0500 Subject: [Distutils] install location(s) In-Reply-To: <14079.65438.525050.414004@weyr.cnri.reston.va.us>; from Fred L.
Drake on Mon, Mar 29, 1999 at 05:33:02PM -0500 References: <36FFFBD1.D7944116@bioreason.com> <14079.65438.525050.414004@weyr.cnri.reston.va.us> Message-ID: <19990329210213.B7865@cnri.reston.va.us> Quoth Fred L. Drake, on 29 March 1999: > site-python/ is the original location, but never had a > platform-specific counterpart. site-packages/ was intended to solve > this, which is why it comes in both flavors, and deals with version- > related issues. > In general, 100% Python packages should be installed in > site-packages/, because not all packages are sure to work across major > version changes: it's too easy to rely on the bahavior of bugs in the > library. ;-( This results in the safest (most conservative) > installation and allows for faster detection on libraries which are > not available for the Python version: ImportError is harder to > misinterpret than buggy behavior. I've sorta been wondering about that myself -- I've had to resort to dissecting Python Makefiles, doing multiple test installations with slight parameter tweaks, and taking careful notes on the whole process to try to figure out what's happening and what Distutils should try to emulate/enforce/whatever. One thing that bugs me: there doesn't seem to be an elegant way to have multiple sub-versions of Python installed on the same machine, eg. 1.5.1 and 1.5.2b2 (or whatever the latest alpha/beta is at a given time). I have taken to setting prefix=/usr/local/python-1.5.1 and prefix=/usr/local/python-1.5.2b2 respectively, and screwing around with symlinks in /usr/local/bin to get things the way I want them. Is this the best anyone has come up with? It would be nice to subdivide /usr/local/lib/python1.5, but the fact that Python figures out sys.path at runtime seems to preclude this. Are my impressions correct? 
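[Greg's impression here is essentially right: sys.path is computed at interpreter startup from sys.prefix and sys.exec_prefix, which is why parallel installations need parallel prefixes. A quick modern-Python way to see the derivation (shown purely as an illustration):]

```python
import sys

print("prefix:", sys.prefix)
# Entries derived from the install prefix -- each installed interpreter
# rebuilds this list for its own prefix when it starts up, so two
# versions under two prefixes never see each other's libraries.
for entry in sys.path:
    if entry.startswith((sys.prefix, sys.exec_prefix)):
        print("derived:", entry)
```

[Symlink juggling in /usr/local/bin then only has to select which interpreter binary runs; the library lookup follows automatically.]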
Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From dalke@bioreason.com Tue Mar 30 02:01:47 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Mon, 29 Mar 1999 18:01:47 -0800 Subject: [Distutils] generating pyc and pyo files References: <36FEDE06.9A77C2BF@bioreason.com> <19990329205312.A7865@cnri.reston.va.us> Message-ID: <3700308B.7451FB0C@bioreason.com> I just did a scan of Greg's "build_py" which caused me to recheck compileall. Is it true that the only way to generate .pyo files is to rerun python with -O? Looks like things are that way, so I'll need to change things in my Makefiles to generate both sets of compiled files. > Wow, a thread where everybody agrees... we must be on to > something. Umm... Umm... I disagree with that ! Phew, the universe has regained some stability :) Andrew From gward@cnri.reston.va.us Tue Mar 30 02:35:13 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Mon, 29 Mar 1999 21:35:13 -0500 Subject: [Distutils] Compiler abstraction model Message-ID: <19990329213512.C7865@cnri.reston.va.us> Hi all -- I've finally done some thinking and scribbling on how to build extensions -- well, C/C++ extensions for CPython. Java extensions for JPython will have to wait, but they are definitely looming on the horizon as something Distutils will have to handle. Anyways, here are the conclusions I've arrived at. * Stick with C/C++ for now; don't worry about other languages (yet). That way we can be smart about C/C++ things like preprocessor tokens and macros, include directories, shared vs static libraries, source and object files, etc. * At the highest level, we should just be able to say "I know nothing, just give me a compiler object". This implies a factory function returning instances of concrete classes derived from an abstract CCompiler class.
These compiler objects must know how to:

- compile .c -> .o (or local equivalent)
- compile multiple .c's to matching .o's
- be able to define/undefine preprocessor macros/tokens
- be able to supply preprocessor search directories
- link multiple .o's to static library (libfoo.a, or local equiv.)
- link multiple .o's to shared library (libfoo.so, or local equiv.)
- link multiple .o's to shared object (foo.so, or local equiv.)
- for all link steps:
  + be able to supply explicit libraries (/foo/bar/libbaz.a)
  + be able to supply implicit libraries (-lbaz)
  + be able to supply search directories for implicit libraries
- do all this with timestamp-based dependency analysis (non-trivial because it requires analyzing header dependencies!)

Linking to static/shared libraries and dependency analysis are optional for now; everything else is required to build C/C++ extensions for Python. (At least that's my impression!) "Local equivalent" is meant to encompass different filenames for C++ (eg. .C -> .o) and different operating systems/compilers (eg. .c -> .obj, multiple .obj's to foo.dll or foo.lib) BIG QUESTION: I know this will work on Unix, and from my distant recollections of past work on other systems, it should work on MS-DOS and VMS too. I gather that Windows is pretty derivative of MS-DOS, so will this model work for Windows compilers too? Do we have to worry about Windows compilers other than VC++? But I have *no clue* about Macintosh compilers -- presumably somebody "out there" (not necessarily on this SIG, but I hope so!) knows how to compile Python on the Mac, so hopefully it's possible to compile Python extensions on the Mac. But will this compiler abstraction model work there? Brushing that moment of self-doubt aside, here's a proposed interface for CCompiler and derived classes.
define_macro (name [, value])
    define a preprocessor macro or token; this will affect all
    invocations of the 'compile()' method

undefine_macro (name)
    undefine a preprocessor macro or token

add_include_dir (dir)
    add 'dir' to the list of directories that will be searched by the
    preprocessor for header files

set_include_dirs ([dirs])
    reset the list of preprocessor search directories; 'dirs' should be
    a list or tuple of directory names; if not supplied, the list is
    cleared

compile (source, define=macro_list, undef=names, include_dirs=dirs)
    compile source file(s). 'source' may be a sequence of source
    filenames, all of which will be compiled, or a single filename to
    compile. The optional 'define', 'undef', and 'include_dirs' named
    parameters all augment the lists set up by the above four methods.
    'macro_list' is a list of either 2-tuples (macro_name, value) or
    bare macro names. 'names' is a list of macro names, and 'dirs' a
    list of directories.

add_lib (libname)
    add a library name to the list of implicit libraries ("-lfoo") to
    link with

set_libs ([libnames])
    reset the list of implicit libraries (or clear if 'libnames' not
    supplied)

add_lib_dir (dir)
    add a directory to the list of library search directories
    ("-L/foo/bar/baz") used when we link

set_lib_dirs ([dirs])
    reset (or clear) the list of library search directories

link_shared_object (objects, shared_object, libs=libnames, lib_dirs=dirs)
    link a set of object files together to create a shared object file.
    The optional 'libs' and 'lib_dirs' parameters only augment the
    lists set up by the previous four methods.

Things to think about: should there be explicit support for "explicit libraries" (eg. where you put "/foo/bar/libbaz.a" on the command line instead of trusting "-lbaz" to figure it out)? I don't think we can expect the caller to put them in the 'objects' list, because the filenames are too system-dependent.
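As a concrete reading of the interface above, here is a minimal Python sketch. It is illustrative only: the UnixCCompiler command-line rendering, the new_compiler() factory, and the choice to return the argv list instead of spawning the compiler are all assumptions, not actual Distutils code.

```python
# Illustrative sketch of the proposed CCompiler interface -- NOT real
# Distutils code. For clarity, compile() and link_shared_object() just
# return the command line they would run instead of spawning a compiler.

class CCompiler:
    """Abstract base: accumulates macros, include dirs, and libraries."""
    def __init__(self):
        self.macros = []        # list of (name, value); value None = bare
        self.include_dirs = []
        self.libs = []
        self.lib_dirs = []

    def define_macro(self, name, value=None):
        self.macros.append((name, value))

    def undefine_macro(self, name):
        self.macros = [m for m in self.macros if m[0] != name]

    def add_include_dir(self, dir):
        self.include_dirs.append(dir)

    def set_include_dirs(self, dirs=None):
        self.include_dirs = list(dirs) if dirs else []

    def add_lib(self, libname):
        self.libs.append(libname)

    def set_libs(self, libnames=None):
        self.libs = list(libnames) if libnames else []

    def add_lib_dir(self, dir):
        self.lib_dirs.append(dir)

    def set_lib_dirs(self, dirs=None):
        self.lib_dirs = list(dirs) if dirs else []


class UnixCCompiler(CCompiler):
    """Concrete class rendering Unix-style cc command lines."""

    def compile(self, source, define=(), undef=(), include_dirs=()):
        sources = [source] if isinstance(source, str) else list(source)
        macros = list(self.macros)
        for m in define:        # 2-tuples or bare macro names
            macros.append(m if isinstance(m, tuple) else (m, None))
        cmd = ["cc", "-c"]
        for name, value in macros:
            if name in undef:
                continue
            cmd.append("-D%s" % name if value is None
                       else "-D%s=%s" % (name, value))
        # per-call include dirs are prepended to the global list
        for d in list(include_dirs) + self.include_dirs:
            cmd.append("-I%s" % d)
        return cmd + sources

    def link_shared_object(self, objects, shared_object,
                           libs=(), lib_dirs=()):
        cmd = ["cc", "-shared", "-o", shared_object] + list(objects)
        for d in list(lib_dirs) + self.lib_dirs:
            cmd.append("-L%s" % d)
        for lib in list(libs) + self.libs:
            cmd.append("-l%s" % lib)
        return cmd


def new_compiler():
    """Factory: 'I know nothing, just give me a compiler object'."""
    return UnixCCompiler()   # a real factory would pick per platform
```

With this sketch, cc = new_compiler(); cc.define_macro("WITH_THREAD", 1); cc.compile("foomodule.c") yields ["cc", "-c", "-DWITH_THREAD=1", "foomodule.c"].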
My inclination, as you could probably guess, would be to add methods 'add_explicit_lib()' and 'set_explicit_libs()', and a named parameter 'explicit_libs' to 'link_shared_objects()'. Also, there would have to be methods to support creating static and shared libraries: I would call them 'link_static_lib()' and 'link_shared_lib()'. They would have the same interface as 'link_shared_object()', except the output filename would of course have to be handled differently. (To illustrate: on Unix-y systems, passing shared_object='foo' to 'link_shared_object()' would result in an output file 'foo.so'. But passing output_lib='foo' to 'link_shared_lib()' would result in 'libfoo.so', and passing it to 'link_static_lib()' would result in 'libfoo.a'. So, to all the Windows and Mac experts out there: will this cover it? Can the variations in filename conventions and compilation/link schemes all be shoved under this umbrella? Or is it back to the drawing board? Thanks for your comments! Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Tue Mar 30 02:46:26 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Mon, 29 Mar 1999 21:46:26 -0500 Subject: [Distutils] Current weaknesses Message-ID: <19990329214626.D7865@cnri.reston.va.us> Well, it's been about a week since I announced the first bundle of Distutils code. Haven't heard much back yet, so I assume that it has worked for those of you who tried it. Has anyone really dived in and started poking around the code? If so, you must have stumbled across some of the difficulties I had, including: * it doesn't seem like there's a way to control whether 'compile' generates .pyc or .pyo files * worse, it doesn't look like there's even a way to find out what 'compile' will generate! 
* the command options describing installation directories are haphazard at best. The problem is, I know pretty much what to call platform-specific library directories: "install_platlib", "install_site_platlib", and so forth. That seems in keeping with the Python Makefiles. But I don't really know what to call non-platform-specific library directories. I take solace in knowing that I am not alone; Perl's MakeMaker just calls them "INSTALLLIBDIR", "INSTALLSITELIB", and so forth, which is where my cop-out of "install_lib" and "install_site_lib" came from. But this isn't really satisfactory... anyone got better ideas? * how do we handle copying file metadata under Mac OS? do the copy routines in distutils.util work under Windows as well as they do under Unix? (the code is mostly stolen from the standard shutil module, so if it works then my copying stuff should) * how should we deal with "wildcard" listing of modules in setup.py? Even for a moderate sized distribution like Distutils, it's already obvious that explicitly listing every module in the distribution is a no-go. See the comments in setup.py for some of my thinking on this matter. More importantly, has anyone gotten deeply confused by trying to associate the code with my two-month-old design proposal? I still need to revisit that document to make sure I haven't grossly violated any of its principles (and change the principles to match the violation if so ;-), so if anybody is getting confused it's probably not just you. Hope to hear some comments soon! Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From arcege@shore.net Tue Mar 30 02:51:18 1999 From: arcege@shore.net (Michael P. 
Reilly) Date: Mon, 29 Mar 1999 21:51:18 -0500 (EST) Subject: [Distutils] generating pyc and pyo files In-Reply-To: <19990329205312.A7865@cnri.reston.va.us> from Greg Ward at "Mar 29, 99 08:53:13 pm" Message-ID: <199903300251.VAA03779@northshore.shore.net> > Quoth Andrew Dalke, on 28 March 1999: > > When should the .pyc and .pyo files be generated during the > > install process, in the "build" directory or the "install" one? > > Sounds like everyone is in favour of compiling at build time: good. > Nobody mentioned my reason for favouring this, which is simple: > installation should consist of nothing more than copying files and > (possibly) changing modes and ownerships. All files that will be > installed should be generated at build time. This makes lots of things > easier, notably: installation itself; updating the mythical database of > installed files; and creating "built distributions" such as RPM. I disagree. I see no reason to double the size of the distribution by shipping redundant files (on average, a .pyc is 78.786% of a .py, based on the Python 1.5.1 distribution; .pyo is 88.148%). Permissions and ownerships cannot be handled at build time. And as the other thread (about placement of distribution files) has stated, it is something the sys admins and installers will have to handle - and can override. If the person is not the sys admin, s/he will have to talk to the sys admin to open global areas, or deal with creating their own areas. Making pyc/pyo files is trivial and unnecessary (for shipping). There is already an installation step; it's nothing to add a compileall and chownall/chmodall step too. [snip] > Wow, a thread where everybody agrees... we must be on to something. Not quite, I'm just getting my new house straightened out is all. > Greg -Arcege -- ------------------------------------------------------------------------ | Michael P. Reilly, Release Engineer | Email: arcege@shore.net | | Salem, Mass.
USA 01970 | | ------------------------------------------------------------------------ From dalke@bioreason.com Tue Mar 30 05:45:42 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Mon, 29 Mar 1999 21:45:42 -0800 Subject: [Distutils] Compiler abstractiom model References: <19990329213512.C7865@cnri.reston.va.us> Message-ID: <37006506.3132023D@bioreason.com> Hey Greg, I can see some problems with this compilation model even for unix machines: C++ template instantiation For at least some compilers, the compilation flags must be passed to the linker, because the linker instantiates templated code during a "pre-link" step so needs to know the right compilation options. compiler flags Where do I stick "-g" or "-O" in the "compile" function? (Or "-ansi", or for our SGIs, "-o32" ?) Or will you extract these from the Python compilation info? In which case, getting the values for a CPPCompiler would be tricky. include directories As clarification, the "add_include_dir" is for the list of directories used for *all* compilations while the compile(... include_dirs=dirs) is the list needed for just the given source file? I take it the compile() include dirs will be listed first, or is it used instead? Is there any way to get the list of include files (eg, the initial/default list)? lib information Repeat some of the comments from "include directories" > passing shared_object='foo' to 'link_shared_object()' would result > in an output file 'foo.so' As I recall, not all unix-y machines have .so for their shared library extensions. Eg, looking at the Python "makesetup" script, it seems some machines use ".sl". I don't think Python exports this information. I believe at times the order of the -l and -L terms can be important, but I'm not sure. Eg, I think the following -L/home/usa/lib -lfootball -L/home/everyone_else/lib -lfootball lets me do both (American) football -- as in Superbowl -- and soccer (football) -- as in World Cup. 
Whereas -L/home/usa/lib -L/home/everyone_else/lib -lfootball -lfootball means I link with the same library twice. I think. Andrew dalke@bioreason.com From da@ski.org Tue Mar 30 06:03:00 1999 From: da@ski.org (David Ascher) Date: Mon, 29 Mar 1999 22:03:00 -0800 (Pacific Standard Time) Subject: [Distutils] Compiler abstractiom model In-Reply-To: <19990329213512.C7865@cnri.reston.va.us> Message-ID: On Mon, 29 Mar 1999, Greg Ward wrote: > * At the highest level, we should just be able to say "I know nothing, > just give me a compiler object". This implies a factory function > returning instances of concrete classes derived from an abstract > CCompiler class. These compiler objects must know how to: > - compile .c -> .o (or local equivalent) > - compile multiple .c's to matching .o's > - be able to define/undefine preprocessor macros/tokens > - be able to supply preprocessor search directories > - link multiple .o's to static library (libfoo.a, or local equiv.) > - link multiple .o's to shared library (libfoo.so, or local equiv.) > - link multiple .o's to shared object (foo.so, or local equiv.) > - for all link steps: > + be able to supply explicit libraries (/foo/bar/libbaz.a) > + be able to supply implicit libraries (-lbaz) > + be able to supply search directories for implicit libraries > - do all this with timestamp-based dependency analysis > (non-trivial because it requires analyzing header dependencies!) On windows, it is sometimes needed to specify other files which aren't .c files, but .def files (possibly all can be done with command line options, but might as well build this in). I don't know how these should fit in... 
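For readers who have not met them: a .def (module-definition) file is a small plain-text input to the Microsoft linker that, among other things, names the symbols a DLL exports. A minimal sketch for a hypothetical extension module 'foo' (whose init function the interpreter looks up as 'initfoo') might be:

```
LIBRARY foo
EXPORTS
    initfoo
```

The same effect can be had with a linker command-line option or with compiler-specific declarations in the C source; the .def route keeps the source itself portable, which is presumably why it would be worth building in.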
> add_include_dir (dir) > add 'dir' to the list of directories that will be searched by > the preprocessor for header files > set_include_dirs ([dirs]) > reset the list of preprocessor search directories; 'dirs' should > be a list or tuple of directory names; if not supplied, the list > is cleared Why not expose a list object and have the user modify it?

    obj.includes.append(dir)
    obj.includes.insert(3, dir)
    obj.includes.extend([dir1, dir2])

> add_lib (libname) > add a library name to the list of implicit libraries ("-lfoo") > to link with > set_libs ([libnames]) > reset the list of implicit libraries (or clear if 'libnames' > not supplied) > add_lib_dir (dir) > add a directory to the list of library search directories > ("-L/foo/bar/baz") used when we link > set_lib_dirs ([dirs]) > reset (or clear) the list of library search directories Idem. > Things to think about: should there be explicit support for "explicit > libraries" (eg. where you put "/foo/bar/libbaz.a" on the command line > instead of trusting "-lbaz" to figure it out)? Yes. In general, I think it's not a bad idea to give control over the command line -- there are too many weird compilers out there with strange options, syntaxes, etc. --david From dalke@bioreason.com Tue Mar 30 06:54:29 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Mon, 29 Mar 1999 22:54:29 -0800 Subject: [Distutils] generating pyc and pyo files References: <199903300251.VAA03779@northshore.shore.net> Message-ID: <37007525.EBA4C963@bioreason.com> Michael P. Reilly said: > I disagree. I see no reason to double the size of the > distribution by shipping redundant files (on average, a .pyc > is 78.786% of a .py, based on the Python 1.5.1 distribution; > .pyo is 88.148%). I believe you misread the intention. Only the .py files will be shipped. Once downloaded they are unpacked into the "build" directory. The .pyo and .pyc files are generated in the build directory on the local (downloaded) machine.
Once these files are compiled locally, they are installed into the install directory. > Permissions and ownerships cannot be handled at build time. Correct. And that's why they will be handled during the install step. Andrew dalke@bioreason.com From gward@cnri.reston.va.us Tue Mar 30 13:18:54 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Tue, 30 Mar 1999 08:18:54 -0500 Subject: [Distutils] generating pyc and pyo files In-Reply-To: <37007525.EBA4C963@bioreason.com>; from Andrew Dalke on Mon, Mar 29, 1999 at 10:54:29PM -0800 References: <199903300251.VAA03779@northshore.shore.net> <37007525.EBA4C963@bioreason.com> Message-ID: <19990330081854.A8035@cnri.reston.va.us> Quoth Andrew Dalke, on 29 March 1999: > Michael P. Reilly said: > > I disagree. I see no reason to double the size of the > > distribution by shipping redundent files (on average, a .pyc > > is 78.786% of a .py, based on the Python 1.5.1 distribution; > > .pyo is 88.148%). > > I believe you misread the intention. Only the .py files > will be shipped. Once downloaded they are unpacked into the > "build" directory. The .pyo and .pyc files are generated > in the build directory on the local (downloaded) machine. > Once these files are compiled locally, they are installed into > the install directory. Well, actually, you're both right. Compiling .py files at build time will not affect *source* distributions, which is what Andrew is talking about. But it *will* affect the size of *built* distributions, which is what Michael is talking about (I assume). The whole reason I've been calling them "built distributions" instead of "binary distributions" is because of the presumed inclusion of .pyc/.pyo files. I'll have to play around a bit to see how much including .pyc's in the built distributions affects the final size of the .tar.gz or .zip (or .rpm, or whatever) file. 
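Greg's size experiment is easy to reproduce. Here is a sketch in present-day Python (the module content is just filler, and tgz_size() is a made-up helper) that tars a build tree with and without its .pyc file and compares the compressed sizes:

```python
# Sketch: compare gzipped-tar size of a build tree with and without
# compiled files. Filenames and module content are made up.

import io, os, py_compile, tarfile, tempfile

def tgz_size(root, skip_compiled):
    """Size in bytes of a .tar.gz of 'root', optionally omitting
    .pyc/.pyo files."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tf:
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                if skip_compiled and name.endswith((".pyc", ".pyo")):
                    continue
                tf.add(os.path.join(dirpath, name), arcname=name)
    return len(buf.getvalue())

with tempfile.TemporaryDirectory() as build:
    src = os.path.join(build, "example.py")
    with open(src, "w") as f:
        f.write("def double(x):\n    return x * 2\n" * 50)
    py_compile.compile(src, cfile=src + "c")       # example.pyc
    with_pyc = tgz_size(build, skip_compiled=False)
    without_pyc = tgz_size(build, skip_compiled=True)
    print("with .pyc: %d bytes, without: %d bytes"
          % (with_pyc, without_pyc))
```

The exact ratio depends on the source, but the archive with the compiled file is always the larger of the two.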
Let's see, tarring and zipping up the current Distutils 'build' directory with .pyc files looks like this: -rw-r--r-- 1 gward staff 41363 Mar 30 08:12 distutils-bdist-1.tar.gz -rw-r--r-- 1 gward staff 51629 Mar 30 08:13 distutils-bdist-1.zip and if I delete the .pyc's and try again: -rw-r--r-- 1 gward staff 20544 Mar 30 08:13 distutils-bdist-2.tar.gz -rw-r--r-- 1 gward staff 25447 Mar 30 08:13 distutils-bdist-2.zip So! Michael was almost exactly right, including the .pyc's really does double the size of the built distribution. .pyc files compress roughly as well as .py files. Does this seem like a problem to anyone else? I still want to keep installation as simple as possible -- and, more importantly, be able to trivially determine the set of files that will be installed -- but if increasing the size of built distributions really bothers you, speak up! Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Tue Mar 30 13:38:45 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Tue, 30 Mar 1999 08:38:45 -0500 Subject: [Distutils] Compiler abstractiom model In-Reply-To: <37006506.3132023D@bioreason.com>; from Andrew Dalke on Mon, Mar 29, 1999 at 09:45:42PM -0800 References: <19990329213512.C7865@cnri.reston.va.us> <37006506.3132023D@bioreason.com> Message-ID: <19990330083844.B8035@cnri.reston.va.us> Quoth Andrew Dalke, on 29 March 1999: > C++ template instantiation > For at least some compilers, the compilation flags must be > passed to the linker, because the linker instantiates templated > code during a "pre-link" step so needs to know the right > compilation options. Ouch! I *knew* there was a reason I disliked C++, I just couldn't put my finger on it... ;-) Maybe we should cop out and only handle C compilation *for now*? 
Just how many C++ Python extensions are out there now, anyways? > compiler flags > Where do I stick "-g" or "-O" in the "compile" function? > (Or "-ansi", or for our SGIs, "-o32" ?) Or will you extract > these from the Python compilation info? In which case, > getting the values for a CPPCompiler would be tricky. Generally, those things must be done when Python is compiled. Err, let me reiterate that with emphasis: ** COMPILER FLAGS ARE THE RESPONSIBILITY OF THE PYTHON BUILDER ** and Distutils will slurp them out of Python's Makefile (using Fred's distutils.sysconfig module) and use those to build extensions. Andrew, you use SGIs, so you can probably guess what kind of chaos would result if your sysadmin built Python with -o32 and you started building extension modules with -n32. And that's only the most obvious example of what can go wrong when you use compiler flags on a dynamically loaded object inconsistent with the binary that will be loading it. For that and other reasons, I'm quite leery of letting individual extension modules supply things like -ansi or -o32 -- those options should all be stolen straight from Python's Makefile. However, there probably should be a way to set debugging/optimization flags -- again, the default should definitely be to take them from Python's build, but I don't think inconsistent -g/-O will cause problems. (Anyone have evidence to the contrary?) However, this should not be in the CCompiler interface -- I was thinking it belongs in UnixCCompiler instead, because Unix C compilers are fairly consistent about allowing -g, -O, etc. Anything at the CCompiler level should be applicable to all compilers: compiler.debug = 1 # implies "cc -g" on Unix, something else on # other platforms compiler.optimize = 'none' # or 'medium' or 'high' > As clarification, the "add_include_dir" is for the list of > directories used for *all* compilations while the compile(... > include_dirs=dirs) is the list needed for just the given source > file? 
Yes; any directories supplied to 'add_include_dir()' and 'set_include_dirs()' would affect *all* compilations. Directories supplied to 'compile()' through the 'include_dirs' named parameter would be *added* to the standard list for that compilation step only. Ditto for macros, libraries, library directories, etc. > I take it the compile() include dirs will be listed first, > or is it used instead? Good point. "added" should be "prepended" above, for maximum clarity. > Is there any way to get the list of include files (eg, the > initial/default list)? Oh, probably. I just haven't documented it. ;-) I think David Ascher's idea of exposing the actual list might be nicer overall -- I'll reply to his post separately. > As I recall, not all unix-y machines have .so for their shared > library extensions. Eg, looking at the Python "makesetup" script, > it seems some machines use ".sl". I don't think Python exports > this information. I was just using '.so' as an illustration. I'll have to spend some time grovelling through Python's Makefiles and configure stuff to verify your last statement... I certainly hope that information is available, though! > I believe at times the order of the -l and -L terms can be > important, but I'm not sure. Eg, I think the following > > -L/home/usa/lib -lfootball -L/home/everyone_else/lib -lfootball > > lets me do both (American) football -- as in Superbowl -- and > soccer (football) -- as in World Cup. Whereas > > -L/home/usa/lib -L/home/everyone_else/lib -lfootball -lfootball > > means I link with the same library twice. Auuugghhh!!! This seems like a "feature" to avoid like the plague, and probably one that's not consistent across platforms. Can anyone back up Andrew's claim? I've certainly never seen this behaviour before, but then I haven't exactly gone looking for such perversion. 
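One part of this is easy to pin down: whatever a given linker does about interleaving, the -L directories are searched in the order they appear, so when two directories each hold a libfootball.a the earlier -L wins. Here is a sketch of that first-match lookup (paths hypothetical; behaviour varies by linker, e.g. GNU ld applies every -L to every -l regardless of position):

```python
# Sketch of first-match -l resolution against -L directories in
# command-line order. 'exists' is injectable so the example needs no
# real files; the paths used with it below are made up.

def resolve(args, exists):
    """Return the archive each -lNAME resolves to, scanning the -L
    directories seen so far, in order, and taking the first match."""
    search, picked = [], []
    for arg in args:
        if arg.startswith("-L"):
            search.append(arg[2:])
        elif arg.startswith("-l"):
            want = "lib%s.a" % arg[2:]
            for d in search:
                path = d + "/" + want
                if exists(path):
                    picked.append(path)
                    break
    return picked
```

Note that under this first-match model even Andrew's interleaved command would resolve the second -lfootball against /home/usa/lib, which was named first, so the trick may not behave as hoped on every linker.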
Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Tue Mar 30 13:42:35 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Tue, 30 Mar 1999 08:42:35 -0500 Subject: [Distutils] Compiler abstractiom model In-Reply-To: ; from David Ascher on Mon, Mar 29, 1999 at 10:03:00PM -0800 References: <19990329213512.C7865@cnri.reston.va.us> Message-ID: <19990330084235.C8035@cnri.reston.va.us> Quoth David Ascher, on 29 March 1999: > On windows, it is sometimes needed to specify other files which aren't .c > files, but .def files (possibly all can be done with command line options, > but might as well build this in). I don't know how these should fit in... Are they just listed in the command line like .c files? Or are they specified by a command-line option? Would you use these in code that's meant to be portable to other platforms? > > [my add_include_dir()/set_include_dirs() bureaucratic silliness] > > Why not expose a list object and have the user modify it? Duh, you're quite right. I've been doing too much Java lately. Mmmm, bondage... > Yes. In general, I think it's not a bad idea to give control over the > command line -- there are too many weird compilers out there with strange > options, syntaxes, etc. But, as I said emphatically in my last post, those sorts of things must be supplied when Python itself is built. I'm already allowing control over include directories and macros -- which are essential -- so I'm willing to throw in -g/-O stuff too. But if we allow access to arbitrary compiler flags, you can kiss portability goodbye! Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From Fred L. Drake, Jr." 
References: <36FEDE06.9A77C2BF@bioreason.com> <19990329205312.A7865@cnri.reston.va.us> <3700308B.7451FB0C@bioreason.com> Message-ID: <14080.55143.787294.552673@weyr.cnri.reston.va.us> Andrew Dalke writes: > Is it true that the only way to generate .pyo files is to > rerun python with -O? Looks like things are that way, so There's a global variable in the C code that can be set to enable optimization. When I spoke to Guido about exposing it in the parser module, he objected. His rationale was that the setting would probably change from version to version and so should not be exposed. (My response was that the grammar changed with major revisions anyway, so the parser module already tends to get affected with some regularity.) I'd be happy to expose somehow in the parser module, but I don't know that Guido won't throw out the change. ;-) For now, the best way to compile whichever flavor your process doesn't generate is to use a child process with -O set if __debug__ is true. -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From Fred L. Drake, Jr." References: <19990329213512.C7865@cnri.reston.va.us> <37006506.3132023D@bioreason.com> <19990330083844.B8035@cnri.reston.va.us> Message-ID: <14080.56174.515894.847203@weyr.cnri.reston.va.us> Andrew Dalke, on 29 March 1999, wrote: > As I recall, not all unix-y machines have .so for their shared > library extensions. Eg, looking at the Python "makesetup" script, > it seems some machines use ".sl". I don't think Python exports Greg Ward writes: > I was just using '.so' as an illustration. I'll have to spend some time > grovelling through Python's Makefiles and configure stuff to verify your > last statement... I certainly hope that information is available, Use the SO variable pulled in from the Makefile; it will be .so or .sl as appropriate. -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From Fred L. Drake, Jr." 
References: <199903300251.VAA03779@northshore.shore.net> <37007525.EBA4C963@bioreason.com> <19990330081854.A8035@cnri.reston.va.us> Message-ID: <14080.58335.756122.550790@weyr.cnri.reston.va.us> Michael P. Reilly said: > I disagree. I see no reason to double the size of the > distribution by shipping redundent files (on average, a .pyc > is 78.786% of a .py, based on the Python 1.5.1 distribution; > .pyo is 88.148%). Andrew Dalke, on 29 March 1999, writes: > "build" directory. The .pyo and .pyc files are generated > in the build directory on the local (downloaded) machine. Greg Ward writes: > Well, actually, you're both right. Compiling .py files at build time > will not affect *source* distributions, which is what Andrew is talking And I say Michael's figures are conservative; most .pyc files are larger than the .py files. (I've attached a simple script to compare the sizes; Unix only.) > Does this seem like a problem to anyone else? I still want to keep > installation as simple as possible -- and, more importantly, be able to > trivially determine the set of files that will be installed -- but if > increasing the size of built distributions really bothers you, speak up! I think a lot of people will be bothered by the increased size, whether or not we are. People with poor connectivity and archive maintainers will want reduced size. Removing the .pyc and .pyo files will help make packages less tied to Python versions as well; these files have often been obsoleted by changes between Python versions. It's simple enough to generate them during installation when we're able to run at that time. For RPMs this should be fine; I'm not sure about other package systems. For some, it may make sense to build them in the installation locations and then chmod them; this may be the case for the Solaris PKG system. (Barry, are you following this?) -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From Fred L. Drake, Jr." 
--a87wwq6rQI Content-Type: text/plain; charset=us-ascii Content-Description: message body text Content-Transfer-Encoding: 7bit Sorry, I forgot to attach the promised script. (That seems to be my standard ritual for attachments!) Anyway, the "checkpycs" script is attached, really. -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives --a87wwq6rQI Content-Type: text/plain Content-Description: check .pyc & .pyo sizes in comparison table Content-Disposition: inline; filename="checkpycs" Content-Transfer-Encoding: 7bit

#! /usr/bin/env python
# -*- Python -*-

import errno
import os
import stat
import string
import sys

VERBOSE = 0
COLUMN_WIDTHS = (40, 10, 10, 10)

s = ''
for cw in COLUMN_WIDTHS:
    if s:
        s = "%s %%%ds" % (s, cw)
    else:
        s = "%%%ds" % cw
FORMAT = s
del s, cw

def process_dirs(dirlist, verbose=VERBOSE):
    fp = os.popen("find %s -name \*.py -print" % string.join(dirlist))
    writeline("", " .py Size", ".pyc Size", ".pyo Size")
    writeline("", "---------", "---------", "---------")
    while 1:
        line = fp.readline()
        if not line:
            break
        filename = line[:-1]
        pysize = os.stat(filename)[stat.ST_SIZE] or "0"
        pycsize = pyosize = None
        if os.path.isfile(filename + "c"):
            pycsize = os.stat(filename + "c")[stat.ST_SIZE]
        if os.path.isfile(filename + "o"):
            pyosize = os.stat(filename + "o")[stat.ST_SIZE]
        if pycsize or pyosize or VERBOSE:
            writeline(filename, pysize, pycsize, pyosize)

def writeline(c1, c2, c3, c4):
    c1 = c1 or ""
    c2 = c2 or ""
    c3 = c3 or ""
    c4 = c4 or ""
    if len(c1) > COLUMN_WIDTHS[0]:
        c1 = "..." + c1[-(COLUMN_WIDTHS[0] - 3):]
    print FORMAT % (c1, c2, c3, c4)

def main():
    try:
        process_dirs(sys.argv[1:] or ["."])
    except IOError, e:
        if e.errno != errno.EPIPE:
            raise

if __name__ == "__main__":
    main()

--a87wwq6rQI-- From arcege@shore.net Tue Mar 30 15:06:34 1999 From: arcege@shore.net (Michael P.
Reilly) Date: Tue, 30 Mar 1999 10:06:34 -0500 (EST) Subject: [Distutils] generating pyc and pyo files In-Reply-To: <14080.58335.756122.550790@weyr.cnri.reston.va.us> from "Fred L. Drake" at "Mar 30, 99 09:46:55 am" Message-ID: <199903301506.KAA15848@northshore.shore.net> > Michael P. Reilly said: > > I disagree. I see no reason to double the size of the > > distribution by shipping redundent files (on average, a .pyc > > is 78.786% of a .py, based on the Python 1.5.1 distribution; > > .pyo is 88.148%). > > Andrew Dalke, on 29 March 1999, writes: > > "build" directory. The .pyo and .pyc files are generated > > in the build directory on the local (downloaded) machine. > > Greg Ward writes: > > Well, actually, you're both right. Compiling .py files at build time > > will not affect *source* distributions, which is what Andrew is talking > > And I say Michael's figures are conservative; most .pyc files are > larger than the .py files. (I've attached a simple script to compare > the sizes; Unix only.) They aren't just conservative, they are downright misleading - I mixed the ratios in the email, sorry. The 78.768% is supposed to be the size of .py to .pyc and 88.148% is .py to .pyo, not the other way around. These were averages based on the modules available thru sys.path at home (318 .pyc files, 213 .pyo files); it only included values where there were both a .py and .pyc or both .py/.pyo so the averages weren't thrown off. > > -Fred -Arcege -- ------------------------------------------------------------------------ | Michael P. Reilly, Release Engineer | Email: arcege@shore.net | | Salem, Mass. USA 01970 | | ------------------------------------------------------------------------ From Fred L. Drake, Jr." References: <14080.58335.756122.550790@weyr.cnri.reston.va.us> <199903301506.KAA15848@northshore.shore.net> Message-ID: <14080.59800.162738.80908@weyr.cnri.reston.va.us> Michael P. 
Reilly writes: > They aren't just conservative, they are downright misleading - I mixed > the ratios in the email, sorry. The 78.768% is supposed to be the size > of .py to .pyc and 88.148% is .py to .pyo, not the other way around. That makes sense. > These were averages based on the modules available thru sys.path at > home (318 .pyc files, 213 .pyo files); it only included values where > there were both a .py and .pyc or both .py/.pyo so the averages weren't That's the right approach. Sounds like I should add more summarization to my checkpycs script, to get the ratios out for each .pyc/.pyo and in summary. -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From da@ski.org Tue Mar 30 16:14:45 1999 From: da@ski.org (David Ascher) Date: Tue, 30 Mar 1999 08:14:45 -0800 (Pacific Standard Time) Subject: [Distutils] Compiler abstractiom model In-Reply-To: <19990330084235.C8035@cnri.reston.va.us> Message-ID: On Tue, 30 Mar 1999, Greg Ward wrote: > Quoth David Ascher, on 29 March 1999: > > On windows, it is sometimes needed to specify other files which aren't .c > > files, but .def files (possibly all can be done with command line options, > > but might as well build this in). I don't know how these should fit in... > > Are they just listed in the command line like .c files? Or are they > specified by a command-line option? Would you use these in code that's > meant to be portable to other platforms? Yes, no, and yes. =) E.g. Python extensions need to declare the 'exported' entry point. This can be done either by modifying the source code (bad for portable code, requires #ifdef's etc.), by specifying a command-line option, or by including a .DEF file. > But, as I said emphatically in my last post, those sorts of things must > be supplied when Python itself is built. I'm already allowing control > over include directories and macros -- which are essential -- so I'm > willing to throw in -g/-O stuff too. 
But if we allow access to > arbitrary compiler flags, you can kiss portability goodbye! Not really -- you simply need to make the consequences of messing with certain objects clear to the user, so that if s/he wants portable, s/he does X, Y and Z, but if s/he wants to distribute the code to a specific machine but with all the other machineries that distutils provides, then s/he can do so. IMHO, portable packaging will come by folks first using it to package their non-portable code because it's easier than doing it the old way. --david From M.Faassen@vet.uu.nl Tue Mar 30 17:05:07 1999 From: M.Faassen@vet.uu.nl (Martijn Faassen) Date: Tue, 30 Mar 1999 19:05:07 +0200 Subject: [Distutils] I browsed through the distutils source! Message-ID: <37010443.DEC6F942@pop.vet.uu.nl> Hi there, This is NOT major news, just a note of encouragement to Greg Ward, and making sure he knows that people are indeed looking at things: I just installed CVS on this win95 system at work, downloaded the distutils source, and read through the sources some. It looks pretty neat. I haven't actually tried *running* distutils yet on this windows box, but I'll try to get to that later this week and give you all a report on what happened. Note on installing CVS on a win95 box: be sure to set the environment variable HOME to some directory or it won't work -- this was not mentioned in any CVS docs I found; presumably in win NT, HOME is already set. Greg, anything you'd like me to examine especially? Regards, Martijn From gward@cnri.reston.va.us Tue Mar 30 19:01:03 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Tue, 30 Mar 1999 14:01:03 -0500 Subject: [Distutils] I browsed through the distutils source! 
In-Reply-To: <37010443.DEC6F942@pop.vet.uu.nl>; from Martijn Faassen on Tue, Mar 30, 1999 at 07:05:07PM +0200 References: <37010443.DEC6F942@pop.vet.uu.nl> Message-ID: <19990330140102.D8035@cnri.reston.va.us> Quoth Martijn Faassen, on 30 March 1999: > I just installed CVS on this win95 system at work, downloaded the > distutils source, and read through the sources some. It looks pretty > neat. I haven't actually tried *running* distutils yet on this windows > box, but I'll try to get to that later this week and give you all a > report on what happened. There's not much to running it; from the top distutils directory, do: ./setup.py build should work pretty much anywhere. A slightly more risky proposition is ./setup.py install which relies on distutils.sysconfig to get the installation directories; it in turn relies on finding Python's Makefiles in the usual place. I have no idea if they're even installed under Win 95 -- please let me know! > Greg, anything you'd like me to examine especially? The weak spots! Search for "XXX" in the code; I'm liberal with X-rated comments. Also check my "Current weaknesses" post from last night; see if you can correlate my opinions of current problems with the code. When it occurs to you that commands are a lot like subroutines, and then when you start to wonder why parameter passing is done backwards, then you'll be up to speed. (The whole problem of communicating options between commands was bigger than I expected. The implementation isn't overly complicated, but I think it'll take some bouncing around across various classes and thinking about the alternatives before it becomes apparent why I did it that way.) Oh, the big reason I put the code up now is this: I think it's close to being at a state where development can be in parallel. The basic framework is in place, all that's missing is a lot of commands to do the work. 
The beginnings of building and installation are in place, and I've started thinking about the 'build_ext' command -- witness the thread on compiler abstraction models. But the "dist" and "bdist" commands -- to create source and built distributions -- are important and could easily be done by someone else. Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From dalke@bioreason.com Tue Mar 30 21:10:04 1999 From: dalke@bioreason.com (Andrew Dalke) Date: Tue, 30 Mar 1999 13:10:04 -0800 Subject: [Distutils] generating pyc and pyo files References: <199903300251.VAA03779@northshore.shore.net> <37007525.EBA4C963@bioreason.com> <19990330081854.A8035@cnri.reston.va.us> Message-ID: <37013DAC.DF7C48A6@bioreason.com> Greg Ward said: > Compiling .py files at build time will not affect *source* > distributions, which is what Andrew is talking about. But it > *will* affect the size of *built* distributions, which is what > Michael is talking about (I assume). The whole reason I've > been calling them "built distributions" instead of "binary > distributions" is because of the presumed inclusion of .pyc/.pyo > files. You can tell I'm used to working from source :) As mentioned before, I'm looking into the autoconf/automake process. They have a concept of DESTDIR which is prefixed to the install path as in $(DESTDIR)$(bindir) (the default value of DESTDIR is ""). In theory, I can do a make install DESTDIR="blib/" and it will install my package underneath: blib/usr/local/lib/python1.5/site-packages/kwyjibo/... and my executable scripts under blib/usr/local/bin/melissa Wouldn't it be possible to have the distribution program figure out what to do based on this tree, including knowing to make .pyo and .pyc files from .py files, if they exist? Of course, permissions are problematical here as well.
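[Editor's sketch] The staged-tree idea above -- install under a DESTDIR such as "blib/", then let the packaging tool derive the rest -- can be illustrated with a short walk-and-compile helper. This is a hedged sketch in present-day Python, not a distutils API; the function name `compile_staged_tree` and the "blib" directory convention are taken from the email, everything else is illustrative.

```python
import os
import py_compile

def compile_staged_tree(destdir):
    """Byte-compile every .py file found under a DESTDIR-style staging
    tree (e.g. "blib/"), so the built distribution can ship .pyc files
    alongside the sources.  Returns the paths of the generated files.
    Hypothetical helper for illustration, not a distutils API."""
    compiled = []
    for dirpath, _dirnames, filenames in os.walk(destdir):
        for name in filenames:
            if name.endswith(".py"):
                src = os.path.join(dirpath, name)
                # doraise=True turns syntax errors in staged sources
                # into exceptions instead of messages on stderr.
                compiled.append(py_compile.compile(src, doraise=True))
    return compiled
```

After something like `make install DESTDIR=blib`, calling `compile_staged_tree("blib")` would give the packager a complete list of byte-compiled files to include in the built distribution.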
The package generation program could have a hook which lets the distributor add some python code to adjust things accordingly. Also, I could use a special INSTALL program which doesn't actually set the permissions but instead logs them to a file for the packager (hook) to use. Andrew dalke@bioreason.com From sanner@scripps.edu Tue Mar 30 23:06:37 1999 From: sanner@scripps.edu (Michel Sanner) Date: Tue, 30 Mar 1999 15:06:37 -0800 Subject: [Distutils] I browsed through the distutils source! In-Reply-To: Greg Ward "Re: [Distutils] I browsed through the distutils source!" (Mar 30, 2:01pm) References: <37010443.DEC6F942@pop.vet.uu.nl> <19990330140102.D8035@cnri.reston.va.us> Message-ID: <990330150637.ZM81096@noah.scripps.edu> Hi Greg, sorry, didn't have time to look at this yet .. just a question/suggestion What I really like about Makefiles is the -n mode where I can see what it would do before it actually does it. Is such an option available in setup.py and if not could it be added? Cheers -Michel On Mar 30, 2:01pm, Greg Ward wrote: > > report on what happened. > > There's not much to running it; from the top distutils directory, do: > > ./setup.py build > > should work pretty much anywhere. A slightly more risky proposition is > > ./setup.py install > > which relies on distutils.sysconfig to get the installation directories; > it in turn relies on finding Python's Makefiles in the usual place. I > have no idea if they're even installed under Win 95 -- please let me > know! > > > Greg, anything you'd like me to examine especially? > > The weak spots! Search for "XXX" in the code; I'm liberal with X-rated > comments. Also check my "Current weaknesses" post from last night; see > if you can correlate my opinions of current problems with the code. > > When it occurs to you that commands are a lot like subroutines, and then > when you start to wonder why parameter passing is done backwards, then > you'll be up to speed.
(The whole problem of communicating options > between commands was bigger than I expected. The implementation isn't > overly complicated, but I think it'll take some bouncing around across > various classes and thinking about the alternatives before it becomes > apparent why I did it that way.) > > Oh, the big reason I put the code up now is this: I think it's close to > being at a state where development can be in parallel. The basic > framework is in place, all that's missing is a lot of commands to do the > work. The beginnings of building and installation are in place, and > I've started thinking about the 'build_ext' command -- witness the > thread on compiler abstraction models. But the "dist" and "bdist" > commands -- to create source and built distributions -- are important > and could easily be done by someone else. > > Greg > -- > Greg Ward - software developer gward@cnri.reston.va.us > Corporation for National Research Initiatives > 1895 Preston White Drive voice: +1-703-620-8990 x287 > Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG@python.org > http://www.python.org/mailman/listinfo/distutils-sig > >-- End of excerpt from Greg Ward From MHammond@skippinet.com.au Tue Mar 30 23:48:36 1999 From: MHammond@skippinet.com.au (Mark Hammond) Date: Wed, 31 Mar 1999 09:48:36 +1000 Subject: [Distutils] Compiler abstraction model In-Reply-To: Message-ID: <001301be7b07$d4f499a0$0801a8c0@bobcat> > > Quoth David Ascher, on 29 March 1999: > > > On windows, it is sometimes needed to specify other files > which aren't .c > > > files, but .def files (possibly all can be done with > command line options, > > > but might as well build this in). I don't know how these > should fit in... > > > > Are they just listed in the command line like .c files? Or are they > > specified by a command-line option?
Would you use these in > code that's > > meant to be portable to other platforms? > > Yes, no, and yes. =) > > E.g. Python extensions need to declare the 'exported' entry > point. This > can be done either by modifying the source code (bad for > portable code, > requires #ifdef's etc.), by specifying a command-line option, or by > including a .DEF file. Just to follow up on this, many obscure options may need to be passed to the Windows linker, but they do require their own option - they are not passed as normal files. Examples are /DEF: - the .def file David mentions, /NOD:lib - to prevent a default library from being linked, /implib:lib to name the built .lib file, etc. It wasn't clear from David's post that these need their own option, and are not passed as a simple filename param to the linker.... Building from what Greg said, I agree that _certain_ command-line params can be mandated - they are designed not to be configurable, as messing with them will likely break the build. But there also needs to be a secondary class that is virtually unconstrained. [Sorry - as usual, speaking having kept only a quick eye on the posts, and not having seen the latest code drop...] Mark. From dubois1@llnl.gov Wed Mar 31 05:46:18 1999 From: dubois1@llnl.gov (Paul F. Dubois) Date: Tue, 30 Mar 1999 21:46:18 -0800 Subject: [Distutils] Compiler abstraction model Message-ID: <001f01be7b39$cc993ea0$f4160218@c1004579-c.plstn1.sfba.home.com> Just to chime in about the unmentionable, I have a tool for connecting Fortran and Python (as do others; mine isn't ready for the light of day yet) and unfortunately the existing Setup scheme makes me compile my Fortran separately in a library. I don't have any way to piggyback on what Python has learned about the platform. This is too bad. I am sure it isn't the Python community's job to solve the Fortran 90 make problem but there are days when I dream about it. The "right" object-oriented approach should allow some way to extend the system, e.g.
adding suffixes or file names together with info about how to compile them. -----Original Message----- From: Mark Hammond To: distutils-sig@python.org Date: Tuesday, March 30, 1999 3:50 PM Subject: RE: [Distutils] Compiler abstraction model >> > Quoth David Ascher, on 29 March 1999: >> > > On windows, it is sometimes needed to specify other files >> which aren't .c >> > > files, but .def files (possibly all can be done with >> command line options, >> > > but might as well build this in). I don't know how these >> should fit in... >> > >> > Are they just listed in the command line like .c files? Or >are they >> > specified by a command-line option? Would you use these in >> code that's >> > meant to be portable to other platforms? >> >> Yes, no, and yes. =) >> >> E.g. Python extensions need to declare the 'exported' entry >> point. This >> can be done either by modifying the source code (bad for >> portable code, >> requires #ifdef's etc.), by specifying a command-line option, >or by >> including a .DEF file. > >Just to follow up on this, many obscure options may need to be >passed to the Windows linker, but they do require their own >option - they are not passed as normal files. Examples are >/DEF: - the .def file David mentions, /NOD:lib - to prevent a >default library from being linked, /implib:lib to name the built >.lib file, etc. > >It wasn't clear from David's post that these need their own >option, and are not passed as a simple filename param to the >linker.... > >Building from what Greg said, I agree that _certain_ command-line >params can be mandated - they are designed not to be >configurable, as messing with them will likely break the build. But >there also needs to be a secondary class that is virtually >unconstrained. > >[Sorry - as usual, speaking having kept only a quick eye on the >posts, and not having seen the latest code drop...] > >Mark.
> > >_______________________________________________ >Distutils-SIG maillist - Distutils-SIG@python.org >http://www.python.org/mailman/listinfo/distutils-sig > > From gstein@lyra.org Wed Mar 31 09:39:19 1999 From: gstein@lyra.org (Greg Stein) Date: Wed, 31 Mar 1999 01:39:19 -0800 Subject: [Distutils] Compiler abstraction model References: <19990329213512.C7865@cnri.reston.va.us> <37006506.3132023D@bioreason.com> <19990330083844.B8035@cnri.reston.va.us> Message-ID: <3701ED47.13018A08@lyra.org> Greg Ward wrote: > > Quoth Andrew Dalke, on 29 March 1999: > > C++ template instantiation > > For at least some compilers, the compilation flags must be > > passed to the linker, because the linker instantiates templated > > code during a "pre-link" step so needs to know the right > > compilation options. > > Ouch! I *knew* there was a reason I disliked C++, I just couldn't put > my finger on it... ;-) Maybe we should cop out and only handle C > compilation *for now*? Just how many C++ Python extensions are out > there now, anyways? Enough that you can't simply punt it. For example, most of the win32 extensions are actually C++ stuff. LLNL also uses C++, I believe. > ... > > Is there any way to get the list of include files (eg, the > > initial/default list)? > > Oh, probably. I just haven't documented it. ;-) I think David Ascher's > idea of exposing the actual list might be nicer overall -- I'll reply to > his post separately. This is V1. Keep it dirt simple. Don't create a bazillion APIs. Expose the stuff, let people fill it in, and go. Even better: rather than doing the configuration thru code, do it declaratively where you can. e.g. a file that can be read by ConfigParser.py Also: in your original email, you talked about "factories" and "abstract classes" and crap like that. What are you building? Who needs a factory? Just instantiate some class and go. Python is easy to change and to rewrite.
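[Editor's sketch] Greg Stein's suggestion of driving the configuration declaratively through a ConfigParser-readable file might look like the following. The section and option names here are invented for illustration (they are not an actual distutils format), and the module is spelled `configparser` on present-day Pythons.

```python
import configparser

# A hypothetical declarative build description.  Section and option
# names are illustrative only; nothing here is a real distutils schema.
SAMPLE = """
[compiler]
include_dirs = /usr/local/include
macros = WITH_THREAD=1
debug = yes

[link]
library_dirs = /usr/local/lib
libraries = football
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

# The build machinery would read its knobs out of the parsed file
# instead of having them wired into code.
include_dirs = parser.get("compiler", "include_dirs").split()
debug = parser.getboolean("compiler", "debug")
libraries = parser.get("link", "libraries").split()
```

The appeal of this style is exactly what Stein argues: users edit a plain file rather than subclass anything, and defaults can be filled in without forcing a particular object model on them.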
I really dislike seeing people get all wrapped up in a huge design session to create the ultimate API when they'd be better served just writing some code and running with it. Change it later when it becomes necessary -- change is cheap in Python. > ... > > I believe at times the order of the -l and -L terms can be > > important, but I'm not sure. Eg, I think the following > > > > -L/home/usa/lib -lfootball -L/home/everyone_else/lib -lfootball > > > > lets me do both (American) football -- as in Superbowl -- and > > soccer (football) -- as in World Cup. Whereas > > > > -L/home/usa/lib -L/home/everyone_else/lib -lfootball -lfootball > > > > means I link with the same library twice. > > Auuugghhh!!! This seems like a "feature" to avoid like the plague, and > probably one that's not consistent across platforms. Can anyone back up > Andrew's claim? I've certainly never seen this behaviour before, but > then I haven't exactly gone looking for such perversion. Try linking against Oracle sometime. You're *required* to list a library multiple times. It's really nasty -- they've created all kinds of inter-dependencies between their libraries. Cheers, -g -- Greg Stein, http://www.lyra.org/ From gstein@lyra.org Wed Mar 31 09:42:19 1999 From: gstein@lyra.org (Greg Stein) Date: Wed, 31 Mar 1999 01:42:19 -0800 Subject: [Distutils] Compiler abstraction model References: Message-ID: <3701EDFB.5C2CC8DB@lyra.org> David Ascher wrote: >... > > But, as I said emphatically in my last post, those sorts of things must > > be supplied when Python itself is built. I'm already allowing control > > over include directories and macros -- which are essential -- so I'm > > willing to throw in -g/-O stuff too. But if we allow access to > > arbitrary compiler flags, you can kiss portability goodbye!
> > Not really -- you simply need to make the consequences of messing with > certain objects clear to the user, so that if s/he wants portable, s/he > does X, Y and Z, but if s/he wants to distribute the code to a specific > machine but with all the other machineries that distutils provides, then > s/he can do so. > > IMHO, portable packaging will come by folks first using it to package > their non-portable code because it's easier than doing it the old way. yes! speak it, brother! Seriously: a number of things should have defaults, but there shouldn't be a reason to *force* developers/users into a particular model. As I've said in the past: if you try to do this, then they just won't use it. Developers are a finicky breed :-) It is especially true with Python: reinventing the wheel is cheap, so it happens a lot. Cheers, -g -- Greg Stein, http://www.lyra.org/ From Fred L. Drake, Jr." References: <19990329213512.C7865@cnri.reston.va.us> <37006506.3132023D@bioreason.com> <19990330083844.B8035@cnri.reston.va.us> <3701ED47.13018A08@lyra.org> Message-ID: <14082.10709.912205.701615@weyr.cnri.reston.va.us> Greg Stein writes: > Try linking against Oracle sometime. You're *required* to list a > library multiple times. It's really nasty -- they've created all kinds > of inter-dependencies between their libraries. And reading their example Makefile is enough to give even the most diehard Unix hacker hernias; I wonder how many developers they hospitalize to develop it! -Fred -- Fred L. Drake, Jr. Corporation for National Research Initiatives From gward@cnri.reston.va.us Wed Mar 31 18:50:16 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Wed, 31 Mar 1999 13:50:16 -0500 Subject: [Distutils] Compiler abstraction model In-Reply-To: <001f01be7b39$cc993ea0$f4160218@c1004579-c.plstn1.sfba.home.com>; from Paul F.
Dubois on Tue, Mar 30, 1999 at 09:46:18PM -0800 References: <001f01be7b39$cc993ea0$f4160218@c1004579-c.plstn1.sfba.home.com> Message-ID: <19990331135016.A8894@cnri.reston.va.us> Quoth Paul F. Dubois, on 30 March 1999: > Just to chime in about the unmentionable, I have a tool for connecting > Fortran and Python (as do others; mine isn't ready for the light of day yet) > and unfortunately the existing Setup scheme makes me compile my Fortran > separately in a library. I don't have any way to piggyback on what Python > has learned about the platform. This is too bad. I am sure it isn't the > Python community's job to solve the Fortran 90 make problem but there are > days when I dream about it. > > The "right" object-oriented approach should allow some way to extend the > system, e.g. adding suffixes or file names together with info about how to > compile them. The way I see Distutils being extended is by people writing new command classes. Let's say LLNL wants to distribute a Python module with a (shudder) FORTRAN back-end, and Distutils doesn't support FORTRAN. (Sorry, not a lot of call for it.) The general idea, which I have not fully thought through, is that you would write a "BuildFortran" class, and include it with your code -- right there in setup.py if it's not too big. Then, your setup.py would look something like this: class BuildFortran: # ... setup (name = "llnl-fortran-module", version = "1.0", description = "Front end to some crufty FORTRAN code", cmdclass = {'build_fortran': BuildFortran}, py_modules = ['mod1', 'mod2'], fortran_extensions = ['fext1', 'fext2']) The part that I haven't really thought through is how the Distribution class is supposed to know that 'fortran_extensions' is a valid option. Perhaps I'll continue to punt on checking options for validity, although it is nice to catch typos! Also, in the current model you'd have to subclass the 'Build' class (which implements -- surprise!
-- the 'build' command) so that it calls build_py, build_ext, and build_fortran. Perhaps there should be a mechanism to adjust the "wrapper" commands (currently 'build' and 'install') so they can call more than just the commands hard-coded into them. Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Wed Mar 31 18:38:00 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Wed, 31 Mar 1999 13:38:00 -0500 Subject: [Distutils] I browsed through the distutils source! In-Reply-To: <990330150637.ZM81096@noah.scripps.edu>; from Michel Sanner on Tue, Mar 30, 1999 at 03:06:37PM -0800 References: <37010443.DEC6F942@pop.vet.uu.nl> <19990330140102.D8035@cnri.reston.va.us> <990330150637.ZM81096@noah.scripps.edu> Message-ID: <19990331133800.A8820@cnri.reston.va.us> Quoth Michel Sanner, on 30 March 1999: > sorry, didn't have time to look at this yet .. just a > question/suggestion What I really like about Makefiles is the -n mode > where I can see what it would do before it actually does it. Is such > an option available in setup.py and if not could it be added? Yes, the option is there, but support for it is rather spotty. The idea is something like this (from the top distutils directory): ./setup.py -nv build or, equivalently, ./setup.py --dry-run --verbose build But the "dry run" option is ignored at the moment -- I'll explain below. Currently, verbose mode is *not* the default, even if you specify "dry-run" mode. (Thus "./setup.py -n build" will do nothing silently.) This is probably wrong, and trivial to fix; but first I need to figure out just how much information the default verbosity level should give. Currently it reports every filesystem interaction (just copying and compiling so far), which might be a bit much for the default but must be available as an option.
Hence some notion of "verbosity level" is probably needed. The other tricky thing is to make sure that all command classes respect both the "verbose" and "dry run" options. Verbosity is handled by the 'announce()' method in the 'Distribution' and 'Command' classes; "dry run" mode is not currently handled very well. (It's the responsibility of each command class to get the "dry_run" flag from the Distribution object and obey it, which I think is too much to ask. For instance, if you look at the build_py command, you'll note that I seem to have completely forgotten about handling "dry run" mode, which is why the above commands don't work -- they go right ahead and build Distutils anyways, despite the -n option.) I see two ways to do this nicely, both of which involve bundling up a "filesystem operation" as a couple of discrete objects -- function to call, arguments for it, and descriptive string to print. The simpler way is to pass this bundle to a function which prints the message if the current verbosity level is high enough, and calls the supplied function if we're not in dry-run mode. The 'make_file()' function (in distutils.util) is vaguely in this direction, except it doesn't have access to the verbose and dry_run flags (needs the Distribution instance for that). Also, it adds the notion of input and output files and simple timestamp dependency checking -- which is why it's called "make_file"! Handy, but it adds excess functionality to the simple notion of "do this thing now, respecting the verbose and dry-run flags". The other, somewhat less obvious, way to handle the dry-run (and verbose) flag is to save all these "do this thing now" bundles in a list. Then, when we have run *all* commands, walk through the list and carry out the planned operations. I don't think this extra complication is really needed in the Distutils, where most runs should be over in a second or two, and even the biggest builds/installs should only take a few minutes.
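[Editor's sketch] The simpler of the two schemes Greg describes -- bundle each filesystem operation as a function, its arguments, and a message, then honor the verbose and dry-run flags at one choke point -- might be sketched like this. The name `execute` and its signature are assumptions for illustration, not the actual distutils interface.

```python
def execute(func, args, msg, verbose=0, dry_run=0):
    """Announce msg when verbose, then perform func(*args) unless we are
    in dry-run mode.  One shared choke point, so individual command
    classes never have to check the flags themselves."""
    if verbose:
        print(msg)
    if not dry_run:
        func(*args)

# A command would wrap each of its filesystem actions like so:
done = []
execute(done.append, ("mod1.pyc",), "byte-compiling mod1.py",
        verbose=1, dry_run=0)
execute(done.append, ("mod2.pyc",), "byte-compiling mod2.py",
        verbose=1, dry_run=1)   # announced, but not performed
```

With this in place, "-n" support stops being each command's responsibility: any command that routes its work through the choke point gets dry-run and verbosity behavior for free.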
It's a very useful scheme when you're writing glorified shell scripts that will run for many many hours, cranking through the analysis of large datasets (which is what I did a lot of in a previous life -- I know a thing or two about writing glorified shell scripts, and some of that knowledge seems applicable in the Distutils). Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gward@cnri.reston.va.us Wed Mar 31 19:02:50 1999 From: gward@cnri.reston.va.us (Greg Ward) Date: Wed, 31 Mar 1999 14:02:50 -0500 Subject: [Distutils] Compiler abstraction model In-Reply-To: <001301be7b07$d4f499a0$0801a8c0@bobcat>; from Mark Hammond on Wed, Mar 31, 1999 at 09:48:36AM +1000 References: <001301be7b07$d4f499a0$0801a8c0@bobcat> Message-ID: <19990331140249.B8894@cnri.reston.va.us> Quoth Mark Hammond, on 31 March 1999: > Building from what Greg said, I agree that _certain_ command-line > params can be mandated - they are designed not to be > configurable, as messing with them will likely break the build. But > there also needs to be a secondary class that is virtually > unconstrained. I think there will have to be some way to accommodate platform dependencies in a Distutils build. Eg. Win32 extensions are allowed to compile only on Win32, and from what I'm hearing it sounds as though lots of Windows-specific compiler options might need to be snuck in to build a given extension. Or I might want my extension to be built with -O2 as long as gcc is the compiler, no matter how Python was built. Ultimately, this means that you will be free to stick "-n32" into your compiler command line if you're on an SGI and using SGI's compiler, even though this will most likely break things.
Partly this is a documentation/social engineering thing, but it can also be addressed by making the compiler options that can (and sometimes must) be set in a portable way part of the compiler abstraction model -- hence the idea of a "compiler.debug" flag, which will control the "-g" switch on Unix C compilers and the local equivalent elsewhere. > [Sorry - as usual, speaking having kept only a quick eye on the > posts, and not having seen the latest code drop...] That's quite all right -- just scatter enough droplets of knowledge, I'll filter through the ones that aren't relevant to the code. Greg -- Greg Ward - software developer gward@cnri.reston.va.us Corporation for National Research Initiatives 1895 Preston White Drive voice: +1-703-620-8990 x287 Reston, Virginia, USA 20191-5434 fax: +1-703-620-0913 From gstein@lyra.org Wed Mar 31 22:14:10 1999 From: gstein@lyra.org (Greg Stein) Date: Wed, 31 Mar 1999 14:14:10 -0800 Subject: [Distutils] Compiler abstraction model References: <001f01be7b39$cc993ea0$f4160218@c1004579-c.plstn1.sfba.home.com> <19990331135016.A8894@cnri.reston.va.us> Message-ID: <37029E32.6D137D76@lyra.org> Greg Ward wrote: >... > The way I see Distutils being extended is by people writing new command > classes. Let's say LLNL wants to distribute a Python module with a > (shudder) FORTRAN back-end, and Distutils doesn't support FORTRAN. > (Sorry, not a lot of call for it.) The general idea, which I have not > fully thought through, is that you would write a "BuildFortran" class, > and include it with your code -- right there in setup.py if it's not too big. You can also save yourself a lot of time by building something that seems reasonable and waiting for the feedback from people who need it and/or feel it is important. No sense in getting yourself all bound up and unmoving, just because you're trying to solve a 5% case. Build something for the 95% case and delegate the 5% to the people who care about it.
Cheers, -Greg "Minimalist" Stein -- Greg Stein, http://www.lyra.org/