From gstein@lyra.org Mon Feb 1 02:48:21 1999
From: gstein@lyra.org (Greg Stein)
Date: Sun, 31 Jan 1999 18:48:21 -0800
Subject: [Distutils] Re: new import mechanism and a "small" distribution
References:
Message-ID: <36B515F5.6BC78BA0@lyra.org>

Just van Rossum wrote:
>
> At 10:47 AM -0800 1/31/99, Greg Stein wrote:
> >By "loader", I will presume that you mean an instance of an Importer
> >subclass that is defining get_code(). Here is the method as defined by
> >imputil.Importer:
> >
> >    def get_code(self, parent, modname, fqname):
>
> I see in your code that you build a chain of import hooks that basically
> work like a list of loaders. Why don't you use sys.path for this? (Current
> implementation details, probably?)

Yah. I'm not sure what exactly would happen if I tried that. It was much
easier to throw the concepts out there by skipping that aspect for now :-)

> I'd suggest a loader object should be a callable, i.e. get_code() should be
> called __call__()... Changing your pseudo code from the distutils-sig to:
>
>     for pathentry in sys.path:
>         if type(pathentry) == StringType:
>             module = old_import(pathentry, modname)
>         else:
>             module = pathentry(modname)  # <--
>         if module:
>             return module
>     else:
>         raise ImportError, modname + " not found."
>
> It's the only public method. Makes more sense to me. It also seems that it
> would make it easier to implement loaders (and the above loop) in C.

get_code() is *NOT* a public method. It is for subclasses to override. The
"public method" is the _import_hook method, but it isn't really public, as
it gets installed into the import hook. However, it wouldn't be hard to add
"__call__ = _import_hook" to the Importer class. That would provide the
appropriate callable interface.

> >That method is the sole interface used by Importer subclasses. To define
> >a custom import mechanism, you would just derive from imputil.Importer
> >and override that one method.
> >
> >I'm not sure if that answers your question, however. Please let me know
> >if something is unclear so that I can correct the docstring.
>
> The only thing I find unclear is when the parent argument should be used or
> not. Is it only for importing submodules?

The superclass calls it with the right arguments. Users of the Importer
class do *not* call get_code (maybe I should rename it to _get_code, but I
wanted it shown as "public" since it needs to be overridden). The only
public method on Importer is the install() method, which should be called
after the instance has been created and configured.

So the real question isn't "what do I pass for the parent argument?", but
"how do I use the parent argument in my response?"

"parent" will be None, or a module object. If it is None, then "modname"
should be looked for in a non-package context. If it is a module (implying
it represents a package), then "modname" should be looked for within that
module (package). The method should not attempt to look in both a package
and a non-package context. The Importer base class may call get_code()
multiple times, looking for the right context.

>...
> >Internally, packages are labelled with a module-level name: __ispkg__.
> >That is set to 0 or 1 accordingly.
>
> Right now a package is labeled with a __path__ variable. If it's there,
> it's a package. Is it necessary to define something different?

__path__ and __file__ do not make sense for many custom import mechanisms,
so yes, there really should be an explicit marker. For example, when my
small distribution system imports a module, it isn't loading it from a
file. It's pulling it out of py15.pyl. __file__ just doesn't make sense.
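To make the get_code() contract above concrete, here is a minimal sketch of
an Importer subclass. The archive mapping, the key scheme, and the idea that
returning None means "not handled here" are assumptions for illustration
only, not part of Greg's actual imputil code:

    import imputil

    class ArchiveImporter(imputil.Importer):
        # Illustrative sketch only: "archive" is a hypothetical mapping of
        # module names to code objects; imputil's real return convention
        # may differ from the simplified "None means not handled" used here.
        def __init__(self, archive):
            self.archive = archive

        def get_code(self, parent, modname, fqname):
            if parent is None:
                # Non-package context: look up the plain module name.
                return self.archive.get(modname)
            # Package context: look only inside the parent package, using
            # the fully qualified name (e.g. "mypkg.sub1.foo").
            return self.archive.get(fqname)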
> >The Importer that actually imports a
> >module places itself into the module with "__importer__ = self". This
> >latter variable is to allow Importers to only work on modules that they
> >imported themselves, and helps with identifying the context of an
> >importer (whether an import is being performed by a module within a
> >package, or not).
>
> Ok, so __importer__ is there instead of __path__. That makes sense. The
> current __path__ variable makes it look like it can have its own
> sys.path-like thingy, but that's not quite what it is (?). It is even
> abused: if it is a string, it is supposed to be the (sub)package's full
> name and means that any submodule is to be located like a normal module,
> but with the full name (eg. mypkg.sub1.foo). This is crucial for freezing
> and handy in other cases (it allows submodules to be builtin!). While this
> is cool and handy, it smells really funny. As a case study, could you show
> how this stuff should work with the new import mechanism?

The freeze issue is simply another aspect of improper reliance on __file__
and/or __path__. The "test" package fails in my small distribution because
it uses __file__. In addition, the following standard library modules use
__file__, which means they won't work when they've been extracted from
archives:

    ihooks.py            (this sets/uses them, which is probably okay)
    knee.py              (this sets/uses them, which is probably okay)
    copy.py              (actually, this is in a test function)
    test/regrtest.py     (needed to locate the test directory)
    test/test_imageop.py (some kind of file search algorithm)
    test/test_imgfile.py (locating the script/test dir)
    test/test_support.py (kind of a dup of the function in test_imageop)
    test/test_zlib.py    (locating the script/test dir)

I recall having a bitch of a time with the win32com package because it also
does some magic with the __path__ variable. It meant that I couldn't shove
the files into an archive easily. I think it may have been fixed, though,
because Mark ran into the same issue when he tried to freeze the package,
and so I believe he fixed it.

In any case, the two variables could be supported quite easily by
DirectoryImporter (which is a near-clone of the builtin behavior). IMO,
__file__ should be used VERY sparingly, if at all, and __path__ should just
disappear (people should use a custom Importer if they want funny import
behavior).

As far as a case study? I'm not sure what you mean, beyond my rudimentary
response in the preceding paragraph. Hmm. Or do you mean, how does freezing
work? Frozen modules would simply use a FreezeImporter instance
(theoretical class :-). Functionally, it would be very similar to the
SimpleArchive class in the site.py in my small distro (but the TOC would be
replaced by C structures, and the file ops would be replaced by memory
lookups and mem-based unmarshalling).

In general, I submit that my Importer mechanism also fulfills Mark's and
Jack's original impetus in this matter: it can clean up Python's import
mechanism, and can provide a way for each platform to introduce
platform-specific mechanisms for importing.

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
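As a hypothetical illustration of the alternative Greg is arguing for, a
module that needs a data file could ask its importer rather than relying on
__file__. The get_resource() method below is invented for this sketch;
nothing like it is specified in the discussion above:

    import os

    # Hypothetical: ask the importer that loaded us for a data file,
    # falling back to __file__ only for classic directory-based imports.
    # get_resource() is an invented method, not part of imputil.
    importer = globals().get("__importer__")
    if importer is not None:
        data = importer.get_resource("table.dat")
    else:
        here = os.path.dirname(__file__)
        data = open(os.path.join(here, "table.dat"), "rb").read()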
From just@letterror.com Mon Feb 1 13:09:24 1999
From: just@letterror.com (Just van Rossum)
Date: Mon, 1 Feb 1999 14:09:24 +0100
Subject: [Distutils] Re: new import mechanism and a "small" distribution
In-Reply-To: <36B515F5.6BC78BA0@lyra.org>
References:
Message-ID:

Greg Stein wrote:
>get_code() is *NOT* a public method. It is for subclasses to override.

Oh, ok. But that makes it very specific to your implementation. I'd like to
see a much more general idea. I'm sure that it's there in your code, but it
is not very clear...

>The "public method" is the _import_hook method, but it isn't really
>public, as it gets installed into the import hook. However, it wouldn't
>be hard to add "__call__ = _import_hook" to the Importer class. That
>would provide the appropriate callable interface.

I see, that makes sense. Then the idea could boil down to something like
this:

Each element of sys.path is either of these:
- a string, in which case the "classic" importer will do the work.
- a callable object, which must have the same interface (signature) as the
  current __import__ hook.

(that's basically where you started, right?)

Just

From gstein@lyra.org Mon Feb 1 13:41:02 1999
From: gstein@lyra.org (Greg Stein)
Date: Mon, 1 Feb 1999 05:41:02 -0800 (PST)
Subject: [Distutils] Re: new import mechanism and a "small" distribution
In-Reply-To:
Message-ID:

On Mon, 1 Feb 1999, Just van Rossum wrote:
> Greg Stein wrote:
> >get_code() is *NOT* a public method. It is for subclasses to override.
>
> Oh, ok. But that makes it very specific to your implementation. I'd like to
> see a much more general idea. I'm sure that it's there in your code, but it
> is not very clear...

My post was only about implementation, as an expression of an implicit
design :-) And yes, it appears that the doc could be clearer.

> >The "public method" is the _import_hook method, but it isn't really
> >public, as it gets installed into the import hook. However, it wouldn't
> >be hard to add "__call__ = _import_hook" to the Importer class. That
> >would provide the appropriate callable interface.
>
> I see, that makes sense. Then the idea could boil down to something like
> this:
>
> Each element of sys.path is either of these:
> - a string, in which case the "classic" importer will do the work.
> - a callable object, which must have the same interface (signature) as the
>   current __import__ hook.
>
> (that's basically where you started, right?)

Exactly.

And within that design, my implementation also favors a single-step import
mechanism rather than the find/load style that has characterized previous
import mechanisms (imp, ihooks, Mark/Jack's email, etc).

The import design is also compatible with existing code. Only when somebody
begins to insert callable objects will old apps potentially break. IMO,
they can choose to not use callable objects and leave their app alone, or
use callables and fix their app to compensate.

Mainly, I'm just hoping that my code is useful to demonstrate viability of
the approach that I'm recommending. "Code talks" :-) I've certainly found
the Importer class to be a clearer way to write custom import hooks than
anything else that I've seen (try writing a subclass of the ihooks classes,
and you'll see what I mean).

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
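A skeleton of the "__call__ = _import_hook" idea mentioned above, showing
how one class could serve both as the installed import hook and as a
callable sys.path entry. The body of _import_hook is a placeholder, not
Greg's actual code:

    class Importer:
        # Placeholder hook with the same signature as the builtin
        # __import__; a real subclass would do the actual lookup here.
        def _import_hook(self, modname, globals=None, locals=None,
                         fromlist=None):
            raise ImportError(modname + " not handled by this importer")

        # The callable interface suggested above: calling an instance with
        # a module name (as in "pathentry(modname)") invokes the hook.
        __call__ = _import_hook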
From just@letterror.com Mon Feb 1 14:41:15 1999
From: just@letterror.com (Just van Rossum)
Date: Mon, 1 Feb 1999 15:41:15 +0100
Subject: [Distutils] Re: new import mechanism and a "small" distribution
In-Reply-To:
References:
Message-ID:

At 5:41 AM -0800 2/1/99, Greg Stein wrote:
>> Each element of sys.path is either of these:
>> - a string, in which case the "classic" importer will do the work.
>> - a callable object, which must have the same interface (signature) as the
>>   current __import__ hook.
>>
>> (that's basically where you started, right?)
>
>Exactly.
>
>And within that design, my implementation also favors a single-step import
>mechanism rather than the find/load style that has characterized previous
>import mechanisms (imp, ihooks, Mark/Jack's email, etc).

What I really like about your idea (not your implementation ;-) is that it
doesn't replace the __import__ hook, but expands the semantics of sys.path
so it can contain _additional_ hooks -- which is what most people need in
the first place. It saves you the trouble of emulating the whole import
process (which, as you know, is almost impossible to do 100% right...). So
instead of writing ugly and error-prone replacements for __import__, you
just write a specialized loader and whack it into sys.path. Very cool.

>The import design is also compatible with existing code. Only when
>somebody begins to insert callable objects will old apps potentially
>break. IMO, they can choose to not use callable objects and leave their
>app alone, or use callables and fix their app to compensate.
>
>Mainly, I'm just hoping that my code is useful to demonstrate viability of
>the approach that I'm recommending. "Code talks" :-) I've certainly found
>the Importer class to be a clearer way to write custom import hooks than
>anything else that I've seen (try writing a subclass of the ihooks
>classes, and you'll see what I mean).

Yes: been there, done that ;-)

Just

From gstein@lyra.org Mon Feb 1 14:39:17 1999
From: gstein@lyra.org (Greg Stein)
Date: Mon, 01 Feb 1999 06:39:17 -0800
Subject: [Distutils] Re: new import mechanism and a "small" distribution
References: <1294242746-1611138@hypernet.com>
Message-ID: <36B5BC95.3799292F@lyra.org>

Gordon McMillan wrote:
>
> [Greg puts code where his mouth is]
>
> > A while back, Mark Hammond brought up a "new import architecture"
> > proposal after some discussion with Jack Jansen and Guido. I
> > responded in my usual diplomatic style and said "feh. the two-step
> > import style is bunk."
> [snip...]
>
> This is outrageously cool!
>
> It appears that you're aiming for a rewrite of import.c (and
> friends) and giving new meaning to "sys.path". (I love it - Java says
> "it can be a directory or a zip file" and Greg says "why stop there?
> It can be anything at all" - hee hee).

Yup :-)

Last month, I even argued that the import mechanism could simply be shifted
entirely to Python, too, since the overhead of interpreted Python code is
minimal next to the parsing, execution, and/or I/O involved with importing.
hehe...

> What is enabling this magic in this prototype? We pick up your
> site.py automatically, but is your python.exe doing something in
> between Py_Initialize and Py_Main? Or is this all based on making a
> chain out of __builtin__.__import__?

If python.exe doesn't find its registry settings, then it assumes a default
sys.path (which includes the current directory). Using that, it loads
exceptions.py and then site.py. Once site.py loads, I install the custom
import hook so that all future imports will be yanked from py15.pyl. The
builtin modules aren't in there, of course, so those fall down the chain to
the builtin importer.

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
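A toy sketch of that chaining, assuming a hypothetical archive object with
a load() method; the real hook in Greg's site.py is more involved:

    import __builtin__

    class ChainedHook:
        # Try the archive first; anything it doesn't know about (e.g. the
        # builtin modules) falls through to whatever hook was installed
        # before us -- usually the builtin importer.
        def __init__(self, archive):
            self.archive = archive                  # hypothetical object
            self.next_hook = __builtin__.__import__

        def hook(self, name, globals=None, locals=None, fromlist=None):
            module = self.archive.load(name)  # hypothetical: None if absent
            if module is not None:
                return module
            return self.next_hook(name, globals, locals, fromlist)

    # Installing it, site.py style:
    # __builtin__.__import__ = ChainedHook(archive).hook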
From oli@andrich.net Tue Feb 2 21:53:46 1999
From: oli@andrich.net (Oliver Andrich)
Date: Tue, 2 Feb 1999 22:53:46 +0100
Subject: [Distutils] The $0.02 of a packager ;-)
Message-ID: <19990202225346.A25047@rwpc.rhein-zeitung.de>

Hi,

after being invited by Greg Ward to this list, I tried to read all the
available documents concerning the topic of the list, and I also tried to
get up to date with the mailing list by reading the archives. Now I'd like
to give my $0.02 as someone who tries to keep a quite big distribution up
to date, one which also includes a lot of third-party modules.

This mail is rather long, because I'd like to comment on everything in one
mail, since quite a lot of it relates to each other. And please keep in
mind that I am speaking as an end user in one way or another: I will have
to use this as a developer, and I will have to use it as a packager of
third-party modules. So I won't comment on any implementation issue unless
it is absolutely critical. Please also keep in mind that I may be talking
about things you have already discussed; I am quite new to this list and
have only read the archives of this month and the last. ;-)

- Tasks and division of labour

I think that the packager and installer roles are a little bit mixed up. In
my eyes the packager's job is to build and test the package on a specific
platform and also to build the binary distribution. This is also what you
wrote on the webpage. But the installer's role should only be to install
the prebuilt package, because his normal job is to provide up-to-date
software to the users of the systems he manages. And he has enough to do
with that.

- The proposed Interface

Basically I can say that I like the idea that I can write in my RPM spec
file

    setup.py build
    setup.py test
    setup.py install

and afterwards I have installed the package somewhere on my machine, and I
am absolutely sure that it works as intended. I think that this is the way
it works for most Perl modules.

But I have problems with the bdist option of setup.py, because I think that
this is hard to implement. If I got this right, I as an RPM and Debian
package maintainer should be able to say

    setup.py bdist rpm
    setup.py bdist debian

and afterwards I have a Debian and an RPM package of the Python package.
Nice in theory, but this would require that setup.py or the distutils
packages know how to create these packages; that means we have to implement
a meta packaging system on top of existing packaging systems, which are
powerful themselves. So what would it look like when I call these commands
above?

Would the distutils stuff create a spec file (the input file to create an
RPM) and then call rpm -ba ? And inside the rpm build process setup.py is
called again to compile and install the package's content? Finally rpm
creates the two normal output files: the actual binary package, and the
source RPM from which you can recompile the binary package on your machine.
This is the same for Debian Linux, Slackware Linux, RPM-based Linux
versions, Solaris packages and BeOS software packages. The last is only a
vague guess, because I have only looked into the Be system very briefly.

- What I would suggest setup.py should do

The options that I have no problem with are

    build_py  - copy/compile .py files (pure Python modules)
    build_ext - compile .c files, link to .so in blib
    build_doc - process documentation (targets: html, info, man, ...?)
    build     - build_py, build_ext, build_doc
    dist      - create source distribution
    test      - run test suite
    install   - install on local machine

What should make_blib do?

But what I require is that I can tell build_ext which compiler switches to
use, because maybe I need different switches on my system than the ones the
original developer uses. I would also like to give the install option an
argument that tells it where the files should be installed: I can tell rpm,
for example, that it should compile the extension package as if it would be
installed in /usr/lib/python1.5, but tell it in the install stage to
install it in /tmp/py-root/usr/lib/python1.5. That way I can build and
install the package without overwriting an existing installation of an
older version, and I also have a clean way to determine which files
actually got installed.

install should also be split up into install and install_doc, and
install_doc should also be able to take an argument telling it where to
install the files.

I would remove the bdist option, because it would introduce a lot of work:
you not only have to tackle various systems but also various packaging
systems. I would add an option "files" instead, which returns a list of the
files this package consists of. And consequently an option "doc_files" is
also required, because I'd like to stick to the way rpm manages doc files:
I simply tell it which files are doc files, and it installs them the right
way.

Another thing that would be fine is if I could extract the package
information with setup.py. Something like "setup description" would return
the full description, and so on.

And I would also add a "system" option to the command line options, because
I'd like to give the setup.py script an option from which it can determine
which system it is running on. Why this is required will follow.

- ARCH dependent sections should be added

What is not clear in my eyes (maybe I have missed something) is how you
deal with different architectures. What I would suggest here is that we use
a dictionary instead of plain definitions of cc, ccshared, cflags and
ldflags. Such a dictionary may look like this:

    compilation_flags = {
        "Linux":    {"cc": "gcc", "cflags": "-O3", ...},
        "Linux2.2": {"cc": "egcs", ...},
        "Solaris":  {"cc": "cc", ...},
    }

And now I would call setup.py like this:

    setup.py -system Linux build

or whatever convention you want to use for command line arguments.

- Subpackages are also required

Well, this is something that I like very much and have really become
accustomed to. Say you build PIL and also a Tkinter version that supports
PIL; then you'd like to create both packages and also state that
PIL-Tkinter requires PIL.

Conclusion (or whatever you want to call it)

I as a packager don't require the distutils stuff to be some kind of meta
packaging system that generates, from some kind of meta information, the
actual package creation file from which it is called again. And I don't
believe that we have to develop a completely new packaging system, because
such systems already exist for quite a lot of platforms. I also think that
if we introduced such a system, the acceptance wouldn't be very high.
People want to maintain their software base with their native tools: a
RedHat Linux user would like to use rpm, a Solaris user would like to use
pkg, and a Windows user would like to use InstallShield (or whatever the
standard is). The target of distutils should be to develop a package which
can be configured to compile and install the extension package. The
developed software should be usable by the packager to extract all the
information required to create his native package, and the installer should
use the prebuilt packages at best, or should be able to install the package
by calling setup install.

I hope that I have described as well as possible what I require as a
packager, and I think that the rest is not the business of distutils but of
the native packaging system. Any comments are welcome and I am willing to
discuss this, as I am absolutely aware that we need a standard way of
installing Python extensions.

Best regards,
Oliver

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/
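A small sketch of how a table like Oliver's compilation_flags might be
consulted; the -system option handling, the platform-to-key mapping and the
default are assumptions invented for illustration:

    import sys

    # Hypothetical lookup against the compilation_flags table sketched in
    # the mail above (abridged here); none of this is a real distutils API.
    compilation_flags = {
        "Linux":   {"cc": "gcc", "cflags": "-O3"},
        "Solaris": {"cc": "cc",  "cflags": "-O"},
    }

    def get_flags(system=None):
        if system is None:
            # Crude default: derive a table key from the interpreter's
            # platform name when no "-system" argument was given.
            platform_keys = {"linux2": "Linux", "sunos5": "Solaris"}
            system = platform_keys.get(sys.platform, "Linux")
        return compilation_flags[system]

    flags = get_flags("Linux")
    cc, cflags = flags["cc"], flags["cflags"]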
From gward@cnri.reston.va.us Thu Feb 4 15:37:29 1999
From: gward@cnri.reston.va.us (Greg Ward)
Date: Thu, 4 Feb 1999 10:37:29 -0500
Subject: [Distutils] The $0.02 of a packager ;-)
In-Reply-To: <19990202225346.A25047@rwpc.rhein-zeitung.de>; from Oliver Andrich on Tue, Feb 02, 1999 at 10:53:46PM +0100
References: <19990202225346.A25047@rwpc.rhein-zeitung.de>
Message-ID: <19990204103729.A10699@cnri.reston.va.us>

Hi Oliver --

glad you could join us -- you raise a lot of good points, and I'll see if I
can address (most of) them. Your post will certainly serve as a good
"to do" list!

Quoth Oliver Andrich, on 02 February 1999:
> - Tasks and division of labour
>
> I think that the packager and installer roles are a little bit mixed up. In
> my eyes the packager's job is to build and test the package on a specific
> platform and also to build the binary distribution. This is also what you
> wrote on the webpage.
>
> But the installer's role should only be to install the prebuilt package,
> because his normal job is to provide up-to-date software to the users of
> the systems he manages. And he has enough to do with that.

Yes, the packager and installer are a little bit mixed up, because in the
most general case -- installing from a source distribution -- there is no
packager, and the installer has to do what that non-existent packager might
have done. That is, starting with the same source distribution, both a
packager (creating a built distribution) and an installer (of the hardcore
Unix sysadmin type, who is not afraid of source) would incant:

    # packager:                  # installer:
    tar -xzf foo-1.23.tar.gz     tar -xzf foo-1.23.tar.gz
    cd foo-1.23                  cd foo-1.23
    ./setup.py build             ./setup.py build
    ./setup.py test              ./setup.py test
    ./setup.py bdist --rpm       ./setup.py install

Yes, there is a lot of overlap there. So why is the packager wasting his
time building this RPM (or some other kind of built distribution)? Because
*this* installer is an oddball running a six-year-old version of Freaknix
on his ancient Frobnabulator-2000, and Distutils doesn't support Freaknix'
weird packaging system. (Sorry.) So, like every good Unix weenie, he starts
with the source code and installs that.

More mainstream users, eg. somebody running a stock Red Hat 5.2 on their
home PC (maybe even with -- gasp! -- Red Hat's Python RPM, instead of a
replacement cooked up by some disgruntled Python hacker >grin<), will just
download the foo-1.23.i386.rpm that results from the packager running
"./setup.py bdist --rpm", and incant

    rpm --install foo-1.23.i386.rpm

Easier and faster than building from source, but a) it only works on Intel
machines running an RPM-based Linux distribution, and b) it requires that
some kind soul out there has built an RPM for this particular Python module
distribution.
That's why building from source must be supported, and is considered the
"general case" (even if not many people will have to do it). Also, building
from source builds character (as you will quickly find out if you ask
"Where can I get pre-built binaries for Perl?" on comp.lang.perl.misc ;-).
It's good for your soul, increases karma, reduces risk of cancer (but not
ulcers!), etc.

> - The proposed Interface
>
> Basically I can say that I like the idea that I can write in my RPM
> spec file
>
>     setup.py build
>     setup.py test
>     setup.py install
>
> and afterwards I have installed the package somewhere on my machine, and I
> am absolutely sure that it works as intended. I think that this is the way
> it works for most Perl modules.

That's the plan: a simple standard procedure so that anyone with a clue can
do this, and so that it can be automated for those without a clue. And yes,
the basic mode of operation was stolen shamelessly from the Perl world,
with the need for Makefiles removed because a) they're not really needed,
and b) they hurt portability.

> But I have problems with the bdist option of setup.py, because I think that
> this is hard to implement. If I got this right, I as an RPM and Debian
> package maintainer should be able to say
>
>     setup.py bdist rpm
>     setup.py bdist debian
>
> And afterwards I have a Debian and an RPM package of the Python package.

That's the basic idea, except it would probably be "bdist --rpm" -- 'bdist'
being the command, '--rpm' being an option to it. If it turns out that all
the "smart packagers" are sufficiently different and difficult to wrap, it
might make sense to make separate commands for them, eg. "bdist_rpm",
"bdist_debian", "bdist_wise", etc. Or something like that.

> Nice in theory, but this would require that setup.py or the distutils
> packages know how to create these packages; that means we have to implement
> a meta packaging system on top of existing packaging systems, which are
> powerful themselves. So what would it look like when I call these commands
> above?
>
> Would the distutils stuff create a spec file (the input file to create an
> RPM) and then call rpm -ba ? And inside the rpm build
> process setup.py is called again to compile and install the package's
> content? Finally rpm creates the two normal output files: the actual
> binary package, and the source RPM from which you can recompile the
> binary package on your machine.

I haven't yet thought through how this should go, but your plan sounds
pretty good. It is awkward having setup.py call rpm, which then calls
setup.py to build and install the modules, but consider that setup.py is
really just a portal to various Distutils classes. In reality, we're using
the Distutils "bdist" command to call rpm, which then calls the Distutils
"build", "test", and "install" commands. It's not so clunky if you think
about it that way.

Also, I don't see why this constitutes "building a meta packaging system"
-- about the only RPM-ish terrain that Distutils would intrude upon is
knowing which files to install. And it's got to know that anyway, else how
could it install them? Heck, all we're doing here is writing a glorified
Makefile in Python, because Python has better control constructs and is
more portable than make's language. Even the lowliest Makefile with an
"install" target has to know what files to install.

> This is the same for Debian Linux, Slackware Linux, RPM-based Linux
> versions, Solaris packages and BeOS software packages. The last is only a
> vague guess, because I have only looked into the Be system very briefly.
The open question here is: how much duplication is there across the various
packaging systems? Definitely we should concentrate on the
build/test/dist/install stuff first; giving the world a standard way to
build module distributions from source would be a major first step, and we
can worry about built distributions afterwards.

> - What I would suggest setup.py should do
>
> The options that I have no problem with are
>
>     build_py  - copy/compile .py files (pure Python modules)
>     build_ext - compile .c files, link to .so in blib
>     build_doc - process documentation (targets: html, info, man, ...?)
>     build     - build_py, build_ext, build_doc
>     dist      - create source distribution
>     test      - run test suite
>     install   - install on local machine
>
> What should make_blib do?

"make_blib" just creates a bunch of empty directories that mimic something
under the Python lib directory, eg.

    ./blib
    ./blib/site-packages
    ./blib/site-packages/plat-sunos5
    ./blib/doc
    ./blib/doc/html

etc. (The plat directory under site-packages is, I think, something not in
Python 1.5 -- but as Michel Sanner pointed out, it appears to be needed.)
The reason for this: it provides a mockup installation tree in which to run
test suites, it makes installation near-trivial, and it makes determining
which files get installed where near-trivial. The reason for making it a
separate command: build_py, build_ext, build_doc, and build all depend on
it having already been done, so it's easier if they can just "call" this
command themselves (which will of course silently do nothing if it doesn't
need to do anything). A toy sketch follows after this reply.

> But what I require is that I can tell build_ext which compiler switches to
> use, because maybe I need different switches on my system than the ones the
> original developer uses.

Actually, the preferred compiler/flags will come not from the module
developer but from the Python installation which is being used. That's
crucial; otherwise the shared library files might be incompatible with the
Python binary. If you as packager or installer wish to tweak some of these
("I know this extension module is time-intensive, so I'll compile with -O2
instead of -O"), that's fine. Of course, that opens up some unpleasant
possibilities: "My sysadmin compiled Python with cc, but I prefer gcc, so
I'll use it for this extension." Danger Will Robinson! Danger! Not much we
can do about that except warn in the documentation, I suppose.

> I would also like to give the install option an argument that tells it
> where the files should be installed: I can tell rpm, for example, that it
> should compile the extension package as if it would be installed in
> /usr/lib/python1.5, but tell it in the install stage to install it in
> /tmp/py-root/usr/lib/python1.5. That way I can build and install the
> package without overwriting an existing installation of an older version,
> and I also have a clean way to determine which files actually got
> installed.

Yes, good idea. That should be an option to the "install" command; again,
the default would come from the current Python installation, but could be
overridden by the packager or installer.

> install should also be split up into install and install_doc, and
> install_doc should also be able to take an argument telling it where to
> install the files.

Another good idea. Actually, I think the split should be into "install
python library stuff" and "install doc"; "install" would do both. I *don't*
think that "install" should be split, like "build", into "install_py" and
"install_ext". But I could be wrong... opinions?
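The make_blib behaviour described above is simple enough to sketch; the
function name and the hard-coded directory list are assumptions for
illustration only:

    import os

    # Toy version of the "make_blib" command: create the mockup
    # installation tree if it isn't already there. The directory list is
    # hard-coded here purely for illustration; parents come first so each
    # mkdir succeeds.
    def make_blib(base="blib"):
        subdirs = ["", "site-packages", "site-packages/plat-sunos5",
                   "doc", "doc/html"]
        for sub in subdirs:
            path = os.path.join(base, sub)
            if not os.path.isdir(path):
                os.mkdir(path)   # silently a no-op when already built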
> I would remove the bdist option, because it would introduce a lot of
> work: you not only have to tackle various systems but also
> various packaging systems. I would add an option "files" instead, which
> returns a list of the files this package consists of. And consequently
> an option "doc_files" is also required, because I'd like to stick to the
> way rpm manages doc files: I simply tell it which files are doc files,
> and it installs them the right way.

Punting on bdist: OK. Removing it? No way. It should definitely be handled,
although it's not as high a priority as being able to build from source.
(Because obviously, if you can't build from source, you can't make a built
distribution!)

The option(s) to get out list(s) of installed files are good. Where do they
belong, though? I would think something like "install --listonly" would do
the trick.

> Another thing that would be fine is if I could extract the package
> information with setup.py. Something like "setup description" would return
> the full description, and so on.

I like that -- probably best to just add one command, say "meta". Then you
could say "./setup.py meta --description" or "./setup.py meta --name
--version". Or whatever.

> And I would also add a "system" option to the command line options, because
> I'd like to give the setup.py script an option from which it can determine
> which system it is running on. Why this is required will follow.

Already in the plan. Go back and check the archives for mid-January -- I
posted a bunch of stuff about design proposals, with how-to-handle-
command-line-options being one of my fuzzier areas. (Eg. see
http://www.python.org/pipermail/distutils-sig/1999-January/000124.html
and followups.)

> - ARCH dependent sections should be added
>
> What is not clear in my eyes (maybe I have missed something) is how you
> deal with different architectures. What I would suggest here is that we use
> a dictionary instead of plain definitions of cc, ccshared, cflags and
> ldflags. Such a dictionary may look like this

Generally, that's left up to Python itself. Distutils shouldn't have a
catalogue of compilers and compiler flags, because those are chosen when
Python is configured and built. That's the autoconf philosophy -- no
feature catalogues; just make sure that what you try makes sense on the
current platform, and let the builder (of Python in this case, not
necessarily of a module distribution) override if he needs to. Module
packagers and installers can tweak compiler stuff a little bit, but it's
dangerous -- the more you tweak, the more likely you are to generate shared
libraries that won't load with your Python binary.

> - Subpackages are also required
>
> Well, this is something that I like very much and have really become
> accustomed to. Say you build PIL and also a Tkinter version that supports
> PIL; then you'd like to create both packages and also state that
> PIL-Tkinter requires PIL.

Something like that has been tried in the Perl world, except they were
talking about "super-packages" and they called them "bundles". I think it
was a hack because module dependencies were not completely handled for a
long time (which, I gather, has now been fixed). I never liked the idea,
and I hope it will now go away.
The plan for Distutils is to handle module dependencies from the start,
because that lack caused many different Perl module developers to have to
write Makefile.PLs that all check for their dependencies. That should be
handled by MakeMaker (in the Perl world) and by Distutils (in the Python
world).

Thanks for your comments!

        Greg
--
Greg Ward - software developer                    gward@cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive                          voice: +1-703-620-8990 x287
Reston, Virginia, USA  20191-5434                 fax: +1-703-620-0913

From olli@rhein-zeitung.de Thu Feb 4 21:12:39 1999
From: olli@rhein-zeitung.de (Oliver Andrich)
Date: Thu, 4 Feb 1999 22:12:39 +0100
Subject: [Distutils] The $0.02 of a packager ;-)
In-Reply-To: <19990204103729.A10699@cnri.reston.va.us>; from Greg Ward on Thu, Feb 04, 1999 at 10:37:29AM -0500
References: <19990202225346.A25047@rwpc.rhein-zeitung.de> <19990204103729.A10699@cnri.reston.va.us>
Message-ID: <19990204221239.A7823@rwpc.rhein-zeitung.de>

On Thu, Feb 04, 1999 at 10:37:29AM -0500, Greg Ward wrote:
> Yes, there is a lot of overlap there. So why is the packager wasting
> his time building this RPM (or some other kind of built distribution)?
> Because *this* installer is an oddball running a six-year-old version of
> Freaknix on his ancient Frobnabulator-2000, and Distutils doesn't
> support Freaknix' weird packaging system. (Sorry.) So, like every good
> Unix weenie, he starts with the source code and installs that.
[...]

Ok, I see the reason for this definition and I can absolutely agree with
it. I was such an oddball myself, until I had to manage the software state
for quite a lot of machines. ;-)))

> Also, building from source builds character (as you will quickly find
> out if you ask "Where can I get pre-built binaries for Perl?" on
> comp.lang.perl.misc ;-). It's good for your soul, increases karma,
> reduces risk of cancer (but not ulcers!), etc.

;-)))

> That's the basic idea, except it would probably be "bdist --rpm" --
> 'bdist' being the command, '--rpm' being an option to it. If it turns
> out that all the "smart packagers" are sufficiently different and
> difficult to wrap, it might make sense to make separate commands for
> them, eg. "bdist_rpm", "bdist_debian", "bdist_wise", etc. Or something
> like that.

I don't think that separate commands are the right way, if you want to do
this at all. I think that setup.py should behave quite pythonishly in this
situation; that means it should be called the way you first defined it,
because that describes and models the task better.

[...]
> I haven't yet thought through how this should go, but your plan sounds
> pretty good. It is awkward having setup.py call rpm, which then calls
> setup.py to build and install the modules, but consider that setup.py is
> really just a portal to various Distutils classes. In reality, we're
> using the Distutils "bdist" command to call rpm, which then calls the
> Distutils "build", "test", and "install" commands. It's not so clunky
> if you think about it that way.
>
> Also, I don't see why this constitutes "building a meta packaging
> system" -- about the only RPM-ish terrain that Distutils would intrude
> upon is knowing which files to install. And it's got to know that
> anyway, else how could it install them? Heck, all we're doing here is
> writing a glorified Makefile in Python, because Python has better control
> constructs and is more portable than make's language. Even the lowliest
> Makefile with an "install" target has to know what files to install.
Hm... I am not quite sure which position I should take here -- the
developer's or the packager's. But let's discuss this. ;-))

Let's assume we implement bdist --rpm and so on. What is the job of the
developer, and of the packager? The developer has to provide the setup.py,
and with it all the meta information he thinks is useful for building and
installing his extension, but also for building the package as a binary
distribution.

Let us now look at the process of packaging an extension module, and let's
leave aside that the extension should be compiled on half a dozen
platforms. What actually happens when the packager builds the
distribution? Let's take PIL as an example. The packager (me) calls

    setup.py bdist --rpm

and then sees what goes wrong and tweaks such things as wrong include
paths, wrong library names, evil compilation command switches and so on.
Afterwards, building the distribution might look like this:

    setup.py build --include-path="/usr/include/mypath /usr/local/include/tk/" \
                   --libraries="tcl tclX mytk" \
                   --cflags="$RPM_OPT_FLAGS -mpentiumpro"
    setup.py install --install-dir="/tmp/pyroot/usr"
    setup.py bdist --install-dir="/tmp/pyroot/usr" --rpm

And this contradicts the actual rpm building process, because rpm wants to
be able to build the package itself. Otherwise I have to edit the setup.py
file to make my changes, because normally the build process for an RPM
looks like this:

    Step 1) create an rpm spec file
    Step 2) call rpm -ba
        Step 2.1) unpack sources
        Step 2.2) compile sources
        Step 2.3) install binaries
        Step 2.4) package the files
    Step 3) install the package

If I have to edit setup.py, then we have the same problems as we have with
Makefiles.

Another problem that arises if setup.py or distutils can create an RPM
itself, without me editing the spec file, is: what about dependencies? How
can the developer know anything about the packages that are required on my
system, or on my version of my system? How can he know, for example, that
my packages require TkStep instead of Tk, or that my PIL package requires
libjpeg-6b and not just the package jpeg-6b?

I don't think that building the actual package should be the job of
distutils, because it introduces a lot of work that I as a developer don't
want to take care of: I don't care how the package system of Linux
distribution X works, or how Sun changes its package system in the next
version. What I as a developer want is to provide a way so that my
extension compiles and installs the right way on all my target platforms,
and so that I am able to add new platforms based on user information.

I as a packager don't want to learn how to edit a new type of Makefile,
which in some way must also introduce a new meta level wrapping the actual
packaging system that I am very accustomed to. It is much easier for me to
start out with some kind of dummy RPM spec file that has all the basic
setup.py calls already included, and to tweak the RPM options in the RPM
file, not in some kind of new file.

Hopefully I don't annoy you or tell you something you have already looked
at and decided is a minor problem. But let's look at late 1999: the
distutils stuff has been released and all the world is using it. Guess
what would happen, in my eyes: most people would use the features an
installer has and should use, but stick to their traditional way of
building the binary distribution.
I mean, any package system without an actual config-file-driven method can
be driven from within distutils, but all the others will encounter problems
and will be forced to deal with stuff that they don't like to deal with.

> > This is the same for Debian Linux, Slackware Linux, RPM-based Linux
> > versions, Solaris packages and BeOS software packages. The last is only a
> > vague guess, because I have only looked into the Be system very briefly.
>
> The open question here is: how much duplication is there across the various
> packaging systems? Definitely we should concentrate on the
> build/test/dist/install stuff first; giving the world a standard way to
> build module distributions from source would be a major first step, and
> we can worry about built distributions afterwards.

Yes, that is definitely the case. If I were able to easily build an
extension without reading a README each and every time, and without having
to deal with configuration parameters that differ each time, that would
already help me a lot.

> "make_blib" just creates a bunch of empty directories that mimic
> something under the Python lib directory, eg. [...]

I see; this is definitely a useful command for setup.py.

> > install should also be split up into install and install_doc, and
> > install_doc should also be able to take an argument telling it where to
> > install the files.
>
> Another good idea. Actually, I think the split should be into "install
> python library stuff" and "install doc"; "install" would do both.
> I *don't* think that "install" should be split, like "build", into
> "install_py" and "install_ext". But I could be wrong... opinions?

This is fine for me.

> > I would remove the bdist option, because it would introduce a lot of
> > work: you not only have to tackle various systems but also
> > various packaging systems. I would add an option "files" instead, which
> > returns a list of the files this package consists of. And consequently
> > an option "doc_files" is also required, because I'd like to stick to the
> > way rpm manages doc files: I simply tell it which files are doc files,
> > and it installs them the right way.
>
> Punting on bdist: OK. Removing it? No way. It should definitely be
> handled, although it's not as high a priority as being able to build from
> source. (Because obviously, if you can't build from source, you can't
> make a built distribution!)

Ok, but I'll keep my opinion on the all-in-one solution. ;-)) But I have
to think about it.

> The option(s) to get out list(s) of installed files are good. Where do
> they belong, though? I would think something like "install --listonly"
> would do the trick.

Yep, this is fine.

> > Another thing that would be fine is if I could extract the package
> > information with setup.py. Something like "setup description" would
> > return the full description, and so on.
>
> I like that -- probably best to just add one command, say "meta". Then
> you could say "./setup.py meta --description" or "./setup.py meta
> --name --version". Or whatever.

What I'd like to see is

    setup meta --name
               --version
               --short-description
               --description

> Already in the plan. Go back and check the archives for mid-January --
> I posted a bunch of stuff about design proposals, with how-to-handle-
> command-line-options being one of my fuzzier areas. (Eg. see
> http://www.python.org/pipermail/distutils-sig/1999-January/000124.html
> and followups.)

Ok, I will look into this.
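A toy sketch of such a "meta" subcommand, with hard-coded example values;
the option names follow the discussion above, everything else is invented:

    import sys

    # Invented example values; a real setup script would take these from
    # the package's meta data rather than a literal dictionary.
    META = {
        "name": "foo",
        "version": "1.23",
        "short-description": "An example package.",
        "description": "A longer description of the example package.",
    }

    def meta(args):
        # Print one field per "--option" argument, in the order given.
        for arg in args:
            if arg[:2] == "--":
                sys.stdout.write(META.get(arg[2:], "") + "\n")

    if __name__ == "__main__" and sys.argv[1:2] == ["meta"]:
        meta(sys.argv[2:])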
> > - ARCH dependent sections should be added
> >
> > What is not clear in my eyes (maybe I have missed something) is how you
> > deal with different architectures. What I would suggest here is that we
> > use a dictionary instead of plain definitions of cc, ccshared, cflags and
> > ldflags. Such a dictionary may look like this
>
> Generally, that's left up to Python itself. Distutils shouldn't have a
> catalogue of compilers and compiler flags, because those are chosen when
> Python is configured and built. That's the autoconf philosophy -- no
> feature catalogues; just make sure that what you try makes sense on the
> current platform, and let the builder (of Python in this case, not
> necessarily of a module distribution) override if he needs to. Module
> packagers and installers can tweak compiler stuff a little bit, but it's
> dangerous -- the more you tweak, the more likely you are to generate
> shared libraries that won't load with your Python binary.

Ok, this is fine.

> The plan for Distutils is to handle module dependencies from the start,
> because that lack caused many different Perl module developers to have
> to write Makefile.PLs that all check for their dependencies. That
> should be handled by MakeMaker (in the Perl world) and by Distutils (in
> the Python world).

This is fine to hear. Is it also planned that I can check for a certain
version of a library? Let's say I need libtk.so.8.1.0 and not
libtk.so.8.0.0 -- or is this kept anywhere else? How would you like to
implement these tests?

Bye, Oliver

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/

From gward@cnri.reston.va.us Thu Feb 4 21:44:21 1999
From: gward@cnri.reston.va.us (Greg Ward)
Date: Thu, 4 Feb 1999 16:44:21 -0500
Subject: [Distutils] The $0.02 of a packager ;-)
In-Reply-To: <19990204221239.A7823@rwpc.rhein-zeitung.de>; from Oliver Andrich on Thu, Feb 04, 1999 at 10:12:39PM +0100
References: <19990202225346.A25047@rwpc.rhein-zeitung.de> <19990204103729.A10699@cnri.reston.va.us> <19990204221239.A7823@rwpc.rhein-zeitung.de>
Message-ID: <19990204164421.B10699@cnri.reston.va.us>

Quoth Oliver Andrich, on 04 February 1999:
> Hm... I am not quite sure which position I should take here -- the
> developer's or the packager's. But let's discuss this. ;-))

There are lots of people developing Python modules, but not too many
packaging them up -- so your opinions as a packager are especially
valuable! (So I can't understand why you-as-developer advocate pushing
work onto the packager. >grin<)

> What actually happens when the packager builds the distribution? Let's
> take PIL as an example. The packager (me) calls
>
>     setup.py bdist --rpm
>
> and then sees what goes wrong and tweaks such things as wrong include
> paths, wrong library names, evil compilation command switches and so on.
> Afterwards, building the distribution might look like this:
>
>     setup.py build --include-path="/usr/include/mypath /usr/local/include/tk/" \
>                    --libraries="tcl tclX mytk" \
>                    --cflags="$RPM_OPT_FLAGS -mpentiumpro"
>     setup.py install --install-dir="/tmp/pyroot/usr"
>     setup.py bdist --install-dir="/tmp/pyroot/usr" --rpm
>
> And this contradicts the actual rpm building process, because rpm
> wants to be able to build the package itself.

[...Greg gets a sinking feeling...]

Urp, you're quite right. This business of building RPMs is a bit hairier
than I thought.
Here's a *possible* solution to this problem: have setup.py (well, the
Distutils modules really) read/write an options file that reflects the
state of all options for all commands. (Credit for this idea goes to
Martijn Faassen -- though I don't think Martijn said the Distutils should
also *write* the options file to reflect user-supplied command-line
options.) The options file would basically be a repository for
command-line options: you could set it up before you run setup.py, and
when you run setup.py with options, they get stuffed into the options
file. Then, the "bdist --rpm" code uses the options file to create the
spec file.

This raises the danger of "yet another Makefile-that's-not-a-Makefile to
edit", though. Yuck. Maybe we could call it "Setup" so existing Python
hackers think they know what it is. ;-)

However, you raise a lot of difficult issues regarding creating RPMs. I
*still* think we should be able to handle this, but it's looking harder
and harder -- and getting pushed farther and farther onto the back burner.
And this is only one of the "smart packagers" out there; we definitely
need some expertise with Solaris 'pkg', the various Mac and Windows
solutions, etc.

I'll leave it at that for now: I'm not going to respond in depth to all of
your concerns, because for the most part I don't have answers. I will
spend some head-scratching time on them, though.

Oh: implementing a "bdist" command to create "dumb" built distributions
(eg. a tar.gz or zip file) should still be pretty easy -- just tar/zip up
the blib directory (minus documentation: hard to say where that should be
installed, since there's no standard for module documentation). So *that*
doesn't have to be punted on.

> What I'd like to see is
>
>     setup meta --name
>                --version
>                --short-description
>                --description

Sounds about right to me -- we haven't really discussed what meta-data is
necessary, but this is a bare minimum. It should be at least a superset of
what RPM knows, though.

> This is fine to hear. Is it also planned that I can check for a certain
> version of a library? Let's say I need libtk.so.8.1.0 and not
> libtk.so.8.0.0 -- or is this kept anywhere else? How would you like to
> implement these tests?

No, that's not in the plan, and it's a known weakness. Checking
dependencies on Python itself and on other Python modules should be
doable, so that's what I think should be done. Open the door beyond that
and you fall into a very deep rat-hole indeed -- a rat-hole that RPM takes
care of quite nicely, and you argued very cogently that we should *not*
duplicate what RPM (and others) already do!

        Greg
--
Greg Ward - software developer                    gward@cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive                          voice: +1-703-620-8990 x287
Reston, Virginia, USA  20191-5434                 fax: +1-703-620-0913
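A minimal sketch of the options-file idea Greg floats above, assuming a
trivial "command.option = value" text format; the filename and the format
are placeholders for this sketch, not a real Distutils design:

    import string

    # Toy reader/writer for the options file. The "Setup.options" name and
    # the key=value format are assumptions invented for illustration.
    def read_options(filename="Setup.options"):
        options = {}
        for line in open(filename).readlines():
            line = string.strip(line)
            if not line or line[:1] == "#":
                continue
            eq = string.find(line, "=")
            if eq > 0:
                key = string.strip(line[:eq])
                options[key] = string.strip(line[eq + 1:])
        return options

    def write_options(options, filename="Setup.options"):
        f = open(filename, "w")
        for key, value in options.items():
            f.write("%s = %s\n" % (key, value))
        f.close()

    # e.g. a "build.include-path = /usr/include/mypath" line would end up
    # as options["build.include-path"] == "/usr/include/mypath"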
From olli@rhein-zeitung.de Thu Feb 4 22:56:12 1999
From: olli@rhein-zeitung.de (Oliver Andrich)
Date: Thu, 4 Feb 1999 23:56:12 +0100
Subject: [Distutils] The $0.02 of a packager ;-)
In-Reply-To: <19990204164421.B10699@cnri.reston.va.us>; from Greg Ward on Thu, Feb 04, 1999 at 04:44:21PM -0500
References: <19990202225346.A25047@rwpc.rhein-zeitung.de> <19990204103729.A10699@cnri.reston.va.us> <19990204221239.A7823@rwpc.rhein-zeitung.de> <19990204164421.B10699@cnri.reston.va.us>
Message-ID: <19990204235612.A8087@rwpc.rhein-zeitung.de>

On Thu, Feb 04, 1999 at 04:44:21PM -0500, Greg Ward wrote:
> valuable! (So I can't understand why you-as-developer advocate pushing
> work onto the packager. >grin<)

Well, if the things I wrote up for the packager were the packager's only
job, then I would have saved a lot of work compared to the current
situation. ;-)) And on the other hand, I know how someone feels when he
has to figure out how to compile an extension on a system he has no
access to. ;-)

> [...Greg gets a sinking feeling...]

Hopefully this is not so bad that you can't continue your good work? But
sometimes you have to face reality as it is. ;-))))

> This raises the danger of "yet another Makefile-that's-not-a-Makefile to
> edit", though. Yuck. Maybe we could call it "Setup" so existing Python
> hackers think they know what it is. ;-)

That is a little bit what I saw coming up. ;-)))

> we definitely need some expertise with Solaris 'pkg', the various Mac
> and Windows solutions, etc.

This would be helpful. For all the packaging systems I know of, this is
the way packages are built.

> Oh: implementing a "bdist" command to create "dumb" built distributions
> (eg. a tar.gz or zip file) should still be pretty easy -- just
> tar/zip up the blib directory (minus documentation: hard to say where
> that should be installed, since there's no standard for module
> documentation). So *that* doesn't have to be punted on.

I would also leave that inside the distutils package, because this is
something that we can easily deal with.

> Sounds about right to me -- we haven't really discussed what meta-data
> is necessary, but this is a bare minimum. It should be at least a
> superset of what RPM knows, though.

I would suggest having at least this information in the meta data:

  - name of the developer
  - name of the package
  - package version
  - a one-sentence, one-line summary
  - a description field
  - a copyright notice, i.e. something like a short identifier for the
    licence used

> > This is fine to hear. Is it also planned that I can check for a certain
> > version of a library? Let's say I need libtk.so.8.1.0 and not
> > libtk.so.8.0.0 -- or is this kept anywhere else? How would you like to
> > implement these tests?
>
> No, that's not in the plan, and it's a known weakness. Checking
> dependencies on Python itself and on other Python modules should be
> doable, so that's what I think should be done. Open the door beyond that
> and you fall into a very deep rat-hole indeed -- a rat-hole that RPM takes
> care of quite nicely, and you argued very cogently that we should *not*
> duplicate what RPM (and others) already do!

Hm... this is something I would have expected distutils to include. I
have seen distutils a little bit like an autoconf for Python, in Python --
not as powerful, but with some of autoconf's basic features, because this
is information the developer can easily provide. But whether this is
possible to implement is another question.

I hope I haven't been too pessimistic.

Bye, Oliver

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/
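For illustration, meta data like Oliver's list might eventually be declared
directly in the setup script. The setup() function, its module path, and
the keyword names below are purely hypothetical at this point in the
discussion:

    # Purely hypothetical sketch: a setup() call carrying the meta data
    # fields listed above. Neither the module layout nor the keyword names
    # were settled at the time of this thread.
    from distutils.core import setup

    setup(name="foo",
          version="1.23",
          author="Jane Developer",
          description="One-line summary of the foo package.",
          long_description="A longer, free-form description.",
          licence="GPL")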
From gward@cnri.reston.va.us Mon Feb 8 23:09:51 1999
From: gward@cnri.reston.va.us (Greg Ward)
Date: Mon, 8 Feb 1999 18:09:51 -0500
Subject: [Distutils] The $0.02 of a packager ;-)
In-Reply-To: <19990204235612.A8087@rwpc.rhein-zeitung.de>; from Oliver Andrich on Thu, Feb 04, 1999 at 11:56:12PM +0100
References: <19990202225346.A25047@rwpc.rhein-zeitung.de> <19990204103729.A10699@cnri.reston.va.us> <19990204221239.A7823@rwpc.rhein-zeitung.de> <19990204164421.B10699@cnri.reston.va.us> <19990204235612.A8087@rwpc.rhein-zeitung.de>
Message-ID: <19990208180951.A15671@cnri.reston.va.us>

Quoth Oliver Andrich, on 04 February 1999:
> > [I dismiss dealing with dependencies on non-Python stuff]
>
> Hm... this is something I would have expected distutils to include. I
> have seen distutils a little bit like an autoconf for Python, in Python --
> not as powerful, but with some of autoconf's basic features, because this
> is information the developer can easily provide. But whether this is
> possible to implement is another question.

There are some possible partial solutions: list the C libraries that this
module depends on, and if a test program compiles and links, they're
there. Ditto for header files. But testing versions? Things more
complicated than normal C libraries? You'd need a way for developers to
supply chunks of test C code, just the way autoconf does it... ugh...

        Greg
--
Greg Ward - software developer                    gward@cnri.reston.va.us
Corporation for National Research Initiatives
1895 Preston White Drive                          voice: +1-703-620-8990 x287
Reston, Virginia, USA  20191-5434                 fax: +1-703-620-0913

From olli@rhein-zeitung.de Tue Feb 9 17:12:35 1999
From: olli@rhein-zeitung.de (Oliver Andrich)
Date: Tue, 9 Feb 1999 18:12:35 +0100
Subject: [Distutils] The $0.02 of a packager ;-)
In-Reply-To: <19990208180951.A15671@cnri.reston.va.us>; from Greg Ward on Mon, Feb 08, 1999 at 06:09:51PM -0500
References: <19990202225346.A25047@rwpc.rhein-zeitung.de> <19990204103729.A10699@cnri.reston.va.us> <19990204221239.A7823@rwpc.rhein-zeitung.de> <19990204164421.B10699@cnri.reston.va.us> <19990204235612.A8087@rwpc.rhein-zeitung.de> <19990208180951.A15671@cnri.reston.va.us>
Message-ID: <19990209181235.A29641@rwpc.rhein-zeitung.de>

On Mon, Feb 08, 1999 at 06:09:51PM -0500, Greg Ward wrote:
> There are some possible partial solutions: list the C libraries that this
> module depends on, and if a test program compiles and links, they're
> there. Ditto for header files. But testing versions? Things more
> complicated than normal C libraries? You'd need a way for developers to
> supply chunks of test C code, just the way autoconf does it... ugh...

Well, I think we can go two ways here. Either we try to cover each and
every detail with the distutils -- but this seems to be way too much --
or we provide what I believe is the core of the distutils, which is
clearly described in the proposed interface (with slight modifications,
of course ;-).

I know what I wrote before, but after having some time to think about it,
I decided that we should add a piece of meta level information that looks
something like

    setup.py meta --requirements

and which prints a free-form text, written by the developer, where he
states what things he needs. As an installer and packager you should be
able to tell whether your system satisfies the requirements.

Bye, Oliver Andrich

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/
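The crudest form of the "does the library exist?" check discussed in this
thread can be sketched as a simple file probe. The search paths and the
naming scheme are assumptions, and a real autoconf-style check would
compile and link test C code instead:

    import os

    # Crude illustration only: look for libNAME.so / libNAME.a in a couple
    # of conventional directories. This says nothing about versions, which
    # is exactly the weakness Greg points out above.
    def have_library(name, dirs=("/usr/lib", "/usr/local/lib")):
        for directory in dirs:
            for suffix in (".so", ".a"):
                if os.path.exists(os.path.join(directory,
                                               "lib" + name + suffix)):
                    return 1
        return 0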
From M.Faassen@vet.uu.nl Wed Feb 10 11:22:08 1999
From: M.Faassen@vet.uu.nl (Martijn Faassen)
Date: Wed, 10 Feb 1999 12:22:08 +0100
Subject: [Distutils] The $0.02 of a packager ;-)
References: <19990202225346.A25047@rwpc.rhein-zeitung.de> <19990204103729.A10699@cnri.reston.va.us> <19990204221239.A7823@rwpc.rhein-zeitung.de>
Message-ID: <36C16BE0.CE6BF463@pop.vet.uu.nl>

Oliver Andrich wrote:
> What I like to see is
>
> setup meta --name
>            --version
>            --short-description
>            --description

setup-meta --license would be nice too.

Regards, Martijn

From M.Faassen@vet.uu.nl Wed Feb 10 11:47:56 1999
From: M.Faassen@vet.uu.nl (Martijn Faassen)
Date: Wed, 10 Feb 1999 12:47:56 +0100
Subject: [Distutils] The $0.02 of a packager ;-)
References: <19990202225346.A25047@rwpc.rhein-zeitung.de> <19990204103729.A10699@cnri.reston.va.us> <19990204221239.A7823@rwpc.rhein-zeitung.de> <19990204164421.B10699@cnri.reston.va.us>
Message-ID: <36C171EC.2B3CDBF9@pop.vet.uu.nl>

Greg Ward wrote:
>
> Quoth Oliver Andrich, on 04 February 1999:
> > This is fine to hear. Is it also planned that I can check for a certain
> > version of a library or so? Let's say I need libtk.so.8.1.0 and not
> > libtk.so.8.0.0 -- or is this kept anywhere else? How would you like to
> > implement these tests?
>
> No, that's not in the plan and it's a known weakness. Checking
> dependencies on Python itself and other Python modules should be doable,
> so that's what I think should be done. Open the door beyond that and
> you fall into a very deep rat-hole indeed -- a rat-hole that RPM takes
> care of quite nicely, and you argued very cogently that we should *not*
> duplicate what RPM (and others) already do!

Actually, there was some discussion of an 'external dependency' system.
The idea is that we'd like Python extensions to check for some common
libraries that they rely on. For these common external libraries (or
programs) we could provide standard modules that simply check whether the
library is there (and return true or false, or some failure message).

Initially the distutils package could supply external dependency modules
for some libraries (such as for GNU readline). Eventually the developers
or packagers could start to supply these things (we could initiate a
central archive for them, or bundle them with the distribution, so other
developers or packagers don't have to reinvent the wheel). After that, the
developers of these external libraries themselves might start providing
them. :)

An external dependency checking system can be as simple or as complicated
as one likes. A simple system would just check whether libfoo.1.5 is
there. A more complicated system might look for libfoo.1.5 and upwards.
The simplest system of all is one where the dependency module is installed
automatically along with libfoo.1.5 -- it doesn't need much checking code
then. :)

What is probably out of scope is autoconf-style fallback behavior: if we
can't find libfoo, we are still able to use libbar to provide the same
functionality. Unless this somehow follows easily from the design, we
shouldn't aim for it.

Anyway, this was just discussion. We haven't fully worked out the
implications of an external dependency system yet. How useful would it be?
How platform-independent would or should it be? How do the external
dependency modules get distributed? With distutils? With distutils
packages? With the external libraries? Where are these external dependency
modules stored?

The advantage of Python is that we have a full-fledged programming
language at our hands to do configure-style things. This can make things
messier, but it can also definitely make some things trivial that are hard
to do with make-like systems (especially if there are some suitable Python
modules to help). We shouldn't be *too* scared of rebuilding configure
functionality, and we shouldn't be too worried about developers and
packagers having to learn yet another make language; it'd just be Python,
and if there are enough batteries included it shouldn't be hard.

Regards, Martijn
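A minimal sketch of one such external dependency module, in the spirit
Martijn describes -- the module name, search paths, and library names
below are assumptions, and a real check would need platform-specific
knowledge:

    # checkdep_readline.py -- hypothetical external dependency module for
    # GNU readline.  The directories and file names are assumptions.
    import os

    _LIB_DIRS = ["/usr/lib", "/usr/local/lib", "/lib"]
    _NAMES = ["libreadline.so", "libreadline.a"]

    def check():
        """Return (1, '') if readline looks installed, else (0, reason)."""
        for libdir in _LIB_DIRS:
            for name in _NAMES:
                if os.path.exists(os.path.join(libdir, name)):
                    return 1, ""
        return 0, "no readline library found"
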
From gmcm@hypernet.com Sat Feb 20 20:02:43 1999
From: gmcm@hypernet.com (Gordon McMillan)
Date: Sat, 20 Feb 1999 15:02:43 -0500
Subject: [Distutils] Installer
Message-ID: <1292580620-4006186@hypernet.com>

Hi all,

I've taken Greg Stein's "small" distribution and (with his advice(*) and
help) created a mechanism for building compressed single-file importable
Python libraries. This is cross-platform, depending only on zlib (and
Greg's imputil.py). There are a number of ways you can create these
archives -- grabbing directories or packages, or computing the
dependencies of a particular script.

Then, for Win32 users, I've taken it a few steps further. I've mixed this
with Christian Tismer's sqfreeze (based on /F's squeeze) and freeze, and
created a compiler-less Python installer. You can also use it much like
freeze, except you don't need a compiler. The major difference is that,
once installed, it won't be a completely standalone executable. The
installer will unpack the dependencies into the exe's directory. There
will be no dependencies outside that directory(**). You can pack in
extension modules, dlls and anything else you like. The Python lib support
will be contained in a single archive file.

So the equivalent of a frozen pure-Python script would be one
self-installing exe that expands to 5 files: the "squeezed" main script,
python15.dll, zlib.dll, zlib.pyd and your private support lib
(something.pyz).

Check it out - http://www.mcmillan-inc.com/install.html

Oh yes, while the code has my copyright, it's released under a
do-what-you-want license. Mostly, I glued together what others had done
(freeze, sqfreeze, Greg's stuff...).

- Gordon

(*) Not always followed. Don't blame Greg.
(**) Note that if you want to install something that makes use of Mark's
COM extensions (particularly, Python COM servers), you can't get away with
this.

From tismer@appliedbiometrics.com Mon Feb 22 17:08:07 1999
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Mon, 22 Feb 1999 18:08:07 +0100
Subject: [Distutils] Re: Installer
References: <1292580620-4006186@hypernet.com>
Message-ID: <36D18EF7.46BAF490@appliedbiometrics.com>

Hi Gordon,

I had a look - nice thingy. Maybe this is the way to do it. I resisted
unpacking everything from the C starter and was heading more toward having
Python up before the installation process. This would make it possible to
be much smarter when it comes to deciding which files to pull out, where
they should go, etc. I think this part should already be Python. Therefore
I spent time on how to build Python stand-alone, linked with zlib. But
maybe your way is the better one.

My target was to keep as many files as possible in the .exe and use it as
a repository, acting as a virtual file system. Redirection to a fake file
system should be easy with Greg's module. Only the dlls must be pulled
out; there is no way to avoid this. I thought of /windows/temp for them.

Your cookie looks funny. I looked at it from a dos window and I'm trying
to figure out what the meaning of 'MEI' plus all the male/female symbols
could be :-)

ciao - chris
--
Christian Tismer :^)
Applied Biometrics GmbH : Have a break! Take a ride on Python's
Kaiserin-Augusta-Allee 101 : *Starship* http://starship.python.net
10553 Berlin : PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
we're tired of banana software - shipped green, ripens at home

From gstein@lyra.org Mon Feb 22 20:31:51 1999
From: gstein@lyra.org (Greg Stein)
Date: Mon, 22 Feb 1999 12:31:51 -0800
Subject: [Distutils] Re: Installer
References: <1292580620-4006186@hypernet.com> <36D18EF7.46BAF490@appliedbiometrics.com>
Message-ID: <36D1BEB7.20CF919@lyra.org>

Christian Tismer wrote:
>...
> My target was to keep as many files as possible in the .exe
> and use it as a repository, acting as a virtual file system.
> Redirection to a fake file system should be easy with Greg's
> module. Only the dlls must be pulled out; there is no way to
> avoid this. I thought of /windows/temp for them.

Yes, a simple variation on a theme uses win32api to read resources out of
the .exe. Unmarshal the string to get a code object and return it (maybe
decompress before unmarshalling). imputil.py takes care of the rest, once
you can return a code object. I suspect an Importer that uses Win32
resources is a dozen lines or so.

Cheers,
-g
--
Greg Stein, http://www.lyra.org/
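A rough guess at what Greg's dozen lines might look like -- the resource
type, the naming scheme, and the details of get_code()'s return convention
are assumptions here, not a worked-out interface:

    # Hypothetical sketch: an imputil Importer that loads marshalled,
    # zlib-compressed code objects from Win32 resources in the running
    # exe.  The "PYTHONMODULE" resource type is an invented name.
    import imputil, marshal, zlib, win32api

    class ResourceImporter(imputil.Importer):
        def get_code(self, parent, modname, fqname):
            if parent is not None:
                return None          # this sketch ignores packages
            try:
                exe = win32api.GetModuleHandle(None)    # the running exe
                data = win32api.LoadResource(exe, "PYTHONMODULE", fqname)
            except win32api.error:
                return None          # not ours; let other importers try
            code = marshal.loads(zlib.decompress(data))
            return 0, code, {}       # not a package, no extra module vars

    # ResourceImporter().install() would hook it into the import chain.
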
From gmcm@hypernet.com Mon Feb 22 20:59:00 1999
From: gmcm@hypernet.com (Gordon McMillan)
Date: Mon, 22 Feb 1999 15:59:00 -0500
Subject: [Distutils] Re: Installer
In-Reply-To: <36D18EF7.46BAF490@appliedbiometrics.com>
Message-ID: <1292404436-150467@hypernet.com>

Christian wrote:
> I had a look - nice thingy.

Thanks.

...
> My target was to keep as many files as possible in the .exe
> and use it as a repository, acting as a virtual file system.
> Redirection to a fake file system should be easy with Greg's
> module. Only the dlls must be pulled out; there is no way to
> avoid this. I thought of /windows/temp for them.

The only difference between Launch.exe and Run.exe is that Run removes
everything it pulls out at end of run. Of course, if you don't reach end
of run, it'll leave everything lying around.

If you wanted a _really_ self-contained virtual file system, then the
"stuff" should be a valid section in the exe, and the C should get a
pointer to that section. No file needed, but you would have to patch the
exe. But since they're just tacked together, and we open the exe as a file
to find everything, it didn't seem worth it to add the smarts for nested
packaging.

> Your cookie looks funny. I looked at it from a dos window
> and I'm trying to figure out what the meaning of 'MEI'
> plus all the male/female symbols could be :-)

MEI stands for "McMillan Enterprises Inc.". The male symbols are an
obvious expression of the phallic nature of structs; I'd guess the female
symbols are evoked by the matriarchal nature of ANSI societies.

or-maybe-it's-the-Freudian-nature-of-Deutsche-dos-boxes-ly y'rs

- Gordon
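For illustration, the "tacked together" layout Gordon describes can be
probed with a few lines. The trailer size, magic value, and field layout
below are invented for the sketch -- not Gordon's actual cookie format:

    # Hypothetical sketch of finding an archive appended to an exe: a
    # fixed-size trailer at the very end records where the archive
    # starts.  "<4si" (4-byte magic + archive offset) is an assumption.
    import struct

    def find_archive(exe_path):
        f = open(exe_path, "rb")
        f.seek(-8, 2)                   # trailer lives at end of file
        magic, start = struct.unpack("<4si", f.read(8))
        f.close()
        if magic != "MEI\0":
            raise ValueError("no archive appended to " + exe_path)
        return start                    # archive bytes begin here

    # The exe still runs normally -- Windows ignores bytes appended to a
    # PE file -- and the launcher opens its own file to read the archive.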