From terry.ware at baesystems.com Thu Dec 2 14:29:38 2004
From: terry.ware at baesystems.com (Terry Ware)
Date: Thu Dec 2 14:29:47 2004
Subject: [Distutils] Embedding C with Python
Message-ID: <5.2.0.9.2.20041202082313.00ada0d8@mail.dc.alphatech.com>

I work on Windows platforms and I have a collection of C routines that are distributed via a DLL. These routines have embedded Python calls. Currently I have to distribute the Python modules separately from the C DLL. Is there a way, using distutils, to combine the Python scripts with the C code in the C DLL so that all I have to distribute is my C DLL along with the pythonxx.dll?

Thanks for your time,

Terry

Terry Ware
Sr. Software Engineer
BAE Systems
Advanced Information Technologies
Suite 500
3811 North Fairfax Drive
Arlington Va. 22203
office: (703)284-8425
fax: (703)524-6280
terry.ware@baesystems.com

From theller at python.net Thu Dec 2 17:09:34 2004
From: theller at python.net (Thomas Heller)
Date: Thu Dec 2 17:08:51 2004
Subject: [Distutils] Embedding C with Python
In-Reply-To: <5.2.0.9.2.20041202082313.00ada0d8@mail.dc.alphatech.com> (Terry Ware's message of "Thu, 02 Dec 2004 08:29:38 -0500")
References: <5.2.0.9.2.20041202082313.00ada0d8@mail.dc.alphatech.com>
Message-ID:

Terry Ware writes:

> I work on Windows platforms and I have a collection of C routines that
> are distributed via a DLL. These routines have embedded Python calls.
> Currently I have to distribute the Python modules separately from the
> C DLL. Is there a way, using distutils, to combine the Python scripts
> with the C code in the C DLL so that all I have to distribute is my
> C DLL along with the pythonxx.dll?

You could create a zipfile from the modules you need, and append the zipfile to your C dll. Then, add the pathname of your dll to sys.path somewhere in your init code, and zipimport should do the trick. At least in principle ;-)

Thomas

From anthony at computronix.com Thu Dec 2 19:21:48 2004
From: anthony at computronix.com (Anthony Tuininga)
Date: Thu Dec 2 19:21:57 2004
Subject: [Distutils] Embedding C with Python
In-Reply-To: References: <5.2.0.9.2.20041202082313.00ada0d8@mail.dc.alphatech.com>
Message-ID: <41AF5D3C.5020703@computronix.com>

Not just in principle: I've done exactly that for some of our applications and it works very well. cx_Freeze (http://starship.python.net/crew/atuining) is what I have been using, but I wouldn't be surprised if py2exe can handle this as well.

Thomas Heller wrote:
> Terry Ware writes:
>
>> I work on Windows platforms and I have a collection of C routines that
>> are distributed via a DLL. These routines have embedded Python calls.
>> Currently I have to distribute the Python modules separately from the
>> C DLL. Is there a way, using distutils, to combine the Python scripts
>> with the C code in the C DLL so that all I have to distribute is my
>> C DLL along with the pythonxx.dll?
>
> You could create a zipfile from the modules you need, and append the
> zipfile to your C dll. Then, add the pathname of your dll to sys.path
> somewhere in your init code, and zipimport should do the trick. At
> least in principle ;-)
>
> Thomas

--
Anthony Tuininga
anthony@computronix.com

Computronix
Distinctive Software. Real People.
Suite 200, 10216 - 124 Street NW
Edmonton, AB, Canada T5N 4A3
Phone: (780) 454-3700
Fax: (780) 454-3838
http://www.computronix.com
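[A minimal sketch of the approach Thomas describes; the module and DLL names below are illustrative. Appending the zip works because a zip archive's directory sits at the end of the file, so zipimport can still locate it behind the DLL's own bytes:]

    # build step: zip up the needed modules and append the zip to the DLL
    import zipfile

    zf = zipfile.ZipFile("library.zip", "w")
    zf.write("mymodule.py")   # add each module the C code will import
    zf.close()

    out = open("mydll.dll", "ab")                 # append, don't overwrite
    out.write(open("library.zip", "rb").read())
    out.close()

    # in the DLL's embedded init code: the DLL itself becomes a sys.path entry
    import sys
    sys.path.insert(0, "C:\\path\\to\\mydll.dll")
    import mymodule    # resolved by zipimport from the appended archive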
From rakis at gmpexpress.net Sat Dec 4 22:16:34 2004
From: rakis at gmpexpress.net (Tom Cocagne)
Date: Sat Dec 4 22:16:05 2004
Subject: [Distutils] Embedding C with Python
In-Reply-To: <5.2.0.9.2.20041202082313.00ada0d8@mail.dc.alphatech.com>
References: <5.2.0.9.2.20041202082313.00ada0d8@mail.dc.alphatech.com>
Message-ID: <200412041616.34897.rakis@gmpexpress.net>

The py2exe tool might be what you're looking for. I've been using it for a while now to simplify the distribution of a project with embedded Python modules and it's working out pretty well. The associated wiki page contains some information on how to use py2exe to assist in the deployment of embedded projects.

py2exe: http://starship.python.net/crew/theller/py2exe/
wiki: http://starship.python.net/crew/theller/moin.cgi/Py2Exe

Tom

On Thursday 02 December 2004 8:29 am, Terry Ware wrote:
> I work on Windows platforms and I have a collection of C routines that are
> distributed via a DLL. These routines have embedded Python calls.
> Currently I have to distribute the Python modules separately from the C
> DLL. Is there a way, using distutils, to combine the Python scripts with
> the C code in the C DLL so that all I have to distribute is my C DLL along
> with the pythonxx.dll?
>
> Thanks for your time,
>
> Terry
>
> Terry Ware
> Sr. Software Engineer
> BAE Systems
> Advanced Information Technologies
> Suite 500
> 3811 North Fairfax Drive
> Arlington Va. 22203
> office: (703)284-8425
> fax: (703)524-6280
> terry.ware@baesystems.com

From bob at redivi.com Tue Dec 7 06:49:28 2004
From: bob at redivi.com (Bob Ippolito)
Date: Tue Dec 7 06:50:04 2004
Subject: [Distutils] [ANN] py2app 0.1.6
Message-ID:

I've rolled together a new 0.1.6 release of py2app that includes the following feature enhancements and probably a few bug fixes:

modulegraph:
- This is now a top-level package and should be cross-platformish and not at all py2app specific (if someone wants a project, integrate this into py2exe/cx_Freeze/etc.)

altgraph:
- Some common code between modulegraph and macholib was moved into altgraph (the ObjectGraph data structure, for example)

macholib:
- Lots of code in its supporting library, ptypes, was removed, rewritten and optimized for performance and simplicity.
- The API has totally been changed (I don't think anyone else uses it, so I don't feel bad about it :)
- It uses altgraph for its data structure now
- More correct algorithms for locating dylibs and frameworks based upon a thorough reading of the dyld source code

bdist_mpkg:
- Made the dependency checking more specific for better Installer compatibility
- Fixed some minor bugs

py2app:
- New "plugin" target for building loadable bundles (i.e. Interface Builder palettes). This is a crazy hack, and will never work perfectly due to the icky globalness of the Python interpreter, but works well enough in practice.
- Plugin example
- Sets a new ARGVZERO environment variable that points to the argv[0] that was passed to main(...).
- Sets a new EXECUTABLEPATH environment variable that points to the actual path of the executable that was run (which will be == to ARGVZERO most of the time)
- Suboptimal PyQt support (sip and PyQt are built in really strange ways and have lots of interdependencies at the C/C++ level, so whenever you use ANY sip module you use ALL sip modules)
- PyQt example
- Suboptimal PyOpenGL support (PyOpenGL has a stupid way of finding its version that prevents it from being easily bundled)
- PyOpenGL example
- py2applet command line tool (performs the same function as the GUI app)

-bob

From pje at telecommunity.com Wed Dec 8 04:40:11 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Dec 8 04:39:51 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
Message-ID: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com>

Many applications need or want to be able to support extension via dynamic installation of "plugin" code, such as Zope Products, Chandler Parcels, and WSGI servers' "application" objects. Frequently, these plugins may require access to other plugins or to libraries of other Python code. Currently, application platforms must create their own binary formats to address their specific requirements, but these formats are of course specific to the platform and not portable, so it's not possible to package a third-party module just once and deploy it in any Python application platform.

Although each platform may have its own additional requirements for the contents of such a "plugin", the minimum basis for such plugins is that they include Python modules and other files, and import and export selected modules. Although the standard distutils pattern is to use a platform's own packaging system, this really only makes sense if you are dealing with Python as a language, and not as an application platform. Platform packaging doesn't make sense for applications that are end-user programmable, because even if the core application can be installed in one location, each instance of use of that application (e.g. per user on a multi-user system) may have its own plugins installed.

Therefore, I would like to propose the creation of:

* a binary package format optimized for "plugin" use (probably just a specific .zip layout)

* a distutils "bdist" command to package a set of modules in this format

* A PEP 302 importer that supports running of code directly from plugin distributions, and which can be configured to install or extract C extension modules. (This will serve as a base for platform developers to create their platform-specific plugin registries, installation mechanisms, etc., although there will be opportunity for standardization/code sharing at this level, too.)

* additions to setup() metadata to support declaration of:

  * modules required/provided, with version information

  * platform metadata for C extensions contained in the plugin distribution

  * ability to specify metadata files to be included in the distribution, that are specific to the target application platform (e.g. Zope config files, Chandler parcel schemas, WSGI deployment configuration, etc.)

(This is actually only "level 1" of the standardization that I'd like to do; levels 2 and 3 would address runtime issues like startup/shutdown of plugins, automatic dependency resolution, isolation between plugins, and mediated service discovery and service management across plugins.
However, these other levels are distinct deliverables that don't necessarily relate to the distutils, except insofar as those levels may influence requirements for the first level. Also, it's important to note that even without those higher levels of standardization, the availability of a "plug and play" distribution format should be beneficial to the Python community, in making it easier for applications to bundle arbitrary libraries. Indeed, tools like py2exe and py2app might grow the ability to automatically "link" bundles needed by an application, and there will likely be many other spin-off uses of this technology once it's available.)

My main idea is to expand the existing PKG-INFO format, adding some formally-defined fields for system processing, that are supplied by the setup script or in additional files. The additional metadata should be syntactically validated (and semantically, to the extent possible), and the distutils should not knowingly produce a plugin package with invalid execution metadata. The specific kinds of metadata needed (that I know of currently) are:

* Static imports (module names and specification of compatible versions)
* Dynamic imports (wildcard masks)
* Exports (module names and versions)
* Platform requirements for included extension modules
* Other platform requirements
* Entry point for dynamic integration (technically a level 2 feature, but it'd be nice to reserve the header and define its syntax)
* Update URL (for obtaining the "latest" version of the plugin)
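[To give a sense of the shape this could take, here is a purely illustrative sketch of an expanded PKG-INFO carrying such fields; every field name below is invented for the example and is not part of any existing spec:]

    Metadata-Version: 1.1
    Name: MyPlugin
    Version: 0.1.2
    Requires-Module: protocols (>=0.9.3)
    Provides-Module: mypackage (0.1.2)
    Dynamic-Imports: mypackage.plugins.*
    Platform-Requires: win32; python (>=2.3)
    Entry-Point: mypackage.activator
    Update-URL: http://example.com/plugins/MyPlugin/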
There are several issues that would have to be hammered out, here. Versioning, for example, both in the sense of formats and qualifiers, and in the sense of versioning a module/package versus versioning a distribution. Gathering information about imports is also tricky. Tools like py2exe try to gather some of this information automatically, but that info doesn't include version requirements. (It may be that we can get by without version requirements, taking the default state to mean, "any version will do.")

Platform specs are another dicey issue, since we're talking about trying to define binary compatibility here. This includes the issue that it might be necessary to include shared libraries other than the C extensions themselves. (For example, like the wxWidgets libraries that ship with wxPython.)

While researching a deployment strategy for WSGI, I discovered the OSGi specifications for Java, which address all of these issues and more. Where OSGi's solutions are directly usable, I'd like to apply them, rather than re-inventing wheels... not to mention axles, brakes, and transmissions! However, their syntax for these things is often weird and verbose, and could probably use some "Pythonizing".

Anyway, I would see the deliverables here as being a PEP documenting the format, and a prototype implementation in setuptools (for current Python versions), that would then migrate to an official implementation in the distutils for Python 2.5. I'd like to find out "who's with me" at this stage in having an interest in any aspect of this project, especially if you have requirements or issues I haven't thought of. Thanks!

From bob at redivi.com Wed Dec 8 05:50:52 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Dec 8 05:50:56 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com>
References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com>
Message-ID:

On Dec 7, 2004, at 10:40 PM, Phillip J. Eby wrote:

> Many applications need or want to be able to support extension via
> dynamic installation of "plugin" code, such as Zope Products, Chandler
> Parcels, and WSGI servers' "application" objects. Frequently, these
> plugins may require access to other plugins or to libraries of other
> Python code. Currently, application platforms must create their own
> binary formats to address their specific requirements, but these
> formats are of course specific to the platform and not portable, so
> it's not possible to package a third-party module just once and
> deploy it in any Python application platform.
>
> Although each platform may have its own additional requirements for
> the contents of such a "plugin", the minimum basis for such plugins is
> that they include Python modules and other files, and import and
> export selected modules. Although the standard distutils pattern is
> to use a platform's own packaging system, this really only makes sense
> if you are dealing with Python as a language, and not as an
> application platform. Platform packaging doesn't make sense for
> applications that are end-user programmable, because even if the core
> application can be installed in one location, each instance of use of
> that application (e.g. per user on a multi-user system) may have its
> own plugins installed.

Is anything you're proposing NOT applicable to Python packages also? A lot of the issues you're proposing to solve are problems we have for more than just plugins. Though, I suppose it's a lot easier to propose a new plugin mechanism than a major overhaul to the module system ;)

> Therefore, I would like to propose the creation of:
>
> * a binary package format optimized for "plugin" use (probably just a
> specific .zip layout)
>
> * a distutils "bdist" command to package a set of modules in this
> format
>
> * A PEP 302 importer that supports running of code directly from
> plugin distributions, and which can be configured to install or
> extract C extension modules. (This will serve as a base for platform
> developers to create their platform-specific plugin registries,
> installation mechanisms, etc., although there will be opportunity for
> standardization/code sharing at this level, too.)

This importer also needs to grow methods so that these plugins can introspect themselves and ask for things (metadata that may be in the zip, etc.). Something smarter than zipimport's get_data would be nice too.

Also, if you allow extraction of C extension modules, you'll probably also have to allow extraction of dependent dlls and whatnot.. which is a real mess. For dependent dynamic libraries on Darwin, this is a *real* mess, because the runtime linker is only affected by environment variables at process startup time. I can only think of two solutions to this for Darwin:

(a) build an executable bundle and execve it on process startup, drop the dependent libraries inside that executable bundle
(b) have some drop location for dependent libraries and C extensions and rewrite the load commands in them before loading (which may fail if there isn't enough wiggle room in the header)

For Darwin, py2app would be more or less required at runtime to do either... Note that either can be done without a compiler handy, because py2app can read and write enough of the Mach-O object file format (equivalent to ELF or PE).
> * additions to setup() metadata to support declaration of: > > * modules required/provided, with version information Does this mean sys.modules keys will also have a version in them? How do you import cross-plugin or cross-package in this scenario? > * platform metadata for C extensions contained in the plugin > distribution How do you build one of these that has C extensions for multiple platforms? Multiple variants of the same platform? Multiple Python runtime versions? It's not always possible to cross-compile from platform A to B for arbitrary A and B. > * ability to specify metadata files to be included in the > distribution, that are specific to the target application platform > (e.g. Zope config files, Chandler parcel schemas, WSGI deployment > configuration, etc.) > > (This is actually only "level 1" of the standardization that I'd like > to do; levels 2 and 3 would address runtime issues like > startup/shutdown of plugins, automatic dependency resolution, > isolation between plugins, and mediated service discovery and service > management across plugins. However, these other levels are distinct > deliverables that don't necessarily relate to the distutils, except > insofar as those levels may influence requirements for the first > level. Also, it's important to note that even without those higher > levels of standardization, the availability of a "plug and play" > distribution format should be beneficial to the Python community, in > making it easier for applications to bundle arbitrary libraries. > Indeed, tools like py2exe and py2app might grow the ability to > automatically "link" bundles needed by an application, and there will > likely be many other spin-off uses of this technology once it's > available.) > > My main idea is to expand the existing PKG-INFO format, adding some > formally-defined fields for system processing, that are supplied by > the setup script or in additional files. The additional metadata > should be syntactically validated (and semantically, to the extent > possible), and the distutils should not knowingly produce a plugin > package with invalid execution metadata. The specific kinds of > metadata needed (that I know of currently) are: > > * Static imports (module names and specification of compatible > versions) > * Dynamic imports (wildcard masks) > * Exports (module names and versions) > * Platform requirements for included extension modules > * Other platform requirements > * Entry point for dynamic integration (technically a level 2 feature, > but it'd be nice to reserve the header and define its syntax) What do you mean by "entry point" and "dynamic integration"? > * Update URL (for obtaining the "latest" version of the plugin) How about some kind of public key as well, so that if you visit the update URL you will know if the new package was provided by the same author or not? > There are several issues that would have to be hammered out, here. > Versioning, for example, both in the sense of formats and qualifiers, > and in the sense of versioning a module/package versus versioning a > distribution. Gathering information about imports is also tricky. > Tools like py2exe try to gather some of this information > automatically, but that info doesn't include version requirements. > (It may be that we can get by without version requirements, taking the > default state to mean, "any version will do.") If we require that modules specify a __version__ that is a "constant", it would be easy to parse... 
When you "link" this bundle, it could automatically say that it requires a version >= to the version it saw when it was scanned for dependencies (unless explicitly specified to be more or less specific). In any case, I think versioning is a really hard problem, especially in Python due to sys.modules and the import mechanism, so I think that this task should be deferred. If Python developers really felt strongly about this, we'd probably see more packages with version numbers in their names :) > Platform specs are another dicey issue, since we're talking about > trying to define binary compatibility here. This includes the issue > that it might be necessary to include shared libraries other than the > C extensions themselves. (For example, like the wxWidgets libraries > that ship with wxPython.) The only reasonable solution I've found to this issue is to just shove every dependency in the package.. any less than that and it's just too hard to deal with. > While researching a deployment strategy for WSGI, I discovered the > OSGi specifications for Java, which address all of these issues and > more. Where OSGi's solutions are directly usable, I'd like to apply > them, rather than re-inventing wheels... not to mention axles, brakes, > and transmissions! However, their syntax for these things is often > weird and verbose, and could probably use some "Pythonizing". > > Anyway, I would see the deliverables here as being a PEP documenting > the format, and a prototype implementation in setuptools (for current > Python versions), that would then migrate to an official > implementation in the distutils for Python 2.5. I'd like to find out > "who's with me" at this stage in having an interest in any aspect of > this project, especially if you have requirements or issues I haven't > thought of. Thanks! I'm +100 on this venture, but I could definitely see it happening in pieces, and much of it being applied to packages in general rather than just plugins. -bob From bear at code-bear.com Wed Dec 8 06:20:52 2004 From: bear at code-bear.com (Mike Taylor) Date: Wed Dec 8 06:20:59 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> Message-ID: I'm also interested in this idea. While I'm new to the distutils scene, I am currently tasked with creating installation tools for Chandler so maybe I can be of some help. On Dec 7, 2004, at 11:50 PM, Bob Ippolito wrote: >> * Update URL (for obtaining the "latest" version of the plugin) > > How about some kind of public key as well, so that if you visit the > update URL you will know if the new package was provided by the same > author or not? Some sort of signature, or other means of validation, would be a must - otherwise the security risk would make cross-site scripting attacks look enjoyable :) >> There are several issues that would have to be hammered out, here. >> Versioning, for example, both in the sense of formats and qualifiers, >> and in the sense of versioning a module/package versus versioning a >> distribution. Gathering information about imports is also tricky. >> Tools like py2exe try to gather some of this information >> automatically, but that info doesn't include version requirements. >> (It may be that we can get by without version requirements, taking >> the default state to mean, "any version will do.") > > If we require that modules specify a __version__ that is a "constant", > it would be easy to parse... 
When you "link" this bundle, it could > automatically say that it requires a version >= to the version it saw > when it was scanned for dependencies (unless explicitly specified to > be more or less specific). > > In any case, I think versioning is a really hard problem, especially > in Python due to sys.modules and the import mechanism, so I think that > this task should be deferred. If Python developers really felt > strongly about this, we'd probably see more packages with version > numbers in their names :) The only issue I see with versioning would be if you have a plugin that is operating specific - then the version information would need to include that metadata. >> Platform specs are another dicey issue, since we're talking about >> trying to define binary compatibility here. This includes the issue >> that it might be necessary to include shared libraries other than the >> C extensions themselves. (For example, like the wxWidgets libraries >> that ship with wxPython.) > > The only reasonable solution I've found to this issue is to just shove > every dependency in the package.. any less than that and it's just too > hard to deal with. I took a good look at how to make Chandler be able to use existing modules/libraries in order to reduce the number of included dependencies and ended up putting the idea down and just backing slowly away from it. Without the metadata information that this idea would provide it just seems to be one big mess. -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/distutils-sig/attachments/20041208/f1bc185f/PGP.pgp From bob at redivi.com Wed Dec 8 07:51:56 2004 From: bob at redivi.com (Bob Ippolito) Date: Wed Dec 8 07:52:33 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> Message-ID: On Dec 8, 2004, at 12:20 AM, Mike Taylor wrote: >>> There are several issues that would have to be hammered out, here. >>> Versioning, for example, both in the sense of formats and >>> qualifiers, and in the sense of versioning a module/package versus >>> versioning a distribution. Gathering information about imports is >>> also tricky. Tools like py2exe try to gather some of this >>> information automatically, but that info doesn't include version >>> requirements. (It may be that we can get by without version >>> requirements, taking the default state to mean, "any version will >>> do.") >> >> If we require that modules specify a __version__ that is a >> "constant", it would be easy to parse... When you "link" this >> bundle, it could automatically say that it requires a version >= to >> the version it saw when it was scanned for dependencies (unless >> explicitly specified to be more or less specific). >> >> In any case, I think versioning is a really hard problem, especially >> in Python due to sys.modules and the import mechanism, so I think >> that this task should be deferred. If Python developers really felt >> strongly about this, we'd probably see more packages with version >> numbers in their names :) > > The only issue I see with versioning would be if you have a plugin > that is operating specific - then the version information would need > to include that metadata. I don't think that's really a big deal. 
The real problem, from my perspective, is the interpreter-global sys.modules. Sure, you could get around it, but only if you replace Python's import machinery entirely (which you can do, but that is also interpreter-global). For example, let's say I have barPackage that needs foo 1.0 and bazPackage that needs foo 2.0. How does that work? foo 1.0 and foo 2.0 can't both be sys.modules['foo']. -bob From pje at telecommunity.com Wed Dec 8 14:35:34 2004 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed Dec 8 14:35:25 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> At 11:50 PM 12/7/04 -0500, Bob Ippolito wrote: >On Dec 7, 2004, at 10:40 PM, Phillip J. Eby wrote: > >>Many applications need or want to be able to support extension via >>dynamic installation of "plugin" code, such as Zope Products, Chandler >>Parcels, and WSGI servers' "application" objects. Frequently, these >>plugins may require access to other plugins or to libraries of other >>Python code. Currently, application platforms must create their own >>binary formats to address their specific requirements, but these formats >>are of course specific to the platform and not portable, so it's not >>possible to package a third-party module just once, and deploy it in any >>Python application platform. >> >>Although each platform may have its own additional requirements for the >>contents of such a "plugin", the minimum basis for such plugins is that >>they include Python modules and other files, and import and export >>selected modules. Although the standard distutils pattern is to use a >>platform's own packaging system, this really only makes sense if you are >>dealing with Python as a language, and not as an application >>platform. Platform packaging doesn't make sense for applications that >>are end-user programmable, because even if the core application can be >>installed in one location, each instance of use of that application (e.g. >>per user on a multi-user system) may have its own plugins installed. > >Is anything you're proposing NOT applicable to Python packages also? Not at "level 1", no. The other levels are definitely plug-in specific, though, apart from simple import/export resolution. However, I'm focusing on plugins because 1) that's my use case, and 2) I'd like to keep it focused on deliverables, and not let the effort wander off into some kind of vague "we need a CPAN clone" sort of discussion. :) In particular, I want to keep PEP 262 and any download or installation tools completely out of scope at level 1. > A lot of the issues you're proposing to solve are problems we have for > more than just plugins. Though, I suppose it's a lot easier to propose a > new plugin mechanism than a major overhaul to the module system ;) :) >>* A PEP 302 importer that supports running of code directly from plugin >>distributions, and which can be configured to install or extract C >>extension modules. (This will serve as a base for platform developers to >>create their platform-specific plugin registries, installation >>mechanisms, etc., although there will be opportunity for >>standardization/code sharing at this level, too.) > >This importer also needs to grow methods so that these plugins can >introspect themselves and ask for things (metadata that may be in the zip, >etc.). 
Something smarter than zipimport's get_data would be nice too. Yes, but I consider that part of level 2. For level 1, I only want to enable normal PEP 302 get_data() support, which is about as good as OSGi level 1 already. >Also, if you allow extraction of C extension modules, you'll probably also >have to allow extraction of dependent dlls and whatnot.. which is a real >mess. For dependent dynamic libraries on Darwin, this is a *real* mess, >because the runtime linker is only affected by environment variables at >process startup time. I can only think of two solutions to this for Darwin: >(a) build an executable bundle on and execve it on process startup, drop >the dependent libraries inside that executable bundle >(b) have some drop location for dependent libraries and C extensions and >rewrite the load commands in them before loading (which may fail if there >isn't enough wiggle room in the header) With regard to 'b', I'm not quite sure I understand about the rewriting load commands. Are you saying that on Darwin, you have no LD_LIBRARY_PATH? Because, wouldn't it suffice for the application to have that defined when it starts, and install the libraries on that path? What am I missing, here? IOW, if you have a directory set up on LD_LIBRARY_PATH or its equivalent, can't you just dump the libraries and C extensions there? >>* additions to setup() metadata to support declaration of: >> >> * modules required/provided, with version information > >Does this mean sys.modules keys will also have a version in them? How do >you import cross-plugin or cross-package in this scenario? For level 1, I'm only concerned with ensuring that for some plugin X, it's possible to construct a sys.path that allows its dependencies to be imported. Simultaneous resolution of conflicting dependencies from multiple plugins is a level 2 goal, and I have several possible solutions in mind for it, but I don't want to bring in distractions at level 1. As long as the plugin format contains the necessary metadata for level 2 tools to handle it, I think we should go ahead and get level 1 implemented. In any case, there aren't any platforms that cleanly support such cross-plugin version skew today, so it's not like we'll be going backwards. We'll just be going forwards, once the base format exists. >> * platform metadata for C extensions contained in the plugin distribution > >How do you build one of these that has C extensions for multiple >platforms? Multiple variants of the same platform? Multiple Python >runtime versions? It's not always possible to cross-compile from platform >A to B for arbitrary A and B. I don't want to try. I'd be fine with saying that a plugin file with C extensions is always specific to a single platform, where "platform" is a specified processor and set of operating system versions. It might be nice to have a tool that can take a distutils-packaged source distro and build a plugin from it, but I think that's a separate issue. >>* Entry point for dynamic integration (technically a level 2 feature, but >>it'd be nice to reserve the header and define its syntax) > >What do you mean by "entry point" and "dynamic integration"? In OSGi, there's a "Bundle-Activator" field that carries the name of a class in the package that will be instantiated and have its 'start()' and 'stop()' methods called when the plugin is started or stopped by the application platform. 
The methods get passed a BundleContext object that provides them with lots of goodies for accessing plugin metadata, and registering or discovering services between plugins. So, the "entry point" would probably be a class, and "dynamic integration" means offering services, starting threads or servers, etc. Detailed design of these facilities should be left to levels 2 and 3, so as not to hold up the base system. >>* Update URL (for obtaining the "latest" version of the plugin) > >How about some kind of public key as well, so that if you visit the update >URL you will know if the new package was provided by the same author or not? Sounds like a cool idea, although I'm not sure how you could sign a zipfile from inside the zipfile. Unless the idea is to stick some extra data on the front or back? >If we require that modules specify a __version__ that is a "constant", it >would be easy to parse... When you "link" this bundle, it could >automatically say that it requires a version >= to the version it saw when >it was scanned for dependencies (unless explicitly specified to be more or >less specific). Interesting idea. Of course, __version__ often doesn't exist, and when it does, it often consists of CVS tag spew. >In any case, I think versioning is a really hard problem, especially in >Python due to sys.modules and the import mechanism, so I think that this >task should be deferred. We need to distinguish between being able to resolve dependencies, and resolve conflicts. We can focus on resolving dependencies now, and conflicts later. There are several possible conflict resolution mechanisms, each suitable for different scenarios. From pje at telecommunity.com Wed Dec 8 14:44:24 2004 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed Dec 8 14:44:13 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20041208083557.03b83cc0@mail.telecommunity.com> At 12:20 AM 12/8/04 -0500, Mike Taylor wrote: >I'm also interested in this idea. While I'm new to the distutils scene, I >am currently tasked with creating installation tools for Chandler so maybe >I can be of some help. Great, that saves me the trouble of waving down an OSAF person to get involved. :) >On Dec 7, 2004, at 11:50 PM, Bob Ippolito wrote: >>How about some kind of public key as well, so that if you visit the >>update URL you will know if the new package was provided by the same >>author or not? > >Some sort of signature, or other means of validation, would be a must - >otherwise the security risk would make cross-site scripting attacks look >enjoyable :) No more so than with any other current technique for installing Python modules, but nonetheless a signature mechanism is a "nice-to-have" at the current level of things. I'd like to leave a way open for that, but not allow it to hold up implementation of a basic format, so that we can start experimenting with its use. >The only issue I see with versioning would be if you have a plugin that is >operating specific - then the version information would need to include >that metadata. Platform dependency for C extensions, plus general OS platform dependencies should be part of the metadata, but they don't need versioning, except e.g. OS version dependencies. 
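[In the spirit of the "Bundle-Activator" idea Phillip describes above, a hypothetical Python counterpart could be as small as the sketch below; the Activator class, the context object, and its register_service/unregister_service methods are all invented for illustration and are not an existing API:]

    class SpellChecker:
        """Stand-in for some service this plugin would offer."""
        def check(self, word):
            return True

    class Activator:
        """Named by the plugin's (hypothetical) entry-point metadata field."""

        def start(self, context):
            # 'context' would expose plugin metadata and a service registry
            self.token = context.register_service("example.spellcheck",
                                                  SpellChecker())

        def stop(self, context):
            context.unregister_service(self.token)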
>I took a good look at how to make Chandler be able to use existing >modules/libraries in order to reduce the number of included dependencies >and ended up putting the idea down and just backing slowly away from >it. Without the metadata information that this idea would provide it just >seems to be one big mess. Indeed. In the general case, a "plugin" under this concept could easily just be a bdist_plugin of an arbitrary package, however, like Twisted or wxPython. Granted, they might have to tweak the setup scripts a little in order to properly enable it, but once the format exists, a lot of other things should be possible. For example, if you have a pure-Python application with no C extensions, but you depend on packages that do have C extensions, you could just bundle platform-specific plugin distros of those other packages, built by people with access to the target platform. From pje at telecommunity.com Wed Dec 8 15:11:38 2004 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed Dec 8 15:11:29 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20041208084430.03b88560@mail.telecommunity.com> At 01:51 AM 12/8/04 -0500, Bob Ippolito wrote: >I don't think that's really a big deal. The real problem, from my >perspective, is the interpreter-global sys.modules. Sure, you could get >around it, but only if you replace Python's import machinery entirely >(which you can do, but that is also interpreter-global). > >For example, let's say I have barPackage that needs foo 1.0 and bazPackage >that needs foo 2.0. How does that work? foo 1.0 and foo 2.0 can't both >be sys.modules['foo']. [Disclaimer: the rest of this post discusses possible solutions for the versioning problem, but please let's not get into solving that problem right now; it's not really a distutils issue, except insofar as the distutils needs to provide the metadata to allow the problem to be solved. I really don't want to get into detailed design of an actual implementation, and am just sharing these ideas to show that solutions are *possible*.] One possible solution is to place each plugin under a prefix name, like making the protocols package in PyProtocols show up as "PyProtocols_0_9_3.protocols". Then, the rest can be accomplished with standard PEP 302 import hooks. Let's say that another plugin, "PEAK_0_5a4", wants to import the protocols module. The way that imports currently work, the local package is tried before the global non-package namespace. So, if the module 'PEAK_0_5a4.peak.core' tries to import 'protocols', the import machinery first looks for 'PEAK_0_5a4.peak.protocols' -- which the PEP 302 import hook plugin will be called for, since it's the loader that would be responsible if that module or package really existed. However, when it sees the package doesn't exist, it simply checks its dependency resolution info in order to see that it should import 'PyProtocols_0_9_3.protocols', and then stick an extra reference to it in sys.modules under 'PEAK_0_5a4.peak.protocols'. It's messy, but it would work. There are a few sticky points, however: * "Relative-then-absolute" importing is considered bad and is ultimately supposed to go away in some future Python version * Absolute imports done dynamically will not be trapped (e.g. 
PEAK's "lazy import" facility) unless there's an __import__ hook also used * Modules that do things with '__name__' may act strangely (Of course, a more sophisticated version of this technique could only prefix packages that have more than one version installed and required, and so reduce the reach of these issues quite a bit.) Another possible solution is to use the Python multi-interpreter API, wrapped via ctypes or Pyrex or some such, and using an interpreter per plugin. Each interpreter has its own builtins, sys.modules and sys.path, so each plugin sees the universe exactly as it wants to. And, there are probably other solutions as well. I bring these up only to point out that it's possible to get quite close to a solution with only PEP 302 import hooks, without even trapping __import__. IMO, as long as we provide adequate metadata to allow a conflict resolver to know who wants what and who's got what, we have what we need for the first round of design and implementation: the binary format. Once a binary format exists, it becomes possible for lots of people to experiment with creating installers, registries, conflict resolvers, signature checkers, autoupdaters, etc. But until we have a binary format, it's all just hot air. From bob at redivi.com Wed Dec 8 15:37:28 2004 From: bob at redivi.com (Bob Ippolito) Date: Wed Dec 8 15:37:37 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> Message-ID: On Dec 8, 2004, at 8:35, Phillip J. Eby wrote: > At 11:50 PM 12/7/04 -0500, Bob Ippolito wrote: >> On Dec 7, 2004, at 10:40 PM, Phillip J. Eby wrote: >> Also, if you allow extraction of C extension modules, you'll probably >> also have to allow extraction of dependent dlls and whatnot.. which >> is a real mess. For dependent dynamic libraries on Darwin, this is a >> *real* mess, because the runtime linker is only affected by >> environment variables at process startup time. I can only think of >> two solutions to this for Darwin: >> (a) build an executable bundle on and execve it on process startup, >> drop the dependent libraries inside that executable bundle >> (b) have some drop location for dependent libraries and C extensions >> and rewrite the load commands in them before loading (which may fail >> if there isn't enough wiggle room in the header) > > With regard to 'b', I'm not quite sure I understand about the > rewriting load commands. Are you saying that on Darwin, you have no > LD_LIBRARY_PATH? Because, wouldn't it suffice for the application to > have that defined when it starts, and install the libraries on that > path? What am I missing, here? Load commands (runtime dependencies between Mach-O files) have full paths embedded in them, not just names, so that is why header rewriting is useful. If these load commands start with "@executable_path/", then the first place the library is looked for will be relative to the executable, which makes (a) possible and is what py2app already does when you start including dependencies. > IOW, if you have a directory set up on LD_LIBRARY_PATH or its > equivalent, can't you just dump the libraries and C extensions there? 
From bob at redivi.com Wed Dec 8 15:37:28 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Dec 8 15:37:37 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com>
References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com>
Message-ID:

On Dec 8, 2004, at 8:35, Phillip J. Eby wrote:

> At 11:50 PM 12/7/04 -0500, Bob Ippolito wrote:
>> Also, if you allow extraction of C extension modules, you'll probably
>> also have to allow extraction of dependent dlls and whatnot.. which
>> is a real mess. For dependent dynamic libraries on Darwin, this is a
>> *real* mess, because the runtime linker is only affected by
>> environment variables at process startup time. I can only think of
>> two solutions to this for Darwin:
>> (a) build an executable bundle and execve it on process startup,
>> drop the dependent libraries inside that executable bundle
>> (b) have some drop location for dependent libraries and C extensions
>> and rewrite the load commands in them before loading (which may fail
>> if there isn't enough wiggle room in the header)
>
> With regard to 'b', I'm not quite sure I understand about the
> rewriting load commands. Are you saying that on Darwin, you have no
> LD_LIBRARY_PATH? Because, wouldn't it suffice for the application to
> have that defined when it starts, and install the libraries on that
> path? What am I missing, here?

Load commands (runtime dependencies between Mach-O files) have full paths embedded in them, not just names, so that is why header rewriting is useful. If these load commands start with "@executable_path/", then the first place the library is looked for will be relative to the executable, which makes (a) possible and is what py2app already does when you start including dependencies.

> IOW, if you have a directory set up on LD_LIBRARY_PATH or its
> equivalent, can't you just dump the libraries and C extensions there?

Darwin has a pair of environment variables that are sort-of equivalent to LD_LIBRARY_PATH, however, their values are cached by the runtime linker (dyld) as soon as a process starts. Since a new process has to be started to use these from this environment anyway, (a) is the better option, because these environment variables (DYLD_FRAMEWORK_PATH or DYLD_LIBRARY_PATH) have penalties associated with them.

>>> * Update URL (for obtaining the "latest" version of the plugin)
>>
>> How about some kind of public key as well, so that if you visit the
>> update URL you will know if the new package was provided by the same
>> author or not?
>
> Sounds like a cool idea, although I'm not sure how you could sign a
> zipfile from inside the zipfile. Unless the idea is to stick some
> extra data on the front or back?

How about we include a manifest file that includes filename, size, and a hash of the file's contents, and has the author's public key in there somewhere at the top or bottom. A second file, or a SMIME or PGP style wrapper around the manifest file, will contain the hash of the manifest file that is signed by the author's private key.

Obviously this doesn't do you much good until you download an upgrade, but when you do download an upgrade, you can verify that the author is the same as what you already have and its contents have not been tampered with. Verifying the first time you download something is not really possible anyway, because you don't already "know" the author, so you would need some out of band mechanism.

-bob

From bob at redivi.com Wed Dec 8 15:48:08 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Dec 8 15:48:17 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com>
Message-ID: <2CB70AAB-4928-11D9-8733-000A95BA5446@redivi.com>

On Dec 8, 2004, at 9:37, Bob Ippolito wrote:
> On Dec 8, 2004, at 8:35, Phillip J. Eby wrote:
>
>> IOW, if you have a directory set up on LD_LIBRARY_PATH or its
>> equivalent, can't you just dump the libraries and C extensions there?
>
> Darwin has a pair of environment variables that are sort-of equivalent
> to LD_LIBRARY_PATH, however, their values are cached by the runtime
> linker (dyld) as soon as a process starts. Since a new process has to
> be started to use these from this environment anyway, (a) is the
> better option because setting these environment variables have
> panDYLD_FRAMEWORK_PATH or DYLD_LIBRARY_PATH has penalties associated
> with it.

I guess it's too early for me to be multitasking, but that was supposed to say "these environment variables have penalties associated with them".

-bob
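[A rough sketch of generating the manifest body Bob proposes above; the one-line-per-file format ("path size sha1") is invented for illustration, and producing the detached signature over the manifest is left to an external tool such as gpg or openssl:]

    import os, hashlib

    def build_manifest(root):
        # one "relative-path size sha1-hexdigest" line per file under 'root'
        lines = []
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                data = open(path, "rb").read()
                rel = path[len(root) + 1:].replace(os.sep, "/")
                lines.append("%s %d %s" % (rel, len(data),
                                           hashlib.sha1(data).hexdigest()))
        return "\n".join(lines) + "\n"

    open("MANIFEST-1.0", "w").write(build_manifest("myplugin"))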
From pje at telecommunity.com Wed Dec 8 16:06:58 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Dec 8 16:06:52 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: References: <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com>

At 09:37 AM 12/8/04 -0500, Bob Ippolito wrote:
>On Dec 8, 2004, at 8:35, Phillip J. Eby wrote:
>>At 11:50 PM 12/7/04 -0500, Bob Ippolito wrote:
>>>Also, if you allow extraction of C extension modules, you'll probably
>>>also have to allow extraction of dependent dlls and whatnot.. which is a
>>>real mess. For dependent dynamic libraries on Darwin, this is a *real*
>>>mess, because the runtime linker is only affected by environment
>>>variables at process startup time. I can only think of two solutions to
>>>this for Darwin:
>>>(a) build an executable bundle and execve it on process startup, drop
>>>the dependent libraries inside that executable bundle
>>>(b) have some drop location for dependent libraries and C extensions and
>>>rewrite the load commands in them before loading (which may fail if
>>>there isn't enough wiggle room in the header)
>>
>>With regard to 'b', I'm not quite sure I understand about the rewriting
>>load commands. Are you saying that on Darwin, you have no
>>LD_LIBRARY_PATH? Because, wouldn't it suffice for the application to
>>have that defined when it starts, and install the libraries on that
>>path? What am I missing, here?
>
>Load commands (runtime dependencies between Mach-O files) have full paths
>embedded in them, not just names, so that is why header rewriting is
>useful. If these load commands start with "@executable_path/", then the
>first place the library is looked for will be relative to the executable,
>which makes (a) possible and is what py2app already does when you start
>including dependencies.

Okay, now I'm really confused. How the heck does Python manage to load dynamic stuff at all, if everything has to have absolute paths in them? Can you use load commands relative to the location of the library itself? And who designed this crazy thing? ;)

>>IOW, if you have a directory set up on LD_LIBRARY_PATH or its equivalent,
>>can't you just dump the libraries and C extensions there?
>
>Darwin has a pair of environment variables that are sort-of equivalent to
>LD_LIBRARY_PATH, however, their values are cached by the runtime linker
>(dyld) as soon as a process starts.

I meant, couldn't a given application instance just say, "okay, this is where I'm going to put my libraries", and have the environment variable set before it starts? That way, it could add new stuff to that directory at runtime without needing to restart.

I suppose if the path is relative to some executable, then you could still do that at runtime.

>How about we include a manifest file that includes filename, size, and a
>hash of the file's contents, and has the author's public key in there
>somewhere at the top or bottom. A second file, or a SMIME or PGP style
>wrapper around the manifest file, will contain the hash of the manifest
>file that is signed by the author's private key.

I like this. Specifically, I like the part that it's a separate and optional file, so it doesn't hold up the base format definition. We just need to be able to define how metadata files like this get included in the format, so that other metadata files (like a Chandler Parcel schema, or a Zope ZCML file) would be includable also. Then, the bdist_plugin command would just package up those files, possibly after optionally generating the signature manifest.
From theller at python.net Wed Dec 8 16:31:20 2004
From: theller at python.net (Thomas Heller)
Date: Wed Dec 8 16:30:31 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: <5.1.1.6.0.20041208084430.03b88560@mail.telecommunity.com> (Phillip J. Eby's message of "Wed, 08 Dec 2004 09:11:38 -0500")
References: <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208084430.03b88560@mail.telecommunity.com>
Message-ID:

"Phillip J. Eby" writes:

> Another possible solution is to use the Python multi-interpreter API,
> wrapped via ctypes or Pyrex or some such, and using an interpreter per
> plugin. Each interpreter has its own builtins, sys.modules and
> sys.path, so each plugin sees the universe exactly as it wants to.

Each time I think about this, because there's a similar problem with in-process COM servers on Windows, I hear mostly from Mark Hammond that he has agreed with Guido that the multi-interpreter API is flawed (hope that's the correct term). Unfortunately, the agreement seems to have been reached in private emails between those two only.

Thomas

From bob at redivi.com Wed Dec 8 18:19:37 2004
From: bob at redivi.com (Bob Ippolito)
Date: Wed Dec 8 18:20:15 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com>
References: <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com>
Message-ID: <562E366C-493D-11D9-8CCD-000A9567635C@redivi.com>

On Dec 8, 2004, at 10:06 AM, Phillip J. Eby wrote:

> At 09:37 AM 12/8/04 -0500, Bob Ippolito wrote:
>> On Dec 8, 2004, at 8:35, Phillip J. Eby wrote:
>>> At 11:50 PM 12/7/04 -0500, Bob Ippolito wrote:
>>>> Also, if you allow extraction of C extension modules, you'll
>>>> probably also have to allow extraction of dependent dlls and
>>>> whatnot.. which is a real mess. For dependent dynamic libraries on
>>>> Darwin, this is a *real* mess, because the runtime linker is only
>>>> affected by environment variables at process startup time. I can
>>>> only think of two solutions to this for Darwin:
>>>> (a) build an executable bundle and execve it on process startup,
>>>> drop the dependent libraries inside that executable bundle
>>>> (b) have some drop location for dependent libraries and C
>>>> extensions and rewrite the load commands in them before loading
>>>> (which may fail if there isn't enough wiggle room in the header)
>>>
>>> With regard to 'b', I'm not quite sure I understand about the
>>> rewriting load commands. Are you saying that on Darwin, you have no
>>> LD_LIBRARY_PATH? Because, wouldn't it suffice for the application
>>> to have that defined when it starts, and install the libraries on
>>> that path? What am I missing, here?
>>
>> Load commands (runtime dependencies between Mach-O files) have full
>> paths embedded in them, not just names, so that is why header
>> rewriting is useful. If these load commands start with
>> "@executable_path/", then the first place the library is looked for
>> will be relative to the executable, which makes (a) possible and is
>> what py2app already does when you start including dependencies.
>
> Okay, now I'm really confused. How the heck does Python manage to
> load dynamic stuff at all, if everything has to have absolute paths in
> them? Can you use load commands relative to the location of the
> library itself? And who designed this crazy thing? ;)

No, you can't use load commands relative to a library, only the process' main executable, or absolute paths.
I certainly didn't design it :) I think that library-relative load commands would be terribly useful, and have filed feature requests... but I'm not so sure it's going to be implemented by Mac OS X 10.4.

There are essentially three ways to reference an external symbol with dyld (assuming two-level namespaces):

(a) directly, by specifying that the symbol "foo" is going to be in the image from a particular load command; crash if the symbol or image is not found
(b) weakly, by specifying that the symbol "foo" is going to be in the image from a particular load command; set the symbol to NULL if not found
(c) indirectly, by specifying that the symbol "foo" is hopefully already defined in the process by something else; crash if not found

The best way to link Python extensions is to use (c), but that feature of dyld was not implemented until Mac OS X 10.3, and was not used by Python until 2.4. I'll probably submit a patch to make it work like this for 2.3.5 if I find the time. I'm not sure if this will help you better understand, but the dyld_find function in this Python module clones the load command resolution algorithm of dyld: http://svn.red-bean.com/bob/py2app/trunk/src/macholib/dyld.py

The main reason these absolute paths are there is that Darwin has namespaces for symbols. You can load two different versions of the same thing just fine, so long as you are only looking up symbols directly from them. There's also a feature called prebinding that depends on this: essentially, if it can confirm that the libraries have not changed, then it can do a lot of the symbol mapping stuff at "link time" to make executables start up faster. I say link time in quotes because it can be updated (if you upgrade a dependent library, for example).

>>> IOW, if you have a directory set up on LD_LIBRARY_PATH or its
>>> equivalent, can't you just dump the libraries and C extensions
>>> there?
>>
>> Darwin has a pair of environment variables that are sort-of
>> equivalent to LD_LIBRARY_PATH, however, their values are cached by
>> the runtime linker (dyld) as soon as a process starts.
>
> I meant, couldn't a given application instance just say, "okay, this
> is where I'm going to put my libraries", and have the environment
> variable set before it starts? That way, it could add new stuff to
> that directory at runtime without needing to restart.
>
> I suppose if the path is relative to some executable, then you could
> still do that at runtime.

How does an application say something about what it wants before it starts? Do we expect every application developer to write a Darwin-specific boot script? Do we force a "boot" script (think something like Twisted's "twistd") for every platform so that things like this can be accommodated?

>> How about we include a manifest file that includes filename, size,
>> and a hash of the file's contents, and has the author's public key in
>> there somewhere at the top or bottom. A second file, or a SMIME or
>> PGP style wrapper around the manifest file, will contain the hash of
>> the manifest file that is signed by the author's private key.
>
> I like this. Specifically, I like the part that it's a separate and
> optional file, so it doesn't hold up the base format definition. We
> just need to be able to define how metadata files like this get
> included in the format, so that other metadata files (like a Chandler
> Parcel schema, or a Zope ZCML file) would be includable also. Then,
> Then, the bdist_plugin command would just package up those files,
> possibly after optionally generating the signature manifest.

What about something like this:

myplugin.pyplugin
    metadata/
        MANIFEST-1.0
    share/
        mypackage.zcml
    purelib/
        mypackage/
            __init__.py
    platlib/
        os-and-python-version-specific-string/
            mypackage/
                extmodule.so
    lib/
        os-and-python-version-specific-string/
            extmoduledependency.dylib

This should more or less allow for someone to create a "fat" plugin that has platform-specific dependencies, but includes them for multiple platforms.

We should also say that the filenames in the zip file should be encoded as utf-8, so we can support unicode filenames. The zip format itself has no standard for this, and there isn't even a de facto standard.

-bob

From pje at telecommunity.com Wed Dec 8 19:03:41 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Dec 8 19:03:37 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: 
References: <5.1.1.6.0.20041208084430.03b88560@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208084430.03b88560@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20041208125207.0353b2b0@mail.telecommunity.com>

At 04:31 PM 12/8/04 +0100, Thomas Heller wrote:
>"Phillip J. Eby" writes:
> > Another possible solution is to use the Python multi-interpreter API,
> > wrapped via ctypes or Pyrex or some such, and using an interpreter per
> > plugin. Each interpreter has its own builtins, sys.modules and
> > sys.path, so each plugin sees the universe exactly as it wants to.
>
>Each time I think about this, because there's a similar problem with
>inprocess COM servers on windows, I hear mostly from Mark Hammond that
>he has agreed with Guido that the multi-interpreter api is flawed (hope
>that's the correct term). Unfortuately the agreement seems to have been
>reached in private emails between those two only.

The principal issue I'm aware of with the multi-interpreter API is that extension modules that are loaded in multiple interpreters may still end up sharing their static data. I've looked at the implementation to a moderate depth, and I haven't seen any other major issues.

Well, actually, the new "safe threading extensions" API only works with the default interpreter in a case where an extension-created thread is not already associated with a Python thread state (and therefore an interpreter). But, by definition that's all it *can* do, unless it grew an option for the extension to specify an interpreter.

There may actually be a few other caveats, but as far as I can tell, all of these caveats are specific to extension modules. As a practical matter, mod_python uses the CPython multi-interpreter API now, and Jython has its own multi-interpreter API. (Strangely, IronPython seems to be strictly single-interpreter, or at least the first release of it was; I'm not sure why.)
From pje at telecommunity.com Wed Dec 8 19:20:23 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed Dec 8 19:20:20 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: <562E366C-493D-11D9-8CCD-000A9567635C@redivi.com>
References: <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com>

At 12:19 PM 12/8/04 -0500, Bob Ippolito wrote:
>>I suppose if the path is relative to some executable, then you could
>>still do that at runtime.
>
>How does an application say something about what it wants before it
>starts? Do we expect every application developer to write a
>darwin-specific boot script? Do we force a "boot" script (think something
>like Twisted's "twistd") for every platform so that things like this can
>be accommodated?

Well, it's better than nothing. My long-term thinking is that you'd have a platform-specific way to launch an "application plugin" that represents the actual application, in the same way that on Windows you can double-click a .jar file to run a Java application. Once that bootstrap facility is available on the platform, then theoretically a given app can just be launched that way.

But I do see your general point. Clearly library deployment on Darwin is a pain in the rear. I'm glad you're following it, because it's completely over my head at this point.

>>>How about we include a manifest file that includes filename, size, and a
>>>hash of the file's contents, and has the author's public key in there
>>>somewhere at the top or bottom. A second file, or a SMIME or PGP style
>>>wrapper around the manifest file, will contain the hash of the manifest
>>>file that is signed by the author's private key.
>>
>>I like this. Specifically, I like the part that it's a separate and
>>optional file, so it doesn't hold up the base format definition. We
>>just need to be able to define how metadata files like this get included
>>in the format, so that other metadata files (like a Chandler Parcel
>>schema, or a Zope ZCML file) would be includable also. Then, the
>>bdist_plugin command would just package up those files, possibly after
>>optionally generating the signature manifest.
>
>What about something like this:
>
>myplugin.pyplugin
>    metadata/
>        MANIFEST-1.0
>    share/
>        mypackage.zcml
>    purelib/
>        mypackage/
>            __init__.py
>    platlib/
>        os-and-python-version-specific-string/
>            mypackage/
>                extmodule.so
>    lib/
>        os-and-python-version-specific-string/
>            extmoduledependency.dylib
>
>This should more or less allow for someone to create a "fat" plugin that
>has platform-specific dependencies, but includes them for multiple platforms.

I'd prefer to be able to use a plugin archive (.par, anyone?) directly with zipimport in the case that it's a pure Python archive (or if it's on some hypothetical platform that can load extensions from a zipfile). Ideally, one should also be able to unzip a .par directly into site-packages or a subdirectory thereof and use it. Thus, I'd prefer to see an internal layout that's something more like:

mypackage/
    __init__.py
    extmodule.so
    extmoduledependency.dylib
__metadata__/
    myplugin-0.1.2/
        MANIFEST-1.0
        signature.smime (???)
        configure.zcml

This layout is "safe" to unzip into a common location for multiple plugins (i.e., the metadata won't be overridden by a different package or version thereof).

I don't currently see a strong use case for "fat" plugins, as distutils cross-compilation capability is limited, which would make it really hard to build a fat plugin. And you certainly wouldn't want a fat plugin for something like wxPython or Numeric, anyway. :)

>We should also say that the filenames in the zip file should encoded as
>utf-8, so we can support unicode filenames. The zip format itself has no
>standard for this, and there isn't even a de facto standard.

Sounds good.

From bob at redivi.com Thu Dec 9 11:59:40 2004
From: bob at redivi.com (Bob Ippolito)
Date: Thu Dec 9 11:59:47 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com>
References: <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com>
Message-ID: <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com>

On Dec 8, 2004, at 13:20, Phillip J. Eby wrote:

> At 12:19 PM 12/8/04 -0500, Bob Ippolito wrote:
>>> I suppose if the path is relative to some executable, then you could
>>> still do that at runtime.
>>
>> How does an application say something about what it wants before it
>> starts? Do we expect every application developer to write a
>> darwin-specific boot script? Do we force a "boot" script (think
>> something like Twisted's "twistd") for every platform so that things
>> like this can be accommodated?
>
> Well, it's better than nothing. My long-term thinking is that you'd
> have a platform-specific way to launch an "application plugin" that
> represents the actual application, in the same way that on Windows you
> can double-click a .jar file to run a Java application. Once that
> bootstrap facility is available on the platform, then theoretically a
> given app can just be launched that way.

Makes sense.

> But I do see your general point. Clearly library deployment on Darwin
> is a pain in the rear. I'm glad you're following it, because it's
> completely over my head at this point.

It Just Works as long as you do it the way they want you to (use an application bundle or a fixed path relative to the root of the filesystem). Trying to mangle it into doing something it's not supposed to do is a pain in the rear, but that's more or less universally true.

>>>> How about we include a manifest file that includes filename, size,
>>>> and a hash of the file's contents, and has the author's public key
>>>> in there somewhere at the top or bottom. A second file, or a SMIME
>>>> or PGP style wrapper around the manifest file, will contain the
>>>> hash of the manifest file that is signed by the author's private
>>>> key.
>>>
>>> I like this. Specifically, I like the part that it's a separate and
>>> optional file, so it doesn't hold up the base format definition.
>>> We just need to be able to define how metadata files like this get
>>> included in the format, so that other metadata files (like a
>>> Chandler Parcel schema, or a Zope ZCML file) would be includable
>>> also.
>>> Then, the bdist_plugin command would just package up those
>>> files, possibly after optionally generating the signature manifest.

>> What about something like this:

>> This should more or less allow for someone to create a "fat" plugin
>> that has platform-specific dependencies, but includes them for
>> multiple platforms.

> I'd prefer to be able to use a plugin archive (.par, anyone?) directly
> with zipimport in the case that it's a pure Python archive (or if it's
> on some hypothetical platform that can load extensions from a
> zipfile). Ideally, also, one should also be able to unzip a .par
> directly into site-packages or a subdirectory thereof and use it.
> Thus, I'd prefer to see an internal layout that's something more like:

Uh, why does it matter if zipimport can do something with it if we're going to need a custom importer anyway?

By the way, .par is just about the worst possible choice for an extension, ever.
- It's already used by Perl and PHP to mean similar things
- Explicit is better than implicit (what the heck is a par?).
- We're not using DOS anymore, it's safe to use more than three letters.
- It's a common English word; good luck finding it on Google or anywhere else for that matter.

"pyplugin" has "about 14" results on Google (that displays as 4). "par" has "about 245,000,000", which looks like just an upper bound on the number of results they're willing to give you. Even "par python" has "about 589,000" results.

-bob

From theller at python.net Thu Dec 9 13:32:59 2004
From: theller at python.net (Thomas Heller)
Date: Thu Dec 9 13:32:05 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> (Bob Ippolito's message of "Thu, 9 Dec 2004 05:59:40 -0500")
References: <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com> <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com>
Message-ID: 

>> I'd prefer to be able to use a plugin archive (.par, anyone?)
>> directly with zipimport in the case that it's a pure Python archive
>> (or if it's on some hypothetical platform that can load extensions
>> from a zipfile). Ideally, also, one should also be able to unzip a
>> .par directly into site-packages or a subdirectory thereof and use
>> it. Thus, I'd prefer to see an internal layout that's something
>> more like:
>
> Uh, why does it matter if zipimport can do something with it if we're
> going to need a custom importer anyway?

If you use a custom importer, it could extract extension modules to the file system on demand (for those non-hypothetical platforms where it's required). And it may even be possible with zipimporter, if the archive has some custom extension loaders.

Thomas

From pje at telecommunity.com Thu Dec 9 16:49:14 2004
From: pje at telecommunity.com (Phillip J.
Eby) Date: Thu Dec 9 16:49:55 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> References: <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20041209103036.042c8190@mail.telecommunity.com> At 05:59 AM 12/9/04 -0500, Bob Ippolito wrote: >On Dec 8, 2004, at 13:20, Phillip J. Eby wrote: > >>At 12:19 PM 12/8/04 -0500, Bob Ippolito wrote: >>>What about something like this: > >>>This should more or less allow for someone to create a "fat" plugin that >>>has platform-specific dependencies, but includes them for multiple platforms. >> >>I'd prefer to be able to use a plugin archive (.par, anyone?) directly >>with zipimport in the case that it's a pure Python archive (or if it's on >>some hypothetical platform that can load extensions from a >>zipfile). Ideally, also, one should also be able to unzip a .par >>directly into site-packages or a subdirectory thereof and use it. >>Thus, I'd prefer to see an internal layout that's something more like: > > >Uh, why does it matter if zipimport can do something with it if we're >going to need a custom importer anyway? Well, for one thing, so we can possibly reuse some of the zipimport code. But also because it allows pure-Python packages distributed this way to be used without additional downloads or other steps, and because packages with extensions can just be unzipped in an application's directory. >By the way, .par is just about the worst possible choice for an extension, >ever. >- It's already used by Perl and PHP to mean similar things Oops. >- We're not using DOS anymore, it's safe to use more than three letters. >- It's a common english word, good luck finding it on google or anywhere >else for that matter. > "pyplugin" has "about 14" results on google (that displays as 4) Hm. I'm not sure that what Google displays *now* is going to be helpful later. I mean, if you Google "PyProtocols" today, almost half the results are Debian package mirrors. If the format catches on, you're going to have to search for something besides "pyplugin" to find anything useful. But I don't have any real objection to "pyplugin", apart from the fact the length makes me uneasy, admittedly irrationally so. Before DOS 3-character extensions, I was using TRS-DOS 3-character extensions, so I've got about 20 years of 3-characterness to overcome. :) "pyarc", btw, has "about 10" results on Google (that displays as 6), so it's about as unused as "pyplugin", and is shorter. Which is therefore less unnerving to my primitive hindbrain's fear of longer extension names. ;) OTOH, "bdist_plugin" has a much nicer ring than "bdist_pyplugin" or "bdist_pyarc", so there's lots of ways this could go. "pyzip" is also comparably "available" with respect to Google hits. From pje at telecommunity.com Thu Dec 9 16:58:53 2004 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Thu Dec 9 16:59:36 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: References: <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com> <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> Message-ID: <5.1.1.6.0.20041209104934.042cc4a0@mail.telecommunity.com> At 01:32 PM 12/9/04 +0100, Thomas Heller wrote: > >> I'd prefer to be able to use a plugin archive (.par, anyone?) > >> directly with zipimport in the case that it's a pure Python archive > >> (or if it's on some hypothetical platform that can load extensions > >> from a zipfile). Ideally, also, one should also be able to unzip a > >> .par directly into site-packages or a subdirectory thereof and use > >> it. Thus, I'd prefer to see an internal layout that's something > >> more like: > > > > > > Uh, why does it matter if zipimport can do something with it if we're > > going to need a custom importer anyway? > >If you use a custom importer, it could extract extension modules to the >file system on demand (for those non-hypotetical platforms where it's >required). And it may even be possible with zipimporter, if the archive >has some custom extension loaders. That's actually a good point; py2exe does basically that now, with a pure Python stub replacing the extension. Of course, py2exe relies on the extension and its libraries being outside the archive. Hm. Does the normal Python search look for extensions first or second? I'm just wondering if unzipping an archive with such stubs would automatically work when Python does its normal import mechanism. That is, would the extension take precedence over the stub? If so, then that approach is a real winner. bdist_plugin (or whatever it's called) could create stubs that provide extension loading support, and which would be ignored if the archive was unpacked. If need be, a support module could be included in the archive's root, so that the stubs don't have to be large. In fact, if the support module allows controlling policy for how/where to unpack extensions for dynamic loading, then an application that wants to control that stuff can just include and import the support module directly, configuring it for its needs before any plugins with extensions get loaded. For later Python versions (2.5), the support module would be part of the stdlib, and would no longer need to be distributed inside the archives produced. We'd only distribute it for plugins produced with older (<2.5) Pythons, and then only if the archive contains extensions. From theller at python.net Thu Dec 9 17:16:39 2004 From: theller at python.net (Thomas Heller) Date: Thu Dec 9 17:15:45 2004 Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps In-Reply-To: <5.1.1.6.0.20041209104934.042cc4a0@mail.telecommunity.com> (Phillip J. 
Eby's message of "Thu, 09 Dec 2004 10:58:53 -0500")
References: <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com> <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> <5.1.1.6.0.20041209104934.042cc4a0@mail.telecommunity.com>
Message-ID: 

"Phillip J. Eby" writes:

> At 01:32 PM 12/9/04 +0100, Thomas Heller wrote:
>> >> I'd prefer to be able to use a plugin archive (.par, anyone?)
>> >> directly with zipimport in the case that it's a pure Python archive
>> >> (or if it's on some hypothetical platform that can load extensions
>> >> from a zipfile). Ideally, also, one should also be able to unzip a
>> >> .par directly into site-packages or a subdirectory thereof and use
>> >> it. Thus, I'd prefer to see an internal layout that's something
>> >> more like:
>> >
>> > Uh, why does it matter if zipimport can do something with it if we're
>> > going to need a custom importer anyway?
>>
>>If you use a custom importer, it could extract extension modules to the
>>file system on demand (for those non-hypotetical platforms where it's
>>required). And it may even be possible with zipimporter, if the archive
>>has some custom extension loaders.
>
> That's actually a good point; py2exe does basically that now, with a
> pure Python stub replacing the extension.

Mark does the same in the win32 extensions; pythoncom24.dll and pywintypes24.dll are also loaded via python stubs.

> Of course, py2exe relies on the extension and its libraries being
> outside the archive.
>
> Hm. Does the normal Python search look for extensions first or
> second? I'm just wondering if unzipping an archive with such stubs
> would automatically work when Python does its normal import mechanism.
> That is, would the extension take precedence over the stub?

Yes, a .pyd is preferred over a .py available in the same place. I don't think this is by accident, and probably the order is determined by what imp.get_suffixes() returns.

> If so, then that approach is a real winner. bdist_plugin (or whatever
> it's called) could create stubs that provide extension loading
> support, and which would be ignored if the archive was unpacked. If
> need be, a support module could be included in the archive's root, so
> that the stubs don't have to be large.

This may even be an idea for single-file py2exe: Let the stub unpack the extension, maybe in a TMP directory, open the file with FILE_FLAG_DELETE_ON_CLOSE (so we don't have to clean up manually), and then let the stub load the extension.

(Off-topic: I've always been interested in the question of whether it is possible to emulate LoadLibrary with user code)

> In fact, if the support module allows controlling policy for how/where
> to unpack extensions for dynamic loading, then an application that
> wants to control that stuff can just include and import the support
> module directly, configuring it for its needs before any plugins with
> extensions get loaded.
>
> For later Python versions (2.5), the support module would be part of
> the stdlib, and would no longer need to be distributed inside the
> archives produced.
> We'd only distribute it for plugins produced with older (<2.5)
> Pythons, and then only if the archive contains extensions.

From pje at telecommunity.com Thu Dec 9 17:31:13 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Thu Dec 9 17:31:59 2004
Subject: [Distutils] Standardizing distribution of "plugins" for extensible apps
In-Reply-To: 
References: <5.1.1.6.0.20041209104934.042cc4a0@mail.telecommunity.com> <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041207223222.02b10ec0@mail.telecommunity.com> <5.1.1.6.0.20041208075853.02e9ce70@mail.telecommunity.com> <5.1.1.6.0.20041208095613.029d34f0@mail.telecommunity.com> <5.1.1.6.0.20041208130527.03540070@mail.telecommunity.com> <6C9A40D0-49D1-11D9-8733-000A95BA5446@redivi.com> <5.1.1.6.0.20041209104934.042cc4a0@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20041209112420.0300ce70@mail.telecommunity.com>

At 05:16 PM 12/9/04 +0100, Thomas Heller wrote:
>"Phillip J. Eby" writes:
> > If so, then that approach is a real winner. bdist_plugin (or whatever
> > it's called) could create stubs that provide extension loading
> > support, and which would be ignored if the archive was unpacked. If
> > need be, a support module could be included in the archive's root, so
> > that the stubs don't have to be large.
>
>This may even be an idea for single-file py2exe: Let the stub unpack the
>extension, maybe in a TMP directory, open the file with
>FILE_FLAG_DELETE_ON_CLOSE (so we don't have to clean up manually), and
>then let the stub load the extension.

Of course, this code is going to be platform-specific, by necessity. And, it should be configurable, so that an application can define a non-temporary "cache" directory for the extraction. The stub loader should check for the same file date and size as the archived extension(s), and reuse the cached copy if they match.

We also need to figure out what to do for non-extension libraries. Presumably py2exe doesn't need to do anything special because the libraries just install next to the .exe. But for .dll or non-Python .so files, they'll need to be extracted somewhere. And, the stub loader support module will also need to be able to do the header munging for Mac OS X.

Hm. Maybe we should just have platform-specific stub loader support, since we know the target platform at archive build time. So there could be darwin_stubloader, win32_stubloader, and so on. We just package the right one, and write the stubs accordingly.

But just as 'os' and 'os.path' map to e.g. posix and posixpath, we should still have a generic stubloader module API so that an application can be platform-agnostic about its configuration. That is, an app that wants to support Darwin will need to set any non-default Darwin options it cares about, but it shouldn't need to check the platform and import a different module to do that. The platform-specific stubloaders would all support the same API and just ignore configuration specific to other platforms.

From theller at python.net Wed Dec 15 21:15:37 2004
From: theller at python.net (Thomas Heller)
Date: Wed Dec 15 21:14:35 2004
Subject: [Distutils] Import extensions from zipfiles (windows only)
Message-ID: <7jnjz5au.fsf@python.net>

I have a first working version of an importer which can import extension modules from zipfiles, avoiding unpacking them to the file system. License is still LGPL, unfortunately.
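[Editor's note: a sketch of the pure-Python "stub" idea from the preceding thread, under the stated assumptions. Every name here (the _load_extension_from_zip helper, the plugin-cache directory, mymodule.pyd) is hypothetical, and a real implementation would need the platform-specific cache and cleanup policies discussed above, including the date/size check Phillip mentions.]

    # mymodule.py -- a stub shipped in the archive in place of
    # mymodule.pyd; ignored if an unpacked .pyd is found first, since
    # extensions take precedence over same-named .py files on the path.
    import imp
    import os
    import tempfile
    import zipfile

    def _load_extension_from_zip(name, archive_path, member):
        # Extract the archived extension into a cache directory, then
        # load it as the dynamic module `name`.
        cache = os.path.join(tempfile.gettempdir(), "plugin-cache")
        if not os.path.isdir(cache):
            os.makedirs(cache)
        target = os.path.join(cache, os.path.basename(member))
        if not os.path.exists(target):
            zf = zipfile.ZipFile(archive_path)
            data = zf.read(member)
            zf.close()
            f = open(target, "wb")
            f.write(data)
            f.close()
        return imp.load_dynamic(name, target)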
Subscribers to the py2exe-users list already know that it uses this code which simulates the windows LoadLibrary call: http://www.joachim-bauch.de/tutorials/load_dll_memory.html It works in simple cases, the only ones that I have tested so far. Shall I publish it for experimentation? Thomas From pje at telecommunity.com Thu Dec 16 08:06:06 2004 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu Dec 16 07:58:02 2004 Subject: [Distutils] Import extensions from zipfiles (windows only) In-Reply-To: <7jnjz5au.fsf@python.net> Message-ID: <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> At 09:15 PM 12/15/2004 +0100, Thomas Heller wrote: >I have a first working version of an importer which can import extension >modules from zipfiles, avoiding to unpack them to the file system. >License is still LGPL, unfortunately. > >Subscribers to the py2exe-users list already know that it uses this code >which simulates the windows LoadLibrary call: > > http://www.joachim-bauch.de/tutorials/load_dll_memory.html > >It works in simple cases, the only ones that I have tested so far. > >Shall I publish it for experimentation? Doesn't this technique require an extension module, which would mean in turn that we can't bootstrap it from the zipfile? That is, it would have to be included in Python, which the LGPL would rule out anyway. Still, it sounds most interesting. I hope in another week or two to have some time to hammer out a prototype for a self-extract API and a setuptools extension to build a basic archive format. I'm thinking that rather than allowing metadata to be a holdup, I'd like to get a base implementation we can experiment with. In that regard, your technique sounds useful too, but I'm kind of wary about the licensing issue. >Thomas > >_______________________________________________ >Distutils-SIG maillist - Distutils-SIG@python.org >http://mail.python.org/mailman/listinfo/distutils-sig From mal at egenix.com Thu Dec 16 10:14:22 2004 From: mal at egenix.com (M.-A. Lemburg) Date: Thu Dec 16 10:14:26 2004 Subject: [Distutils] Import extensions from zipfiles (windows only) In-Reply-To: <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> References: <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> Message-ID: <41C151EE.9070001@egenix.com> Phillip J. Eby wrote: > At 09:15 PM 12/15/2004 +0100, Thomas Heller wrote: > >> I have a first working version of an importer which can import extension >> modules from zipfiles, avoiding to unpack them to the file system. >> License is still LGPL, unfortunately. >> >> Subscribers to the py2exe-users list already know that it uses this code >> which simulates the windows LoadLibrary call: >> >> http://www.joachim-bauch.de/tutorials/load_dll_memory.html >> >> It works in simple cases, the only ones that I have tested so far. >> >> Shall I publish it for experimentation? > > > Doesn't this technique require an extension module, which would mean in > turn that we can't bootstrap it from the zipfile? That is, it would > have to be included in Python, which the LGPL would rule out anyway. > > Still, it sounds most interesting. I hope in another week or two to > have some time to hammer out a prototype for a self-extract API and a > setuptools extension to build a basic archive format. I'm thinking that > rather than allowing metadata to be a holdup, I'd like to get a base > implementation we can experiment with. In that regard, your technique > sounds useful too, but I'm kind of wary about the licensing issue. 
I wonder why you put so much effort into avoiding the unzip of the file ? What's so bad about it ? In the end, the user will want "plugins" to be easily installable, e.g. have the application install them for him. For that to work, the most important part is a download manager. The rest (unzip into the plugin directory) can easily be done using standard distutils tools. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Dec 15 2004) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From pje at telecommunity.com Thu Dec 16 16:21:29 2004 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu Dec 16 16:13:21 2004 Subject: [Distutils] Import extensions from zipfiles (windows only) In-Reply-To: <41C151EE.9070001@egenix.com> References: <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20041216100411.00a2b010@mail.telecommunity.com> At 10:14 AM 12/16/2004 +0100, M.-A. Lemburg wrote: >I wonder why you put so much effort into avoiding the unzip >of the file ? What's so bad about it ? I mostly don't want it to be a manual step. I think Thomas would also like to have the option of py2exe generating single-file "installation-free" executables, even if an application includes extensions. (That's not a plugins issue, of course.) >In the end, the user will want "plugins" to be easily installable, >e.g. have the application install them for him. For that to >work, the most important part is a download manager. The rest >(unzip into the plugin directory) can easily be done using >standard distutils tools. Well, as long as they're either unzipped into different directories, or there is only one version of each plugin needed. But, it'd also be nice to be able to just put zipfiles on the Python path and share them between applications for some plugins. One big advantage to this is that platform-specific packagers (RPM et al) could be made to simply dump the zipfiles in an all-purpose location like /lib/pythonXX/site-packages, and not have to worry about version conflicts. A script or application could call a utility routine to add the needed items to sys.path at runtime. Something like 'require("Twisted>=1.1.0")' to search the existing sys.path for an appropriately-named directory or zipfile, and push it on the front of sys.path. Of course, with the proper metadata in the zipfiles (or their unpacked versions), the routine could also search for the necessary dependencies, and check for version conflicts. From mal at egenix.com Thu Dec 16 16:42:59 2004 From: mal at egenix.com (M.-A. Lemburg) Date: Thu Dec 16 16:43:06 2004 Subject: [Distutils] Import extensions from zipfiles (windows only) In-Reply-To: <5.1.1.6.0.20041216100411.00a2b010@mail.telecommunity.com> References: <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> <5.1.1.6.0.20041216100411.00a2b010@mail.telecommunity.com> Message-ID: <41C1AD03.3000400@egenix.com> Phillip J. Eby wrote: > At 10:14 AM 12/16/2004 +0100, M.-A. Lemburg wrote: > >> I wonder why you put so much effort into avoiding the unzip >> of the file ? What's so bad about it ? 
>
> I mostly don't want it to be a manual step.

Isn't that an application feature rather than a distutils one ? I mean distutils can help the application by packing everything up nicely in a ZIP file, but the application has to take care of getting the ZIP file and installing it - like most over-the-web installers do nowadays.

> I think Thomas would also
> like to have the option of py2exe generating single-file
> "installation-free" executables, even if an application includes
> extensions. (That's not a plugins issue, of course.)

That's a valid case. However, I'd consider that a cosmetic thing: the days of one .exe does it all, drop-into c:\windows are long over :-/

>> In the end, the user will want "plugins" to be easily installable,
>> e.g. have the application install them for him. For that to
>> work, the most important part is a download manager. The rest
>> (unzip into the plugin directory) can easily be done using
>> standard distutils tools.
>
> Well, as long as they're either unzipped into different directories, or
> there is only one version of each plugin needed. But, it'd also be nice
> to be able to just put zipfiles on the Python path and share them
> between applications for some plugins.

Sharing plugins is a different concept than managing multiple plugins for a single application. It's a matter of who owns the plugin installation location.

If all applications come from the same vendor, then the usual approach is to have a vendor-specific base dir and then a shared/ directory for shared resources.

If the applications come from different vendors, you'd have to define a standard location for the applications to look for plugins. The standard then owns the location. Most likely the installation will have to be done by an administrator, unless you want to run into permission problems.

> One big advantage to this is that platform-specific packagers (RPM et
> al) could be made to simply dump the zipfiles in an all-purpose location
> like /lib/pythonXX/site-packages, and not have to worry about version
> conflicts. A script or application could call a utility routine to add
> the needed items to sys.path at runtime. Something like
> 'require("Twisted>=1.1.0")' to search the existing sys.path for an
> appropriately-named directory or zipfile, and push it on the front of
> sys.path.

Right, but that's all possible today: simply create directories under site-packages/ that are not Python package directories and contain the version number, e.g.

site-packages/
    egenix-mx-base-2.0.90/
        mx/
            __init__.py
    egenix-mx-base-2.1.0/
        mx/
            __init__.py

The rest of the logic can then be done in Python and placed into a helper module:

import syspathtools
syspathtools.use('egenix-mx-base >= 2.1')
from mx import DateTime

> Of course, with the proper metadata in the zipfiles (or their unpacked
> versions), the routine could also search for the necessary dependencies,
> and check for version conflicts.

You'd probably want to place the meta data into the version directories as a PKG-INFO file. However, this is only necessary if you plan to do the dependency checking *before* actually using the plugin. In the normal situation, you'd just do dynamic checking and then report the problem as a run-time error.

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Dec 15 2004)
>>> Python/Zope Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...
http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From theller at python.net Thu Dec 16 20:54:00 2004
From: theller at python.net (Thomas Heller)
Date: Thu Dec 16 20:53:32 2004
Subject: [Distutils] Re: Import extensions from zipfiles (windows only)
References: <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com> <5.1.1.6.0.20041216100411.00a2b010@mail.telecommunity.com> <41C1AD03.3000400@egenix.com>
Message-ID: 

(If someone feels the cross-posting is no longer appropriate, feel free to snip the recipients list)

"M.-A. Lemburg" writes:

> Phillip J. Eby wrote:
>> I think Thomas would also like to have the option of py2exe
>> generating single-file "installation-free" executables, even if an
>> application includes extensions.

Yes.

>> (That's not a plugins issue, of course.)

Sure.

> That's a valid case. However, I'd consider that a cosmetic
> thing: the days of one .exe does it all, drop-into c:\windows
> are long over :-/

Single-file executables are not only a cosmetic thing: chances are that an application breaks because one of the files is damaged, missing, overwritten or so. And they are damned simple to handle - one does not always want to create an installer.

But the real reason (for me, anyway) is having more than one DLL Python COM server in one process, which may even use Python itself - but this has been discussed before, and Phillip even has his own ideas on this topic.

Back to plugins: wouldn't it be nice if I could simply copy the file PyOpenGL.2.5.99.plugin into sys/site-packages and then use it?

(Crazy idea, probably, but with the zip extension importer it may even be possible to import the bdist_wininst created installer exe files directly)

Thomas

From theller at python.net Thu Dec 16 20:58:41 2004
From: theller at python.net (Thomas Heller)
Date: Thu Dec 16 21:06:06 2004
Subject: [Distutils] Import extensions from zipfiles (windows only)
References: <7jnjz5au.fsf@python.net> <5.1.1.6.0.20041216020001.00a29560@mail.telecommunity.com>
Message-ID: <6532vwum.fsf@python.net>

"Phillip J. Eby" writes:

> At 09:15 PM 12/15/2004 +0100, Thomas Heller wrote:
>>I have a first working version of an importer which can import extension
>>modules from zipfiles, avoiding to unpack them to the file system.
>>License is still LGPL, unfortunately.
>>
>>Subscribers to the py2exe-users list already know that it uses this code
>>which simulates the windows LoadLibrary call:
>>
>> http://www.joachim-bauch.de/tutorials/load_dll_memory.html
>>
>>It works in simple cases, the only ones that I have tested so far.
>>
>>Shall I publish it for experimentation?
>
> Doesn't this technique require an extension module, which would mean
> in turn that we can't bootstrap it from the zipfile? That is, it
> would have to be included in Python, which the LGPL would rule out
> anyway.
>
> Still, it sounds most interesting.
In that regard, your technique sounds useful too, > but I'm kind of wary about the licensing issue. From bills2018 at hotmail.com Mon Dec 20 14:06:04 2004 From: bills2018 at hotmail.com (bill) Date: Mon Dec 20 14:03:35 2004 Subject: [Distutils] The limitation of the Photon Hypothesis Message-ID: <20041220130334.794951E4013@bag.python.org> The limitation of the Photon Hypothesis According to the electromagnetic theory of light, its energy is related to the amplitude of the electric field of the electromagnetic wave, W=eE^2V(where E is the amplitude and V is the volume). It apparently has nothing to do with the light's frequency f. To explain the photoelectric effect, Einstein put forward the photon hypothesis. His paper hypothesized light was made of quantum packets of energy called photons. Each photon carried a specific energy related to its frequency f, W=hf. This has nothing to do with the amplitude of the electromagnetic wave E. For the electromagnetic wave that the amplitude E has nothing to do with the light's frequency f, if the light's frequency f is high enough, the energy of the photon in light is greater than the light's energy, hf>eE^2V. Apparently, this is incompatible with the electromagnetic theory of light. THE UNCERTAINTY PRINCIPLE IS UNTENABLE By re-analysing Heisenberg's Gamma-Ray Microscope experiment and one of the thought experiment from which the uncertainty principle is demonstrated, it is actually found that the uncertainty principle cannot be demonstrated by them. It is therefore found to be untenable. Key words: uncertainty principle; Heisenberg's Gamma-Ray Microscope Experiment; thought experiment The History Of The Uncertainty Principle If one wants to be clear about what is meant by "position of an object," for example of an electron., then one has to specify definite experiments by which the "position of an electron" can be measured; otherwise this term has no meaning at all. --Heisenberg, in uncertainty paper, 1927 Are the uncertainty relations that Heisenberg discovered in 1927 just the result of the equations used, or are they really built into every measurement? Heisenberg turned to a thought experiment, since he believed that all concepts in science require a definition based on actual, or possible, experimental observations. Heisenberg pictured a microscope that obtains very high resolution by using high-energy gamma rays for illumination. No such microscope exists at present, but it could be constructed in principle. Heisenberg imagined using this microscope to see an electron and to measure its position. He found that the electron's position and momentum did indeed obey the uncertainty relation he had derived mathematically. Bohr pointed out some flaws in the experiment, but once these were corrected the demonstration was fully convincing. Thought Experiment 1 The corrected version of the thought experiment Heisenberg's Gamma-Ray Microscope Experiment A free electron sits directly beneath the center of the microscope's lens (please see AIP page http://www.aip.org/history/heisenberg/p08b.htm or diagram below) . The circular lens forms a cone of angle 2A from the electron. The electron is then illuminated from the left by gamma rays--high-energy light which has the shortest wavelength. 
These yield the highest resolution, for according to a principle of wave optics, the microscope can resolve (that is, "see" or distinguish) objects to a size of dx, which is related to and to the wavelength L of the gamma ray, by the expression: dx = L/(2sinA) (1) However, in quantum mechanics, where a light wave can act like a particle, a gamma ray striking an electron gives it a kick. At the moment the light is diffracted by the electron into the microscope lens, the electron is thrust to the right. To be observed by the microscope, the gamma ray must be scattered into any angle within the cone of angle 2A. In quantum mechanics, the gamma ray carries momentum as if it were a particle. The total momentum p is related to the wavelength by the formula, p = h / L, where h is Planck's constant. (2) In the extreme case of diffraction of the gamma ray to the right edge of the lens, the total momentum would be the sum of the electron's momentum P'x in the x direction and the gamma ray's momentum in the x direction: P' x + (h sinA) / L', where L' is the wavelength of the deflected gamma ray. In the other extreme, the observed gamma ray recoils backward, just hitting the left edge of the lens. In this case, the total momentum in the X direction is: P''x - (h sinA) / L''. The final x momentum in each case must equal the initial X momentum, since momentum is conserved. Therefore, the final X moment are equal to each other: P'x + (h sinA) / L' = P''x - (h sinA) / L'' (3) If A is small, then the wavelengths are approximately the same, L' ~ L" ~ L. So we have P''x - P'x = dPx ~ 2h sinA / L (4) Since dx = L/(2 sinA), we obtain a reciprocal relationship between the minimum uncertainty in the measured position, dx, of the electron along the X axis and the uncertainty in its momentum, dPx, in the x direction: dPx ~ h / dx or dPx dx ~ h. (5) For more than minimum uncertainty, the "greater than" sign may added. Except for the factor of 4pi and an equal sign, this is Heisenberg's uncertainty relation for the simultaneous measurement of the position and momentum of an object. Re-analysis The original analysis of Heisenberg's Gamma-Ray Microscope Experiment overlooked that the microscope cannot see the object whose size is smaller than its resolving limit, dx, thereby overlooking that the electron which relates to dx and dPx respectively is not the same. According to the truth that the microscope can not see the object whose size is smaller than its resolving limit, dx, we can obtain that what we can see is the electron where the size is larger than or equal to the resolving limit dx and has a certain position, dx = 0. The microscope can resolve (that is, "see" or distinguish) objects to a size of dx, which is related to and to the wavelength L of the gamma ray, by the expression: dx = L/(2sinA) (1) This is the resolving limit of the microscope and it is the uncertain quantity of the object's position. The microscope cannot see the object whose size is smaller than its resolving limit, dx. Therefore, to be seen by the microscope, the size of the electron must be larger than or equal to the resolving limit. But if the size of the electron is larger than or equal to the resolving limit dx, the electron will not be in the range dx. Therefore, dx cannot be deemed to be the uncertain quantity of the electron's position which can be seen by the microscope, but deemed to be the uncertain quantity of the electron's position which can not be seen by the microscope. 
To repeat, dx is uncertainty in the electron's position which cannot be seen by the microscope. To be seen by the microscope, the gamma ray must be scattered into any angle within the cone of angle 2A, so we can measure the momentum of the electron. But if the size of the electron is smaller than the resolving limit dx, the electron cannot be seen by the microscope, we cannot measure the momentum of the electron. Only the size of the electron is larger than or equal to the resolving limit dx, the electron can be seen by the microscope, we can measure the momentum of the electron. According to Heisenberg's Gamma-Ray Microscope Experiment, the electron¡¯s momentum is uncertain, the uncertainty in its momentum is dPx. dPx is the uncertainty in the electron's momentum which can be seen by microscope. What relates to dx is the electron where the size is smaller than the resolving limit. When the electron is in the range dx, it cannot be seen by the microscope, so its position is uncertain, and its momentum is not measurable, because to be seen by the microscope, the gamma ray must be scattered into any angle within the cone of angle 2A, so we can measure the momentum of the electron. If the electron cannot be seen by the microscope, we cannot measure the momentum of the electron. What relates to dPx is the electron where the size is larger than or equal to the resolving limit dx .The electron is not in the range dx, so it can be seen by the microscope and its position is certain, its momentum is measurable. Apparently, the electron which relates to dx and dPx respectively is not the same. What we can see is the electron where the size is larger than or equal to the resolving limit dx and has a certain position, dx = 0. Quantum mechanics does not rely on the size of the object, but on Heisenberg's Gamma-Ray Microscope experiment. The use of the microscope must relate to the size of the object. The size of the object which can be seen by the microscope must be larger than or equal to the resolving limit dx of the microscope, thus the uncertain quantity of the electron's position does not exist. The gamma ray which is diffracted by the electron can be scattered into any angle within the cone of angle 2A, where we can measure the momentum of the electron. What we can see is the electron which has a certain position, dx = 0, so that in no other position can we measure the momentum of the electron. In Quantum mechanics, the momentum of the electron can be measured accurately when we measure the momentum of the electron only, therefore, we have gained dPx = 0. And, dPx dx =0. (6) Thought Experiment 2 Single Slit Diffraction Experiment Suppose a particle moves in the Y direction originally and then passes a slit with width dx(Please see diagram below) . The uncertain quantity of the particle's position in the X direction is dx, and interference occurs at the back slit . According to Wave Optics , the angle where No.1 min of interference pattern can be calculated by following formula: sinA=L/2dx (1) and L=h/p where h is Planck's constant. (2) So the uncertainty principle can be obtained dPx dx ~ h (5) Re-analysis The original analysis of Single Slit Diffraction Experiment overlooked the corpuscular property of the particle and the Energy-Momentum conservation laws and mistook the uncertain quantity of the particle's position in the X direction is the slit's width dx. 
According to Newton first law , if an external force in the X direction does not affect the particle, it will move in a uniform straight line, ( Motion State or Static State) , and the motion in the Y direction is unchanged .Therefore , we can learn its position in the slit from its starting point. The particle can have a certain position in the slit and the uncertain quantity of the position is dx =0. According to Newton first law , if the external force at the X direction does not affect particle, and the original motion in the Y direction is not changed , the momentum of the particle in the X direction will be Px=0 and the uncertain quantity of the momentum will be dPx =0. This gives: dPx dx =0. (6) No experiment negates NEWTON FIRST LAW. Whether in quantum mechanics or classical mechanics, it applies to the microcosmic world and is of the form of the Energy-Momentum conservation laws. If an external force does not affect the particle and it does not remain static or in uniform motion, it has disobeyed the Energy-Momentum conservation laws. Under the above thought experiment , it is considered that the width of the slit is the uncertain quantity of the particle's position. But there is certainly no reason for us to consider that the particle in the above experiment has an uncertain position, and no reason for us to consider that the slit's width is the uncertain quantity of the particle. Therefore, the uncertainty principle, dPx dx ~ h (5) which is demonstrated by the above experiment is unreasonable. Conclusion Every physical principle is based on the Experiments, not based on MATHEMATICS, including heisenberg uncertainty principle. Einstein said, One Experiment is enough to negate a physical principle. >From the above re-analysis , it is realized that the thought experiment demonstration for the uncertainty principle is untenable. Therefore, the uncertainty principle is untenable. Reference: 1. Max Jammer. (1974) The philosophy of quantum mechanics (John wiley & sons , Inc New York ) Page 65 2. Ibid, Page 67 3. http://www.aip.org/history/heisenberg/p08b.htm Single Particles Do Not Exhibit Wave-Like Behavior Through a qualitative analysis of the experiment, it is shown that the presumed wave-like behavior of a single particle contradicts the energy-momentum conservation laws and may be expained solely through particle interactions. DUAL SLIT INTERFERENCE EXPERIMENT PART I If a single particle has wave-like behavior, it will create an interference image when it has passed through a single slit. But the experimental result shows that this is not the case Only a large number of particles can create an interference image when they pass through the two slits. PART II In the dual slit interference experiment, the single particle is thought to pass through both slits and interfere with itself at the same time due to its wave-like behavior. The motion of the wave is the same direction as the particle. If the particle passes through a single slit only, it can not be assumed that it has wave-like behavior. If it passes through two slits, it, and also the acompanying wave must be assumed to have motion in two directions. But a wave only has one direction of motion. PART III If one slit is obstructed in the dual slit interference experiment and a particle is launched in this direction, then according to Newton¡¯s first law, (assuming no external forces,) it will travel in a uniform straight line. It will not pass through the closed slit and will not make contact with the screen. 
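[Editor's note: based on the wiki page linked above, wiring the experimental importer into a program would presumably look something like the sketch below. The module and function names are taken from that announcement and may have changed since; lib.zip and some_extension are hypothetical.]

    import sys
    import zipextimporter  # the experimental importer announced above

    zipextimporter.install()       # hook the importer into Python's import machinery
    sys.path.insert(0, "lib.zip")  # a zipfile that contains .pyd extension modules

    import some_extension          # hypothetical .pyd inside lib.zip, loaded
                                   # directly from the archive, no unpacking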
From pje at telecommunity.com Tue Dec 21 08:06:40 2004
From: pje at telecommunity.com (Phillip J. Eby)
Date: Tue Dec 21 08:04:49 2004
Subject: [Distutils] How to define a platform?
Message-ID: <5.1.1.6.0.20041221013053.02e3b670@mail.telecommunity.com>

One of the trickier things about defining our "plugin" format is the definition of a platform. That is, how can we tell from metadata whether a given plugin is executable on the current platform?

If we use 'distutils.util.get_platform()' and do simple string comparison, I believe this will result in both false positives and false negatives, due to e.g. the absence of processor info for Windows platforms, and the presence of processor info or OS version info for other platforms. For example, a 'linux-i386' module might work with a 'linux-i686' platform, yet not be accepted by a simple string comparison.

Of course, if that metadata is encoded only in the plugin filename, then it's relatively simple to create copies with different filenames, one per applicable platform string. Or, perhaps there could be some type of configuration file that simply lists what platform strings (besides the one the machine itself generates) are acceptable on the machine.

But, false positives seem harder to fix. For example, "win32" is used for both 32-bit and 64-bit Windows targets. I'm unsure whether any of the other targets have similar issues. Does anybody have any ideas?
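[Editor's note: a hedged sketch of the kind of fuzzy platform comparison the question above asks for; this is not an established API, and the alias handling for the i386..i686 processor family is illustrative only.]

    from distutils.util import get_platform

    _X86_FAMILY = ('i386', 'i486', 'i586', 'i686')

    def plugin_platform_ok(plugin_plat, host_plat=None):
        # Accept exact matches first, then retry with the x86 CPU suffix
        # normalized away (e.g. 'linux-i386' vs. 'linux-i686').
        if host_plat is None:
            host_plat = get_platform()
        if plugin_plat == host_plat:
            return True
        def normalize(plat):
            parts = plat.split('-')
            if parts and parts[-1] in _X86_FAMILY:
                parts[-1] = 'x86'
            return '-'.join(parts)
        return normalize(plugin_plat) == normalize(host_plat)

    print(plugin_platform_ok('linux-i386', 'linux-i686'))  # True
    print(plugin_platform_ok('linux-i686', 'win32'))       # False

Note that this does nothing for the "win32 on 64-bit Windows" false positive mentioned above; that would need information the platform string simply does not carry.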
From sympa at ens.fr Wed Dec 29 09:02:22 2004
From: sympa at ens.fr (SYMPA)
Date: Wed Dec 29 09:02:23 2004
Subject: [Distutils] Results of your commands
Message-ID: <200412290802.iBT82MvR079550@nef2.ens.fr>

> Delivery Failure - Invalid mail specification

Command not understood: ignoring end of message.
No command found in message

From bob at redivi.com Fri Dec 31 05:03:16 2004
From: bob at redivi.com (Bob Ippolito)
Date: Fri Dec 31 05:03:32 2004
Subject: [Distutils] ANN: py2app 0.1.7
Message-ID:

This announcement is available in HTML at:
http://bob.pythonmac.org/archives/2004/12/30/ann-py2app-017/

`py2app`_ is the bundlebuilder replacement we've all been waiting for. It is implemented as a distutils command, similar to `py2exe`_, that builds Mac OS X applications from Python scripts, extensions, and related data files. It tries very hard to include all dependencies it can find so that your application can be distributed standalone, as Mac OS X applications should be.

`py2app`_ 0.1.7 is included in the installer for `PyObjC`_ 1.2. If you have installed `PyObjC`_ 1.2, then you already have `py2app`_ 0.1.7 installed.

Download and related links are here:
http://undefined.org/python/#py2app

`py2app`_ 0.1.7 is a bug fix release:

* The ``bdist_mpkg`` script will now set up sys.path properly, for setup scripts that require local imports.
* ``bdist_mpkg`` will now correctly accept ``ReadMe``, ``License``, ``Welcome``, and ``background`` files by parameter.
* ``bdist_mpkg`` can now display a custom background again (0.1.6 broke this).
* ``bdist_mpkg`` now accepts a ``build-base=`` argument, to put build files in an alternate location.
* ``py2app`` will now accept main scripts with a ``.pyw`` extension.
* ``py2app``'s not_stdlib_filter will now ignore a ``site-python`` directory as well as ``site-packages``.
* ``py2app``'s plugin bundle template no longer displays GUI dialogs by default, but still links to ``AppKit``.
* ``py2app`` now ensures that the directory of the main script is added to ``sys.path`` when scanning modules.
* The ``py2app`` build command has been refactored so that it is easier to change its behavior by subclassing.
* ``py2app`` alias bundles can now cope with editors that do atomic saves (write new file, swap names with existing file).
* ``macholib`` now has minimal support for fat binaries. It still assumes big endian and will not make any changes to a little endian header.
* Added a warning message when using the ``install`` command rather than installing from a package.
* New ``simple/structured`` example that shows how you could package an application that is organized into several folders.
* New ``PyObjC/pbplugin`` Xcode Plug-In example.

Since I have been slacking and the last announcement was for 0.1.4, here are the changes for the soft-launched releases 0.1.5 and 0.1.6:

`py2app`_ 0.1.6 was a major feature enhancement release:

* ``py2applet`` and ``bdist_mpkg`` scripts have been moved to Python modules so that the functionality can be shared with the tools.
* Generic graph-related functionality from ``py2app`` was moved to ``altgraph.ObjectGraph`` and ``altgraph.GraphUtil``.
* ``bdist_mpkg`` now outputs more specific plist requirements (for future compatibility).
* ``py2app`` can now create plugin bundles (MH_BUNDLE) as well as executables.
* New recipe for supporting extensions built with `sip`_, such as `PyQt`_. Note that due to the way that `sip`_ works, when one sip-based extension is used, *all* sip-based extensions are included in your application.
  In practice, this means anything provided by `Riverbank`_; I don't think anyone else uses `sip`_ (publicly).
* New recipe for `PyOpenGL`_. This is very naive and simply includes the whole thing, rather than trying to monkeypatch their brain-dead version acquisition routine in ``__init__``.
* Bootstrap now sets ``ARGVZERO`` and ``EXECUTABLEPATH`` environment variables, corresponding to the ``argv[0]`` and the ``_NSGetExecutablePath(...)`` that the bundle saw. This is only really useful if you need to relaunch your own application.
* More correct ``dyld`` search behavior.
* Refactored ``macholib`` to use ``altgraph``; it can now generate `GraphViz`_ graphs, and more complex analysis of dependencies can be done.
* ``macholib`` was refactored to be easier to maintain, and the structure handling has been optimized a bit.
* The few tests that there are were refactored in `py.test`_ style.
* New `PyQt`_ example.
* New `PyOpenGL`_ example.

`py2app`_ 0.1.5 was a major feature enhancement release:

* Added a ``bdist_mpkg`` distutils extension, for creating an Installer metapackage from any distutils script.

  - Includes the PackageInstaller tool
  - bdist_mpkg script
  - setup.py enhancements to support bdist_mpkg functionality

* Added a ``PackageInstaller`` tool, a droplet that performs the same function as the ``bdist_mpkg`` script.
* Created a custom ``bdist_mpkg`` subclass for `py2app`_'s setup script.
* Source package now includes `PJE`_'s `setuptools`_ extension to distutils.
* Added lots of metadata to the setup script.
* ``py2app.modulegraph`` is now a top-level package, ``modulegraph``.
* ``py2app.find_modules`` is now ``modulegraph.find_modules``.
* Should now correctly handle paths (and application names) with unicode characters in them.
* New ``--strip`` option for the ``py2app`` build command; strips all Mach-O files in the output application bundle.
* New ``--bdist-base=`` option for the ``py2app`` build command; allows an alternate build directory to be specified.
* New `docutils`_ recipe.
* Support for non-framework Python, such as the one provided by `DarwinPorts`_.

.. _`py.test`: http://codespeak.net/py/current/doc/test.html
.. _`GraphViz`: http://www.pixelglow.com/graphviz/
.. _`PyOpenGL`: http://pyopengl.sf.net/
.. _`Riverbank`: http://www.riverbankcomputing.co.uk/
.. _`sip`: http://www.riverbankcomputing.co.uk/sip/index.php
.. _`PyQt`: http://www.riverbankcomputing.co.uk/pyqt/index.php
.. _`DarwinPorts`: http://darwinports.opendarwin.org/
.. _`docutils`: http://docutils.sf.net/
.. _`setuptools`: http://cvs.eby-sarna.com/PEAK/setuptools/
.. _`PJE`: http://dirtSimple.org/
.. _`py2app`: http://undefined.org/python/#py2app
.. _`py2exe`: http://starship.python.net/crew/theller/py2exe/
.. _`PyObjC`: http://pyobjc.sf.net/
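For readers who have not used `py2app`_: since it is a distutils command, the build script stays small. The sketch below assumes the 0.1.x-era convention from py2app's own examples, where importing the package makes the command available; ``Main.py`` is a placeholder name for your application's main script::

    # setup.py -- minimal py2app build script (a sketch; names are placeholders)
    from distutils.core import setup

    import py2app  # importing py2app registers the 'py2app' distutils command

    setup(
        app=["Main.py"],  # hypothetical main script of the application
    )

Running ``python setup.py py2app`` would then produce a standalone application bundle under ``dist/``; the ``--strip`` option listed in the 0.1.5 notes above can be appended to strip the bundle's Mach-O files.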