[C++-sig] Re: Interest in luabind

David Abrahams dave at boost-consulting.com
Sun Jun 22 22:11:48 CEST 2003


"Daniel Wallin" <dalwan01 at student.umu.se> writes:

>> --- Daniel Wallin <dalwan01 at student.umu.se> wrote:
>>
>> > Right. We didn't really intend for luabind to be used in this
>> > way, but rather for binding closed modules. It seems to me like
>> > this can't be a very common thing to do though, at least not
>> > with Lua. I have very little insight into how Python is used.
>>
>> Boost.Python's "cross-module" feature is absolutely essential for
>> us.

I want to lean a little bit in luabind's direction here.  One thing
we've been discussing on-and-off is how we can provide some "scoping"
for conversions (especially to-python conversions, of which you get
only one per type), to prevent different modules from colliding in
unpleasant ways.
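
For concreteness, here is a rough sketch of where a collision would
come from; the vec3 type, the converter, and the module name are all
invented for illustration:

    #include <boost/python.hpp>

    // Hypothetical value type; stands in for whatever a module wraps.
    struct vec3 { double x, y, z; };

    // A custom to-python converter: presents a vec3 as a Python tuple.
    struct vec3_to_tuple
    {
        static PyObject* convert(vec3 const& v)
        {
            return boost::python::incref(
                boost::python::make_tuple(v.x, v.y, v.z).ptr());
        }
    };

    BOOST_PYTHON_MODULE(geometry)
    {
        // This registration lands in the single process-wide registry;
        // a second module registering a different to-python converter
        // for vec3 would step on it.
        boost::python::to_python_converter<vec3, vec3_to_tuple>();
    }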

While sharing conversions and types across modules is important for
some applications, it's clear that in many situations it's
undesirable.  For example, two independent modules may be compiled
with different compilers, or different alignment options.  You just
don't want those stepping on each other's toes.  Furthermore, on many
systems, when two extension modules link to the same shared library,
their link symbol spaces are automatically shared, so the symbol
insulation one normally gets by being in a separate shared object
accessed via dlopen is lost.

It seems to me that for groups interested in sharing conversions, it
might be reasonable to have them build a shared Boost.Python library
for their project and have every module in the project link to it.
That would provide some degree of isolation.

Is it important for an extension module author to be able to work
with types from two packages that have each been wrapped in that way?
That would imply linking to both of their BPL libraries, which is
impossible unless we find a way to import the converters from each
without actually using them.

I am envisioning a flexible system with at least one dynamic and
probably two static library configurations that can be combined to
achieve the desired sharing/isolation.

>> To summarize my practical experience: maybe (?) static dispatch is
>> more efficient if most of your loops are in the interpreted layer,
>> but it is vastly more efficient if you push the rate-limiting
>> loops down into the compiled layer. This requires wrapping arrays
>> of user-defined types, which is much more easily handled in a
>> system based on dynamic dispatch. So overall, dynamic dispatch
>> wins out by a large margin.
>
> I mostly agree with everything you say. However, it may
> still be of interest to be able to bypass the dynamic
> dispatch system and use converters with static dispatch. I
> fail to see how wrapping arrays of user-defined types is
> easier with dynamic dispatch though.

Me too.  Comments, Ralf?
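
To make Ralf's array point concrete, here is roughly what pushing a
rate-limiting loop into the compiled layer looks like; the sample
type, the module name, and the total() function are all invented for
illustration:

    #include <boost/python.hpp>
    #include <cstddef>
    #include <vector>

    // Hypothetical user-defined type; stands in for whatever the
    // application really wraps.
    struct sample { double value; };

    // The rate-limiting loop lives in compiled code: one call from
    // Python sums the whole array instead of looping per element in
    // the interpreter.
    double total(std::vector<sample> const& xs)
    {
        double sum = 0;
        for (std::size_t i = 0; i < xs.size(); ++i)
            sum += xs[i].value;
        return sum;
    }

    namespace bp = boost::python;

    BOOST_PYTHON_MODULE(samples)
    {
        bp::class_<sample>("sample")
            .def_readwrite("value", &sample::value);

        // A minimal array-of-user-defined-type wrapper; a real
        // binding would also expose indexing, iteration, etc.  The
        // cast just pins down the push_back overload.
        bp::class_<std::vector<sample> >("sample_vector")
            .def("append",
                 static_cast<void (std::vector<sample>::*)(
                     sample const&)>(&std::vector<sample>::push_back))
            .def("__len__", &std::vector<sample>::size);

        bp::def("total", total);
    }

From Python you would build a samples.sample_vector, append sample
objects to it, and make a single call to total() rather than looping
in the interpreter.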

> How do you import the converters from one module to another?

The system does that for you; the demand for a converter for a given
type causes the converter chain in the global converter registry to
be bound to a reference at static initialization time.  Since all
modules that work with the same type refer to the same registry
entry, it "just works".
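
A minimal sketch of the cross-module case, with invented type and
module names, assuming both modules link against the same shared
Boost.Python library so they see one registry:

    // foo.hpp -- the shared C++ type (hypothetical)
    #include <string>

    struct foo
    {
        explicit foo(std::string const& n) : name_(n) {}
        std::string name() const { return name_; }
        std::string name_;
    };

    // module_a.cpp -- wrapping foo here registers its to/from-python
    // converters in the global registry when the module is imported.
    #include <boost/python.hpp>
    #include "foo.hpp"

    BOOST_PYTHON_MODULE(module_a)
    {
        boost::python::class_<foo>(
            "foo", boost::python::init<std::string const&>())
            .def("name", &foo::name);
    }

    // module_b.cpp -- never mentions class_<foo>.  The from-python
    // conversion for the foo argument is found at runtime in the
    // same registry entry module_a populated, so once both modules
    // are imported the call "just works".
    #include <boost/python.hpp>
    #include "foo.hpp"

    std::string greet(foo const& f) { return "hello, " + f.name(); }

    BOOST_PYTHON_MODULE(module_b)
    {
        boost::python::def("greet", greet);
    }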

> And how do type_info objects compare across DLL boundaries?

On most platforms, just fine because we've normalized them using
boost/python/type_id.hpp.  A few platforms (e.g. SGI) still have
problems, though.
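
For reference, the comparisons go through boost::python::type_id<>()
from that header rather than through raw typeid; a minimal sketch,
with an invented widget type:

    #include <boost/python/type_id.hpp>
    #include <iostream>

    struct widget {};

    int main()
    {
        // type_id<>() wraps std::type_info behind a comparison that
        // is meant to be stable across shared-library boundaries: on
        // platforms where raw type_info equality is unreliable, it
        // compares the type's name string instead of the type_info
        // object's identity.
        boost::python::type_info const a =
            boost::python::type_id<widget>();
        boost::python::type_info const b =
            boost::python::type_id<widget>();

        std::cout << a.name() << ": "
                  << (a == b ? "same" : "different") << "\n";
    }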

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
