[C++-sig] Re: Interest in luabind

David Abrahams dave at boost-consulting.com
Sun Jun 22 22:27:13 CEST 2003


"Daniel Wallin" <dalwan01 at student.umu.se> writes:

I wrote:

>> >> In fact, the more I look at the syntax of luabind, the more I like it.
>> >> Using addition for policy accumulation is cool.  The naming of the
>> >> policies is cool.
>> >
>> > It does increase compile times a bit though
>>
>> What, overloading '+'?  I don't think it's significant.
>
> I meant composing typelists with '+' as opposed to composing
> the typelist manually, as in BPL.

I think we agree that's probably minor.
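
For anyone else following along, the '+' trick boils down to something
like this (names invented, not luabind's actual code, just a sketch of
the technique):

  struct nil {};
  template <class H, class T> struct cons {};

  // append two typelists
  template <class L1, class L2> struct append;
  template <class L2> struct append<nil, L2> { typedef L2 type; };
  template <class H, class T, class L2>
  struct append<cons<H, T>, L2>
  {
      typedef cons<H, typename append<T, L2>::type> type;
  };

  // a policy pack is an (empty) object whose type carries a typelist
  template <class List> struct policies {};

  // '+' concatenates the typelists carried by two packs
  template <class L1, class L2>
  policies<typename append<L1, L2>::type>
  operator+(policies<L1>, policies<L2>)
  {
      return policies<typename append<L1, L2>::type>();
  }

  // individual policies are one-element packs
  struct adopt_tag {};
  struct copy_tag {};
  inline policies<cons<adopt_tag, nil> > adopt()
  {
      return policies<cons<adopt_tag, nil> >();
  }
  inline policies<cons<copy_tag, nil> > copy()
  {
      return policies<cons<copy_tag, nil> >();
  }

  // adopt() + copy() now has type
  //   policies<cons<adopt_tag, cons<copy_tag, nil> > >
  // i.e. the same typelist one would otherwise spell out by hand.

The instantiations involved are trivial, which is why I doubt the
operator itself costs anything measurable at compile time.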

>> >> >> > This doesn't increase compile times.
>> >> >>
>> >> >> Good.  Virtual functions come with bloat of their own, but that's
>> >> >> an implementation detail which can be mitigated.
>> >> >
>> >> > Right. The virtual functions aren't generated in the
>> >> > template, so there is very little code generated.
>> >>
>> >> I don't see how that's possible, but I guess I'll learn.
>> >
>> > We can generate the wrapper code in the template, and store
>> > function pointers in the object instead of generating a
>> > virtual function which generates the wrapper functions.
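
Ah, I think I see; something along these lines, presumably (a rough
sketch with invented names, so correct me if I've misunderstood):

  #include <new>

  typedef void (*construct_fn)(void* storage);

  struct class_rep                      // non-template, one per wrapped class
  {
      construct_fn construct;
      // ... name, bases, method table, etc.
  };

  template <class T>
  void construct_impl(void* storage)    // the only template-generated code
  {
      new (storage) T();
  }

  template <class T>
  struct class_
  {
      class_() { rep.construct = &construct_impl<T>; }  // store a pointer, no vtable
      class_rep rep;
  };

So the registration machinery calls through the stored pointers and
never needs a virtual function instantiated per class.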
>>
>> Well, IIUC, that means you have to treat def() the same when it
>> appears inside a class [...] as when it's inside a module [ ... ],
>> since there's no delayed evaluation.
>>
>>     ah, wait: you don't use [ ... ] for class, which gets you off
>>     the hook.
>>
>>     but what about nested classes?  Consistency would dictate the
>>     use of [ ... ].
>
> Right, we don't have nested classes. We have thought about a
> few solutions:
>
> class_<A>("A")
>   .def(..)
>   [
>     class_<inner>("inner")
>       .def(..)
>   ]
>   .def(..)
>   ;

Looks pretty!

> Or reusing namespace_:
>
>   class_<A>("A"),
>   namespace_("A")
>   [ class_<inner>(..) ]
>
> We thought that nested classes are less common than nested
> namespaces.

Either one works; I like the former, but I think you ought to be able
to do both.
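
Just to check my understanding of the former, I'd guess it comes down
to something like this (very rough, invented names, obviously not your
actual code):

  #include <iostream>
  #include <string>

  struct registration                   // type-erased "thing to register"
  {
      virtual ~registration() {}
      virtual void do_register(std::string const& scope) const = 0;
  };

  template <class T>
  struct class_ : registration
  {
      explicit class_(char const* name) : name_(name) {}

      template <class F>
      class_& def(char const* /*name*/, F)   // record a member function
      {
          return *this;
      }

      // nesting: register the inner class in this class' scope, then
      // return *this so .def() chaining continues on the outer class
      class_& operator[](registration const& inner)
      {
          inner.do_register(name_);
          return *this;
      }

      void do_register(std::string const& scope) const
      {
          std::cout << scope << "." << name_ << "\n";  // stand-in for real registration
      }

      std::string name_;
  };

The nice property is that operator[] returns the outer class_, so the
trailing .def(..) in your example just works.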

>> >> The ordering issues basically have to do with the requirement that
>> >> classes be wrapped and converters defined before they are used,
>> >> syntactically speaking.  That caused all kinds of inconveniences in
>> >> BPLv1 when interacting classes were wrapped.  OTOH I bet it's
>> >> possible to implicitly choose conversion methods for classes which
>> >> you haven't seen a wrapper for, so maybe that's less of a problem
>> >> than I'm making it out to be.
>> >
>> > Ok. In BPLv1 you generated converter functions using friend
>> > functions in templates though, and this was the cause for
>> > these ordering issues?
>>
>> That was one factor.  The other factor of course was that each
>> class which needed to be converted from Python used its own
>> conversion function, where a generalized procedure for converting
>> classes will do perfectly well.
>
> Right. We have a general conversion function for all
> user-defined types. 

We actually have something similar, plus dynamic lookup as a
fallback in case the usual method doesn't work.
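
In rough outline (invented names; the real code is more involved), the
lookup order looks like this:

  #include <map>
  #include <string>
  #include <typeinfo>

  typedef void* (*from_python_fn)(void* source);

  // the dynamic registry, keyed here by the C++ type's name
  inline std::map<std::string, from_python_fn>& registry()
  {
      static std::map<std::string, from_python_fn> r;
      return r;
  }

  // the usual method: if the source object is one of our wrapped
  // instances, ask it directly whether it holds a T
  inline void* find_wrapped_instance(void* /*source*/, std::type_info const& /*t*/)
  {
      return 0;  // stub for the sketch
  }

  template <class T>
  void* convert_from_python(void* source)
  {
      if (void* p = find_wrapped_instance(source, typeid(T)))
          return p;                                          // usual case

      std::map<std::string, from_python_fn>::const_iterator i =
          registry().find(typeid(T).name());
      return i == registry().end() ? 0 : i->second(source);  // dynamic fallback
  }

More on the "usual method" below, where you ask about the map lookup.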

> More on this later.

OK

>> There is still an issue of to-python conversions for wrapped
>> classes; different ones get generated depending on how the class is
>> "held".  I'm not convinced that dynamically generating the smart
>> pointer conversions is needed, but conversions for the
>> virtual-function-dispatching subclass may be.
>
> I don't understand how this has anything to do with ordering. Unless
> you mean that you need to register the types before executing
> python/lua code that uses them, which seems pretty obvious. :)

It has nothing to do with ordering; I'm just thinking out loud about
how much dynamic lookup is actually buying in Boost.Python.

>> >> >> How do you *add* a way to convert from Python type A to C++ type B
>> >> >> without masking the existing conversion from Python type Y to C++
>> >> >> type Z?
>> >> >
>> >> > I don't understand. How are B and Z related? Why would a
>> >> > conversion function for B mask conversions to Z?
>> >>
>> >> Sorry, B==Z ;-)
>> >
>> > Ah, ok. Well, this isn't finished either. We have a
>> > (unfinished) system which works like this:
>> >
>> > template<>
>> > struct implicit_conversion<0, B> : from<A> {};
>> > template<>
>> > struct implicit_conversion<1, B> : from<Y> {};
>> >
>> > Of course, this has all the problems with static dispatch as
>> > well..
>>
>> And with multiple implicit conversions being contributed by
>> multiple people.  Also note that in many environments there's no
>> guarantee that different extension modules won't share a link
>> namespace, so you have to watch out for ODR problems.
>
> Right. We didn't really intend for luabind to be used in this way,
> but rather for binding closed modules. 

I think I'm saying that on some systems (not many), there's no such
thing as a "closed module".  If they're loaded in the same process,
they share a link namespace :(

>> > I think so too. I'm looking around in BPL's conversion system now
>> > trying to understand how to incorporate it into luabind.
>>
>> I am not convinced I got it 100% right.  You've forced me to think
>> about the issues again in a new way.  It may be that the best
>> answer blends our two approaches.
>
> Your converter implementation with static refs to the
> registry entry is really clever.

Thanks!

> Instead of doing this we have general converters which are used to
> convert all user-defined types.

I have the same thing for most from_python conversions; the registry
is only used as a fallback in that case.

> To do this we need a map<..> lookup to find the appropriate
> converter and this really sucks.

I can't understand why you'd need that, but maybe I'm missing
something.  The general mechanism in Boost.Python is that
instance_holder::holds(type_info) will give you the address of the
contained instance if it's there.
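
In simplified form (the real interface differs in its details), the
idea is:

  #include <typeinfo>

  struct instance_holder
  {
      instance_holder() : next(0) {}
      virtual ~instance_holder() {}
      virtual void* holds(std::type_info const&) = 0;  // address of held object, or 0
      instance_holder* next;                           // an instance may have several holders
  };

  template <class Held>
  struct value_holder : instance_holder
  {
      explicit value_holder(Held const& x) : m_held(x) {}

      void* holds(std::type_info const& t)
      {
          return t == typeid(Held) ? &m_held : 0;
      }

      Held m_held;
  };

  // extracting a T from a wrapped instance is just a walk over its
  // holders; no map lookup anywhere
  inline void* find_instance(instance_holder* p, std::type_info const& t)
  {
      for (; p != 0; p = p->next)
          if (void* q = p->holds(t))
              return q;
      return 0;
  }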

> As mentioned before, Lua can have multiple states, so it would be
> cool if the converters could be bound to the state somehow.

Why?  It doesn't seem like it would be very useful to have different
states doing different conversions.

> This would probably mean we would need to store a hash table in the
> registry entries and hash the lua state pointer (or something
> associated with the state) though, and I don't know if there is
> sufficient need for the feature to introduce this overhead.
>
> I don't know if I understand the issues with multiple extension
> modules. You register the converters in a map with the typeinfo as
> key, but I don't understand how this could ever work between
> DLLs. Do you compare the type names?

Depends on the platform.  See my other message and
boost/python/type_id.hpp.
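
The short version of the trick (not the actual contents of
type_id.hpp, but the same idea): compare type names rather than
type_info identities.

  #include <cstring>
  #include <typeinfo>

  struct type_id_t
  {
      explicit type_id_t(std::type_info const& t) : name_(t.name()) {}

      bool operator==(type_id_t const& rhs) const
      {
          return std::strcmp(name_, rhs.name_) == 0;  // names, not addresses
      }
      bool operator<(type_id_t const& rhs) const      // so it can key a map or set
      {
          return std::strcmp(name_, rhs.name_) < 0;
      }

      char const* name_;
  };

  template <class T>
  type_id_t type_id() { return type_id_t(typeid(T)); }

On platforms where distinct shared libraries get distinct type_info
objects for the same type, the string comparison still matches.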

> If so, this could never work between modules compiled with different
> compilers. 

If they don't have compatible ABIs you don't want them to match
anyway, but this is currently an area of weakness in the system.

> So it seems to me like this feature can't be that useful,
> what am I missing?

Well, it's terribly useful for teams who are developing large
systems.  Each individual can produce wrappers for just her part of
it, and they all interact correctly.

> Anyway, I find your converter system more appealing than
> ours. There are some issues which need to be taken care of:
> we choose the best match, not the first match, when trying
> different overloads. This means we need to keep the storage
> for the converter on the stack of a function that is unaware
> of the converter size (at compile time). So we need to either
> have a fixed-size buffer on the stack, and hope it works, or
> allocate the storage at runtime.

I would love to have best-match conversion.  I was going to do it at
one point, but eventually realized that users can sort the overloads
so that they always work, so I never bothered to code it.

> For clarification:
>
> void dispatcher(..)
> {
>   *storage here*
>   try all overloads
>   call best overload
> }

I've already figured out how to solve this problem; if we can figure
out how to share best-conversion technology I'll happily code it up
;-)
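
To give a rough idea of the direction, here's one way to see that the
storage problem is solvable (invented names, not necessarily what
either of us would actually write): separate ranking a candidate from
calling it, so the conversion storage lives in the frame of the
wrapper that knows the converter types.

  struct args;   // whatever represents the incoming script arguments

  struct overload
  {
      // returns a match quality, or -1 if the overload can't be called;
      // ranking inspects the arguments but constructs nothing, so it
      // needs no converter storage
      int (*match)(args const&);

      // performs the conversions (storage local to the wrapper, which
      // knows the converter types and sizes) and calls the function
      void (*call)(args const&);
  };

  inline void dispatch(overload const* candidates, int n, args const& a)
  {
      int best = -1, best_score = -1;
      for (int i = 0; i < n; ++i)
      {
          int score = candidates[i].match(a);
          if (score > best_score)
          {
              best_score = score;
              best = i;
          }
      }
      if (best >= 0)
          candidates[best].call(a);
      // else: report "no matching overload"
  }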

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




