[C++-sig] V2: wrapping int/double bug(?)

Pearu Peterson pearu at cens.ioc.ee
Tue May 14 00:05:01 CEST 2002


On Tue, 30 Apr 2002, David Abrahams wrote:

<snip>

> > That is, definitions
> >      .def_init(boost::python::args<double const>())
> > and
> >      .def("__add__",ex_add_int)
> > are ignored.
> 
> They're not ignored, but the effect ends up being the same. The built-in
> numeric type converters will convert any Python object whose type
> implements __float__ to a double, and any object whose type implements
> __int__ to an int. The builtin numeric types all implement these
> methods:
> 
> >>> 3.14.__int__()
> 3
> >>> x = 3
> >>> x.__float__()
> 3.0
> 
> The overloading mechanism in Boost.Python is quite simple-minded: rather
> than searching for some kind of "best match", it calls the first
> function whose signature can match the argument list. In Boost.Python v1
> this was never a problem, because the conversion rules were less liberal.
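
To make the failure mode concrete (a hypothetical sketch in the spirit
of the example above; ex and the free functions stand in for the real
wrapped class and overloads):

  // Stand-ins for the wrapped class and its __add__ overloads.
  struct ex { /* ... */ };
  ex ex_add_int(ex const& self, int)       { return self; } // exact
  ex ex_add_double(ex const& self, double) { return self; } // inexact

  m.add(boost::python::class_<ex>("ex")
   .def("__add__", ex_add_int)    // tried first: its int converter
                                  // accepts anything with __int__,
                                  // float objects included
   .def("__add__", ex_add_double) // never reached for the built-in
                                  // numeric types
  );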

Actually, it was also a problem in Boost.Python v1. I solved it there by
avoiding methods with int or float arguments altogether: I exposed
methods taking PyObject* arguments instead, side-stepping the "smart"
behaviour of BPL and implementing the conversion rules my application
needs explicitly.

> It's possible that liberal conversion rules are just a mistake here, but

I think so too. At least there should be a way to disable them (see also
below).

> I'm inclined to think that a more-sophisticated overloading approach
> akin to multimethods would be more appropriate, as I suggest in
> libs/python/doc/v2/Mar2002.html#implicit_conversions. However, that
> would be a nontrivial change and I'm not prepared to put it in my
> development schedule at this time.

I don't know what a "CLOS-style multimethod" is (so what follows may be
irrelevant), but I don't see how your suggestion would solve the problem
at hand. That is, if a wrapped class A defines the methods

 .foo(int)
 .foo(float)

and a class B has both __int__ and __float__ methods, then in Python

 A().foo(B())

how can you tell which of __int__ and __float__ should be called for the
conversion? E.g. take B=int and then take B=float: each choice favours a
different overload.

To do this properly, at some point the responsible mechanism in BPL
would have to check whether an instance B() has more of an "int nature"
or more of a "float nature". This is easy, in principle, if B is int or
float (or a subclass of either), but not obvious at all if B is, for
example, str or a user-defined class that defines both __int__ and
__float__ methods.
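
For the easy cases the check is just a concrete-type test. A minimal
sketch against the Python 2 C API (the helper names are mine):

  #include <Python.h>

  // The Py*_Check macros return true for the exact type and (since
  // Python 2.2) for its subclasses as well.
  bool has_int_nature(PyObject* obj)   { return PyInt_Check(obj) != 0; }
  bool has_float_nature(PyObject* obj) { return PyFloat_Check(obj) != 0; }

For a str, or for a user-defined class with both __int__ and __float__,
both tests fail and no preference can be deduced.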

In addition, I would expect an exception whenever there are multiple
choices for the implicit conversion. Users would then be forced to state
their intention explicitly by calling, for example,

  A().foo(int(B()))

or

  A().foo(float(B()))


Currently, I can work around this by defining a single method

  .foo(PyObject*)

that explicitly checks whether the object is an int or a float and
performs the appropriate conversion itself. This approach works fine for
methods with int/float arguments, and I am happy with it.
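
In code, the workaround looks roughly like this (a sketch against the
Python 2 C API; A_foo and the error handling are illustrative):

  #include <Python.h>
  // plus the boost/python headers used elsewhere in this post

  // One exported method that inspects the concrete Python type and
  // forwards to the matching C++ overload explicitly.
  void A_foo(A& self, PyObject* obj)
  {
      if (PyInt_Check(obj))          // int, or a subclass of int
          self.foo(int(PyInt_AsLong(obj)));
      else if (PyFloat_Check(obj))   // float, or a subclass of float
          self.foo(float(PyFloat_AsDouble(obj)));
      else {
          PyErr_SetString(PyExc_TypeError, "foo() expects an int or a float");
          throw boost::python::error_already_set();
      }
  }

registered once as .def("foo", A_foo).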
 
BUT there is a real problem with constructors. Namely, one cannot define
an additional constructor (needed to be explicit about conversions, for
the same reasons as discussed above for methods)

  A(PyObject*)

without introducing a lightweight wrapper around the library class A.
That approach is unacceptable because every method of A that I want to
use from Python would have to be re-wrapped as well. In my particular
case the number of relevant methods can exceed 50, so this "lightweight"
wrapper would get quite heavy. In addition, there are about ten more
classes from the same library that I would like to wrap for Python. All
these classes define methods that take each other's instances as
arguments and may return yet other instances, etc.

So, you can imagine that this approach (introducing additional wrapper
classes for library classes just because one needs a constructor
A(PyObject*), whether to work around the implicit conversion mechanism
of BPL or for other reasons -- see the __init__ proposal below) would
lead to an overly complex interface.

Now, the question is whether it is possible to introduce additional
constructors for library classes without deriving a new class. For
example, given the following library class

class A {
  public:
    A(int);
    A(float);
};

then instead of the problematic case

 m.add(boost::python::class_<A>("A")
  .def_init(boost::python::args<int>())
  .def_init(boost::python::args<float>())
 );

one could define

 m.add(boost::python::class_<A>("A")
  .def("__init__",my_A_init)
 );

where

  A my_A_init(PyObject*);

What do you think? Would the above be possible in principle?
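
If it were, the factory could dispatch explicitly, just like the method
workaround above (again a sketch; the body is hypothetical):

  #include <Python.h>

  // Factory for .def("__init__", my_A_init): pick the C++ constructor
  // from the concrete Python type of the argument.
  A my_A_init(PyObject* obj)
  {
      if (PyInt_Check(obj))
          return A(int(PyInt_AsLong(obj)));
      if (PyFloat_Check(obj))
          return A(float(PyFloat_AsDouble(obj)));
      PyErr_SetString(PyExc_TypeError, "A() expects an int or a float");
      throw boost::python::error_already_set();
  }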

Notice that an option to define additional __init__ methods would be
useful in general, not only for this specific workaround of the
int/float issue.


> Would it be acceptable to simply use the versions of those functions
> which accept double parameters, and skip the "int" versions?

Unfortunately, no. The library that I am trying to wrap does symbolic
algebraic calculations, and it is crucial to keep "exact" and "inexact"
(but arbitrary-precision) numbers separate.

Regards,
	Pearu

PS: Sorry for the late response.
