[Python-Dev] Pre-PEP: Redesigning extension modules

Stefan Behnel stefan_ml at behnel.de
Sat Aug 31 21:16:10 CEST 2013


Nick Coghlan, 31.08.2013 18:49:
> On 25 Aug 2013 21:56, "Stefan Behnel" wrote:
>>>>> One key point to note is that it *doesn't* call
>>>>> _PyImport_FixupExtensionObject, which is the API that handles all the
>>>>> PEP 3121 per-module state stuff. Instead, the idea will be for modules
>>>>> that don't need additional C level state to just implement
>>>>> PyImportExec_NAME, while those that *do* need C level state implement
>>>>> PyImportCreate_NAME and return a custom object (which may or may not
>>>>> be a module subtype).
>>>>
>>>> Is it really a common case for an extension module not to need any C
>>>> level
>>>> state at all? I mean, this might work for very simple accelerator
>>>> modules
>>>> with only a few stand-alone functions. But anything non-trivial will
>>>> almost
>>>> certainly have some kind of global state, cache, external library,
>>>> etc.,
>>>> and that state is best stored at the C level for safety reasons.
> 
> In my experience, most extension authors aren't writing high performance C
> accelerators, they're exposing an existing C API to Python. It's the cffi
> use case rather than the Cython use case.

Interesting. I can't really remember a case where I could afford the
runtime overhead of implementing a wrapper in Python and going through
something like ctypes or cffi. Testing C libraries with Python tools would
be one, but then you wouldn't write an extension module for that anyway;
you'd want to call the library from the test code as directly as possible.

I'm certainly aware that that use case exists, though, and also the case of
just wanting to get things done as quickly and easily as possible.


> Mutable module global state is always a recipe for obscure bugs, and not
> something I will ever let through code review without a really good
> rationale. Hidden process global state is never good, just sometimes a
> necessary evil.

I'm not necessarily talking about mutable state, but rather about things
like pre-initialised data or imported functionality. For example, I often
have a bound method of a compiled regex lying around somewhere in my
Python modules as a utility function. The same kind of thing exists in C
code: some of it may be local to a class, but other parts may well be
module global. And given that we are talking about module internals here,
I'd always keep them at the C level rather than exposing them through the
module dict. The module dict involves a much higher access overhead, in
addition to the reduced safety due to user accessibility.
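
To make that more concrete, here's a minimal sketch of what I mean, using
today's PEP 3121 machinery (the module name, the state layout and the
regex are obviously made up):

    /* Sketch: pre-initialised data kept in PEP 3121 per-module state
     * instead of the module dict. */
    #include <Python.h>

    typedef struct {
        PyObject *match;  /* bound method of a pre-compiled regex */
    } module_state;

    static struct PyModuleDef examplemodule = {
        PyModuleDef_HEAD_INIT,
        "example",              /* m_name */
        NULL,                   /* m_doc */
        sizeof(module_state),   /* m_size: request per-module state */
        NULL,                   /* m_methods, omitted for brevity */
        /* m_reload/m_traverse/m_clear/m_free also omitted */
    };

    PyMODINIT_FUNC
    PyInit_example(void)
    {
        PyObject *m = PyModule_Create(&examplemodule);
        if (m == NULL)
            return NULL;
        module_state *st = (module_state *)PyModule_GetState(m);
        PyObject *re = PyImport_ImportModule("re");
        PyObject *pattern = re == NULL ? NULL :
                PyObject_CallMethod(re, "compile", "s", "[a-z]+");
        st->match = pattern == NULL ? NULL :
                PyObject_GetAttrString(pattern, "match");
        Py_XDECREF(pattern);
        Py_XDECREF(re);
        if (st->match == NULL) {
            Py_DECREF(m);
            return NULL;
        }
        /* C code in the module now uses st->match directly; nothing
         * needs to be looked up in (or exposed via) the module dict. */
        return m;
    }

That way, the access cost is a pointer dereference instead of a string
keyed dict lookup, and users can't accidentally (or deliberately) replace
the entry from Python code.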

Exported C-APIs are also a use case. You'd import the C-API of another
module at init time and from that point on only go through function
pointers etc. Those are (sub-)interpreter specific, i.e. they are module
global state that is specific to the currently loaded module instances.
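
This is essentially what datetime's C-API already does with a capsule.
Roughly, on the importing side (the "other.capi" capsule name and the
function table are invented for illustration):

    /* Sketch: import another module's C-API once at init time, then
     * call through function pointers kept in module state. */
    #include <Python.h>

    typedef struct {
        int (*do_something)(int arg);  /* exported function table */
    } other_capi_t;

    typedef struct {
        other_capi_t *capi;  /* specific to this module instance */
    } module_state;

    static int
    import_other_capi(PyObject *module)
    {
        module_state *st = (module_state *)PyModule_GetState(module);
        /* PyCapsule_Import() imports "other" if necessary and unpacks
         * the pointer stored in its "capi" attribute. */
        st->capi = (other_capi_t *)PyCapsule_Import("other.capi", 0);
        return st->capi != NULL ? 0 : -1;
    }

From that point on, every call goes straight through
st->capi->do_something() without touching any dicts.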


> However, keep in mind my patch is currently just the part I can implement
> without PEP 451 module spec objects.

Understood.


>> Note that even global functions usually hold state, be it in the form of
>> globally imported modules, global caches, constants, ...
> 
> If they can be shared safely across multiple instances of the module (e.g.
> immutable constants), then these can be shared at the C level. Otherwise, a
> custom Python type will be needed to make them instance specific.

I assume you meant a custom module (extension) type here.

Just to be clear, the "module state at the C-level" is meant to be stored
in the object struct fields of the extension type that implements the
module, at least for modules that want to support reloading and
sub-interpreters. Obviously, nothing should be stored in static (global)
variables etc.
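
Something like this, structurally (just a sketch; how exactly the create
hook instantiates and initialises the type is the part that's still open):

    /* Sketch: module state lives in ordinary object struct fields of
     * the extension type that implements the module. */
    #include <Python.h>

    typedef struct {
        PyObject_HEAD
        PyObject *utility;  /* e.g. the bound regex method from above */
        void *clib_ctx;     /* handle into a wrapped C library */
    } example_module_object;

    static PyTypeObject example_module_type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        "example",                      /* tp_name */
        sizeof(example_module_object),  /* tp_basicsize */
        /* tp_dealloc, PyType_Ready() etc. omitted for brevity */
    };

    /* Each import - and each sub-interpreter - gets its own instance,
     * so there is no static state to share by accident. */
    static PyObject *
    create_example_module(void)
    {
        example_module_object *mod =
                PyObject_New(example_module_object, &example_module_type);
        if (mod == NULL)
            return NULL;
        mod->utility = NULL;
        mod->clib_ctx = NULL;
        return (PyObject *)mod;
    }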


>>> We also need the create/exec split to properly support reloading. Reload
>>> *must* reinitialize the object already in sys.modules instead of
>>> inserting
>>> a different object or it completely misses the point of reloading
>>> modules
>>> over deleting and reimporting them (i.e. implicitly affecting the
>>> references from other modules that imported the original object).
>>
>> Interesting. I never thought of it that way.
>>
>> I'm not sure this can be done in general. What if the module has threads
>> running that access the global state? In that case, reinitialising the
>> module object itself would almost certainly lead to a crash.
> 
> My current proposal on import-sig is to make the first hook
> "prepare_module", and pass in the existing object in the reload case. For
> the extension loader, this would be reflected in the signature of the C
> level hook as well, so the module could decide for itself if it supported
> reloading.

I really don't like the idea of reloading by replacing module state. It
would be much simpler if the module itself were replaced; then the
original module could stay alive and could still be used by those who hold
a reference to it or to parts of its contents. Especially the from-import
case would benefit from this. Obviously, you could still run into obscure
bugs where a function you call rejects the input because it expects an
older version of a type, for example. But I can't see that being worse
than (or even just different from) the reload-by-refilling-dict case.

You seemed to be ok with my idea of making the loader return a wrapped
extension module instead of the module itself. We should actually try that.


> This is actually my primary motivation for trying to improve the
> "can this be reloaded or not?" aspects of the loader API in PEP 451.

I assume you mean that the extension module would be able to clearly signal
that it can't be reloaded, right? I agree that that's helpful. If you're
wrapping a C library, then the way that library is implemented might simply
force you to prevent any attempts at reloading the wrapper module. But if
reloading is possible at all, it would be even more helpful if we could
make it really easy to properly support it.
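
In terms of the proposed create hook, refusing could be as simple as this
(purely hypothetical, since the hook's name and signature are exactly
what's being designed right now):

    /* Purely hypothetical sketch: a create hook that refuses to reload
     * because the wrapped library cannot be re-initialised. Whether the
     * hook receives the existing module like this is an open question. */
    #include <Python.h>

    PyObject *
    PyImportCreate_example(PyObject *existing_module)
    {
        if (existing_module != NULL) {
            PyErr_SetString(PyExc_ImportError,
                            "example wraps a C library that cannot be "
                            "re-initialised; reloading is unsupported");
            return NULL;
        }
        /* ... normal module creation would follow here ... */
        Py_RETURN_NONE;  /* placeholder for the real module object */
    }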


> (keep in mind existing extension modules using the existing API will still
> never be reloaded)

Sure, that's the cool thing. We can really design this totally from scratch
without looking back.


>>> Take a look at the current example - everything gets stored in the
>>> module dict for the simple case with no C level global state.
>>
>> Well, you're storing types there. And those types are your module API. I
>> understand that it's just an example, but I don't think it matches a
>> common
>> case. As far as I can see, the types are not even interacting with each
>> other, let alone doing any C-level access of each other. We should try to
>> focus on the normal case that needs C-level state and C-level field access
>> of extension types. Once that's solved, we can still think about how to
>> make the really simple cases simpler, if it turns out that they are not
>> simple enough.
> 
> Our experience is very different - my perspective is that the normal case
> either eschews C level global state in the extension module, because it
> causes so many problems, or else just completely ignores subinterpreter
> support and proper module cleanup.

As soon as you have more than one extension type in your module, and they
interact with each other, they will almost certainly have to do type checks
against each other to make sure users haven't passed them rubbish before
they access any C struct fields of the object. Doing a type check means
that at least one type holds a pointer to the other, and that pointer is
module global state.
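
For illustration (names invented), the check itself is cheap, but the
second type object has to live somewhere that is specific to the module
instance:

    /* Sketch: a function of one type checking its argument against a
     * second type from the same module. The pointer to that type is,
     * by construction, module global state. */
    #include <Python.h>

    typedef struct {
        PyTypeObject *writer_type;  /* filled in at module init time */
    } module_state;

    static PyObject *
    reader_attach(PyObject *module, PyObject *arg)
    {
        module_state *st = (module_state *)PyModule_GetState(module);
        if (!PyObject_TypeCheck(arg, st->writer_type)) {
            PyErr_SetString(PyExc_TypeError, "expected a Writer");
            return NULL;
        }
        /* only now is it safe to access arg's C struct fields */
        Py_RETURN_NONE;
    }

Whether that pointer sits in a PEP 3121 state struct or in the module
object's own struct fields doesn't matter much; it's module global state
either way.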

I really think that having some kind of global module state is the
exceedingly common case for an extension module.


>> I didn't know about PyType_FromSpec(), BTW. It looks like a nice addition
>> for manually written code (although useless for Cython).
> 
> This is the only way to create custom types when using the stable ABI. Can
> I take your observation to mean that Cython doesn't currently offer the
> option of limiting itself to the stable ABI?

Correct. I took a bird's-eye view of it back then and kept stumbling over
"wow, I couldn't even use that?" kinds of declarations in the header files.
I don't think it makes sense for Cython. Existing CPython versions are easy
to support because they don't change anymore, and new major releases most
likely need adaptations anyway, if only to adapt to new features and
performance changes. Cython actually knows quite a lot about the inner
workings of CPython and its various releases. Going only through the stable
ABI parts of the C-API would make the code horribly slow in comparison, so
there are huge drawbacks for the benefit it might give.
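
(For anyone else who hadn't seen it: PyType_FromSpec() builds a heap type
at runtime from a spec and a slots array instead of a static PyTypeObject.
A minimal, made-up example:)

    /* Minimal PyType_FromSpec() usage: the type is created at runtime
     * from a spec and a slots array. */
    #include <Python.h>

    typedef struct {
        PyObject_HEAD
        int value;
    } thing_object;

    static PyType_Slot thing_slots[] = {
        {Py_tp_doc, "example heap type"},
        {0, NULL},
    };

    static PyType_Spec thing_spec = {
        "example.Thing",       /* name */
        sizeof(thing_object),  /* basicsize */
        0,                     /* itemsize */
        Py_TPFLAGS_DEFAULT,    /* flags */
        thing_slots,
    };

    /* in module init:
     *     PyObject *thing_type = PyType_FromSpec(&thing_spec);
     */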

The Cython way of doing it is more like: if you want your code to run on a
new CPython version, use a recent Cython release to compile it. It may
still work with older ones, but what you actually want is the newest
anyway, and you also want to compile the C code for the specific CPython
version at hand to get the most out of it. It's the C code that adapts, not
the runtime code (or Cython itself).

We run continuous integration tests with all of CPython's development
branches since 2.4, so we usually support new CPython releases long before
they are out. And new releases of CPython rarely affect Cython user code.

Stefan



