Multiple, separated interpreters in a single process

Tobias Oberstein Tobias.Oberstein at gmx.de
Fri Feb 7 06:05:40 EST 2003


Paul Rubin phr-n2003b at NOSPAMnightsong.com  06 Feb 2003 12:29:20 -0800 wrote

> "Tobias Oberstein" <Tobias.Oberstein at gmx.de> writes:
> > I cannot go with namespaces or interpreters in different
> > processes, because though I don't need to share Python object
> > spaces directly (which would obviously require fine-grained
> locking with all its performance hits), I need to share the
> > application context (OODBMS) of the multithreaded application
> > that embeds Python. The application context is wrapped up in
> > Python extension classes, which take care of the necessary
> > synchronisation and locking.
>
> If those extension classes don't themselves call the interpreter,
> maybe you can put those objects in a separate process and let your
> different interpreters (each in their own process) communicate through
> an RPC interface.  That's sort of like a traditional database that
> accepts multiple client connections.

The extension class I'm looking for would serve as a base type to
inherit from in Python. All inheriting classes are then persistable
(similar to ZODB). I suppose you suggest instantiating _all_ objects of
persistable classes within a single interpreter (the one embedded in
the backend OODBMS) and accessing those objects out-of-process via some
kind of IPC. If I understand this correctly, that would mean:

- implementing a distributed object schema
- a context switch (because of the IPC) on every method call of an
  object of a persistable class
- a bottleneck (i.e. non-scalability on SMP machines) at
  the "backend interpreter"

Seems to bring more problems than it solves.


Aahz aahz at pythoncraft.com 6 Feb 2003 16:38:02 -0500 wrote

> In article <mailman.1044547994.22179.python-list at python.org>,
> Tobias Oberstein <Tobias.Oberstein at gmx.de> wrote:
> >
> >I'd like to have multiple interpreters within a single process
>such that the interpreters are completely separated (like in TCL):
> >
> >- different object spaces
> >- different GILs
>
> No can do.  Problem is that extensions loaded by Python might have
> static data.

Wasn't it a design flaw (in retrospect) to allow static data at
all? Anyway, I could live with a situation in which only the most
important extensions keep working.
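
To make that concrete, here is a minimal sketch (a made-up extension
module "spam", Python 2.x C API) of the kind of static data meant here:
the C-level state exists once per process, so two "separate"
interpreters importing this module would silently share it.

#include <Python.h>

static long counter = 0;    /* one copy per process, not per interpreter */

static PyObject *
spam_bump(PyObject *self, PyObject *args)
{
    counter++;                        /* visible to every interpreter */
    return PyInt_FromLong(counter);
}

static PyMethodDef SpamMethods[] = {
    {"bump", spam_bump, METH_NOARGS, "Increment a process-global counter."},
    {NULL, NULL, 0, NULL}
};

void
initspam(void)
{
    Py_InitModule("spam", SpamMethods);
}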


Mark Hammond mhammond at skippinet.com.au
Thu, 06 Feb 2003 22:20:25 GMT wrote

> Tobias Oberstein wrote:
> > I know the issue was there previously, but I couldn't find a
> > definitive and current answer. So maybe someone could clarify?
> >
> > I'd like to have multiple interpreters within a single process
> > such that the interpreters are completely separated (like in TCL):
> >
> > - different object spaces
> > - different GILs
> >
> > ...
>
> There is an existing PyInterpreterState, including nice API for
> instantiating and installing, for exactly this purpose.

Thanks for pointing this out. I've looked at it, but it seems to me that
it's not integrated with the rest of the Python/C API at all; otherwise
I would expect something like the following (C has no overloaded
functions, so new names would be needed; append "_Ex", say):


/* hypothetical "_Ex" variants, each taking an explicit interpreter: */

PyInterpreterState* Py_Initialize_Ex()

PyThreadState* Py_NewInterpreter_Ex(PyInterpreterState *interp)

int PyRun_SimpleString_Ex(PyInterpreterState *interp, char *command)

void Py_EndInterpreter_Ex(PyInterpreterState *interp, PyThreadState *tstate)

void Py_Finalize_Ex(PyInterpreterState *interp)


which isn't the case. Please correct me if I've misunderstood you.
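
For reference, my current understanding of how the existing
sub-interpreter API is meant to be driven (a minimal, single-threaded
sketch, error handling omitted):

#include <Python.h>

int
main(void)
{
    PyThreadState *main_tstate, *sub_tstate;

    Py_Initialize();                   /* main interpreter */
    main_tstate = PyThreadState_Get();

    sub_tstate = Py_NewInterpreter();  /* 2nd interpreter, becomes current */
    PyRun_SimpleString("x = 42\n");    /* executes in the sub-interpreter */

    Py_EndInterpreter(sub_tstate);     /* destroy the sub-interpreter */
    PyThreadState_Swap(main_tstate);   /* main interpreter current again */

    Py_Finalize();
    return 0;
}

The interpreter is always implicit in the current thread state, which is
exactly the coupling the "_Ex" variants would make explicit. And all
interpreters created this way still share the one GIL.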

Nevertheless, if the proposed "_Ex" interface is not complete nonsense,
it should be possible to introduce it without breaking backward
compatibility:

/* hypothetical backward-compatible wrappers over the "_Ex" API: */

static PyInterpreterState *interp;   /* the implicit default interpreter */

void
Py_Initialize(void)
{
    interp = Py_Initialize_Ex();
}

int
PyRun_SimpleString(char *command)
{
    return PyRun_SimpleString_Ex(interp, command);
}

and so on. Comments?

>
> Note that hardly anyone has used it, so there may be bugs.  Indeed,
> there may be additional static data that needs to be moved into this
> structure.

Yes, the GIL, for example. Moving the GIL (PyThread_type_lock
interpreter_lock;) into PyInterpreterState would ultimately allow for
better scalability on SMP: given an OS that supports thread affinity,
one could create OS threads such that each is bound to one specific
PyInterpreterState (each with its own GIL). That would allow Python to
scale in multithreaded embedded scenarios on SMP machines.
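
Roughly, the change would amount to something like the following
(a hypothetical, abbreviated sketch; this is not how Include/pystate.h
and the ceval lock are actually organised today):

/* PyInterpreterState, abbreviated, with the interpreter lock moved in
   from the global ceval state (hypothetical): */
typedef struct _is {
    struct _is *next;
    struct _ts *tstate_head;

    PyObject *modules;
    PyObject *sysdict;
    PyObject *builtins;

    /* ... */

    PyThread_type_lock interpreter_lock;  /* per-interpreter GIL (proposed) */
} PyInterpreterState;

Each interpreter would then serialize only its own threads, while
threads bound to different interpreters could run truly in parallel.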





