Nolan's question of the day: distributed servers?

Nolan Darilek nolan at ethereal.dhis.org
Thu Sep 30 00:39:50 EDT 1999


I've kept up a good tradition (Maybe not, depending on your point of
view :) of asking one question per day. Here's today's. :)

My current big project is a MUD server in Python. I have some
interesting ideas, only one of which I'll bore you all with. :) I
honestly don't know how Python will perform in this area. I'm not
aiming my code at lower-end processors, so I'm not terribly worried,
but to spread some of the load around, I'm trying to either use an
existing object distribution scheme such as CORBA/ILU or, *shudder*,
write my own. :)

Let me start by saying that I am not intimately familiar with CORBA or
ILU. I know what they were meant to accomplish, and I could probably
write a simple program using them, but my needs are slightly
different, and I don't know whether these schemes will handle them.

Basically, persistent objects are loaded into Namespaces. A Namespace
is a class which exposes every object stored in a specified location
(location meaning a database) as an attribute. So, if you had an
object called Ball in a database stored in a file called "objects.db",
you could do:

# Open the database and load all objects.
root = Namespace("objects.db")
# Access our Ball.
ball = root.Ball
# And then bounce it.
ball.bounce()

Since a namespace itself is a persistent object, databases can store
other namespaces. However, namespaces don't just access databases;
they can also access servers. So, you could do:

root.server1 = Namespace("thematrix.mud.org", 4000)

and then all objects available from the server at thematrix.mud.org
port 4000 would be accessible. Initially this scheme will require a
trusted environment; security should be implemented later.

So, a network of namespaces will exist, each accessing either a
database or a server. My current concern, though, is that the server
shouldn't have to know how every single object on every single server
works before it starts. From what I've seen of CORBA (and correct me
if I'm wrong), class interfaces need to be specified beforehand and
compiled into skeletons. This wouldn't work for me, since if someone
on server x decides to code, say, a flying car, writing and compiling
a skeleton for it wouldn't be practical.
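One thing in Python's favor here is that an object's interface can be
discovered at runtime with plain introspection, with no precompiled
skeleton at all. A minimal sketch (export_interface is a name I'm
making up for illustration):

```python
def export_interface(obj):
    """Return the names of an object's public methods, found at runtime."""
    return [name for name in dir(obj)
            if not name.startswith('_') and callable(getattr(obj, name))]

class Ball:
    """A hypothetical game object coded on some remote server."""
    description = "a red rubber ball"

    def bounce(self):
        return "boing"
```

Here export_interface(Ball()) would report just ['bounce'], since
description is data rather than a method; nothing about Ball needs to
be declared ahead of time.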

So my current questions are: is it possible to somehow export a
dynamic interface in realtime, as opposed to carefully specifying an
interface beforehand and loading it into the server?

Another idea I have been considering is breaking each object into two
components, an exportable version and a server-resident version. If
someone gets a ball from another server, the user's server is only
sent a ball with its description and a list of commands/methods. If
the user chooses to bounce the ball, the user's server passes the
bounce request to the server which originated the ball; the ball
processes the method there and returns the results to the user. This
method seems somewhat unclean, though, since the person's ball
wouldn't bounce if the server which originated it was down. If,
instead, the user's server knew how to bounce the ball, it would be
able to work independently.

Another option would be to pickle requested objects on the originating
server and send them over the network, possibly compressing them with
zlib for speed. So, if someone requests a copy of root.server1.ball,
the ball object would be pickled on its home server and sent across
the wire. Perhaps this would be the best solution, since both sides of
the connection would have a copy of the object, though changes in one
would still need to affect the other, which may prove difficult.
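The pickle-and-compress round trip itself is straightforward; a
minimal sketch, with the transport left out and the function names
made up for illustration:

```python
import pickle
import zlib

class Ball:
    """A trivially picklable game object."""
    def __init__(self, color):
        self.color = color

    def bounce(self):
        return "boing"

def export_object(obj):
    # Originating server: pickle the object, then compress the bytes
    # before sending them over the wire.
    return zlib.compress(pickle.dumps(obj))

def import_object(data):
    # Requesting server: decompress, then unpickle its own copy.
    return pickle.loads(zlib.decompress(data))
```

Note the catch, though: pickle.loads still needs the Ball class
definition on the receiving side, so this alone doesn't escape the
know-the-class-beforehand problem; code for new classes would have to
travel with (or ahead of) the data.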

Has anyone done something like this? If so, do you have any
recommendations? Thanks for listening/reading the somewhat confusing
and confused rants of a newbie. :)



