Python Scalability TCP Server + Background Game

phiwer at gmail.com
Sat Jan 18 02:44:14 EST 2014


> Quick smoke test. How big are your requests/responses? You mention
> REST, which implies they're going to be based on HTTP. I would expect
> you would have some idea of the rough size. Multiply that by 50,000,
> and see whether your connection can handle it. For instance, if you
> have a 100Mbit/s uplink, supporting 50K requests/sec means your
> requests and responses have to fit within about 256 bytes each,
> including all overhead. You'll need a gigabit uplink to be able to
> handle a 2KB request or response, and that's assuming perfect
> throughput. And is 2KB enough for you?
>
> ChrisA

My assumption is that there will be mostly reads and some writes, maybe on the order of 80-20%. There is a time element in the game, which forces a player's entity to be updated on demand. This is part of the reason why I wanted the server to handle so many requests: so that it could serve the read side without any caching layer.

Let me explain a bit more about the architecture, and possible remedies, to give you an idea:

* On top are the web servers exposing a REST api.

* At the bottom is the game.

* Communication between these layers is handled by a simple text protocol over TCP (roughly sketched below).

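Just to give a feel for it, an exchange over that protocol looks something like this (illustrative only; the host, port and message format here are made up):

    import socket

    # Illustrative request/reply over the plain-text TCP protocol;
    # the wire format shown is not the real one.
    s = socket.create_connection(("game-host", 4000))
    s.sendall(b"GET_PLAYER 42\n")
    reply = s.makefile().readline()  # e.g. "PLAYER 42 tick=1083 ...\n"
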
The game has a tick function that fires every now and then and advances the game's time. When a player enters the game, a message is sent to the game server (querying for player details), and if the game's tick is greater than the tick stored with the cached player details, the game updates the player details (and caches them again).
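
In code, that lazy update is roughly this (simplified; the names are illustrative):

    # Simplified sketch of the lazy, tick-based update described above.
    def get_player_details(self, player_id):
        details = self.players.get(player_id)
        if details is None or details.tick < self.current_tick:
            # Cached state is stale: advance the player to the current
            # tick and cache the result so later reads are cheap.
            details = self.advance_player(player_id, self.current_tick)
            self.players[player_id] = details
        return details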

This design obviously has its flaws. One is that both reads and writes have to pass through the game server. One way to remedy this is to add a new layer, on top of the game, which would hold the cache. But then another problem arises: invalidating the cache when a new tick has been made. I'm leaning towards letting the cache layer check the current tick every now and then, and if a new tick is available, update a local variable in the cache (which each new connection would check against). Any thoughts regarding this?
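
Roughly what I have in mind for the cache layer, assuming a greenlet-based stack such as gevent (fetch_current_tick() stands in for a query against the game server):

    import gevent

    class TickCache(object):
        def __init__(self):
            self.known_tick = 0  # the local variable each connection checks

        def watch_ticks(self, interval=1.0):
            # Poll the game server every now and then; a newer tick
            # invalidates everything cached before it was seen.
            while True:
                tick = fetch_current_tick()  # hypothetical game-server query
                if tick > self.known_tick:
                    self.known_tick = tick
                gevent.sleep(interval)

    cache = TickCache()
    gevent.spawn(cache.watch_ticks)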

There are some periods in the game where many people will be online during the same tick, which could potentially make the game slow at times, but maybe that should be accepted for the pleasure of writing the game in Python... :D

A follow-up question (which is more to the point really): how do other Python game development frameworks solve this issue? Do they avoid greenlets for the network layer so that they can use the shared Queue from multiprocessing? Do they use a single process for both network and game operations?

On a side note, I'm considering using 0MQ for the message layer between services (web server <-> cache <-> game) on the back-end. Besides being a great messaging library, it also has built-in queues, which might remedy the situation when many clients are requesting data. Does anyone have experience with this?
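
For example, the web-server side of the web-server <-> game hop could be as small as this with pyzmq (a sketch; the address and message format are placeholders):

    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://game-host:5555")  # placeholder address

    # REQ/REP pairs each request with one reply, and 0MQ queues the
    # outgoing message internally if the game server is busy.
    sock.send(b"GET_PLAYER 42")
    reply = sock.recv()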

(This problem can be boiled down to multiple producer, single consumer, and then back to producer again.)
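
In stdlib terms the shape is something like the following (illustrative only; routing each reply back to the producer that asked is exactly the tricky part):

    from multiprocessing import Process, Queue

    def game_loop(requests, replies):
        # Single consumer: the game drains requests from all web workers.
        while True:
            player_id = requests.get()
            replies.put((player_id, "player details here"))

    requests, replies = Queue(), Queue()
    game = Process(target=game_loop, args=(requests, replies))
    game.daemon = True
    game.start()

    requests.put(42)      # any web worker can produce a request...
    print(replies.get())  # ...but with one shared reply queue, any worker
                          # might grab this reply -- the "back to producer
                          # again" part of the problem.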

Thanks for all the replies.

/Phil


