obfuscating Python code for distribution

Steven D'Aprano steve+comp.lang.python at pearwood.info
Mon May 16 04:49:12 EDT 2011


On Sun, 15 May 2011 23:41:23 -0600, Littlefield, Tyler wrote:

> Here's kind of what I want to prevent. I want to write a multi-player
> online game; everyone will essentially end up connecting to my server to
> play the game. I don't really like the idea of security through
> obscurity, but I wanted to prevent a couple of problems. 1) First I want
> to prevent people from hacking at the code, then using my server as a
> test for their new setups. I do not want someone to gain some extra
> advantage just by editing the code. Is there some other solution to
> this, short of closed-source? Thanks,

Closed source is not a solution. Please wipe that out of your mind. 
People successfully hack closed source applications. The lack of source
is hardly a barrier: it's like painting the door of your house in
camouflage colours so that from a distance people won't see it. To a guy
with a network sniffer and a debugger, it's no barrier at all.

You're trying to solve a hard problem, and by hard, I mean "impossible". 
It simply isn't possible to trust software on a machine you don't
control, and it's pretty damn hard even on a machine you do. To put it in a
nutshell, you can't trust *anything*. See the classic paper by Ken 
Thompson, "Reflections on Trusting Trust":

http://cm.bell-labs.com/who/ken/trust.html

Now, in a more practical sense, you might not fear that the operating
system, or the Python compiler, will turn on you. Some threats you don't
care about. The threat model you do care about is a much more
straightforward one: how do you trust the desktop client of your game?

Alas, the answer is, you can't. You can't trust anything that comes from 
the client until you've verified it is unmodified, and you can't verify 
it is unmodified until you can trust the information it sends you. A 
vicious circle. You're fighting physics here. Don't think that obscuring 
the source code will help.
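
To make the vicious circle concrete, here's a toy sketch (the hashing
scheme and names are made up for illustration, not something a real
server does): suppose the server asks the client to report a hash of its
own code. A hacked client simply replays the well-known hash of the
original client, and the check proves nothing.

import hashlib

# The pristine client source as shipped. A short stand-in keeps the
# sketch self-contained; imagine the real client code here.
ORIGINAL_SOURCE = b"print('hello, game')"
EXPECTED_HASH = hashlib.sha256(ORIGINAL_SOURCE).hexdigest()

def client_is_trustworthy(reported_hash):
    # Server side: this only proves the client *claimed* the right
    # hash, not that it actually ran unmodified code.
    return reported_hash == EXPECTED_HASH

def honest_client_report(running_source):
    # An honest client hashes whatever it is actually running.
    return hashlib.sha256(running_source).hexdigest()

def hacked_client_report():
    # A hacked client ignores its own (modified) code and replays the
    # publicly known hash of the original.
    return EXPECTED_HASH

assert client_is_trustworthy(honest_client_report(ORIGINAL_SOURCE))
assert client_is_trustworthy(hacked_client_report())  # cheat passes too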

On-line game servers are engaged in a never-ending arms race against 
"punks" who hack the clients. The servers find a way to detect one hack 
and block it, and the punks find another hack that goes unnoticed for a 
while. It's like anti-virus and virus, or immune systems and germs.

The question you should be asking is not "how do I make this secure 
against cheats?", but "how much cheating can I afford to ignore?".

If your answer is "No cheating is acceptable", then you have to do all 
the computation on the server, nothing on the client, and to hell with 
performance. All your client does is the user interface part.
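
As a rough sketch of that split (the movement rule and names here are
invented for illustration, not from any particular game), the client
sends only its intention, and the server owns the state and applies the
rules:

# Server-authoritative design: the client may only request an action;
# the server holds the state and enforces the rules.
MAX_STEP = 1     # players move at most one tile per request

positions = {}   # player_id -> (x, y), stored on the server

def handle_move_request(player_id, dx, dy):
    # Ignore anything the rules forbid, whatever the client claims.
    if abs(dx) > MAX_STEP or abs(dy) > MAX_STEP:
        return positions[player_id]
    x, y = positions[player_id]
    positions[player_id] = (x + dx, y + dy)  # server computes the result
    return positions[player_id]

positions["tyler"] = (0, 0)
print(handle_move_request("tyler", 1, 0))   # (1, 0) -- a legal move
print(handle_move_request("tyler", 50, 0))  # (1, 0) -- hack gets nowhere

Every action costs a round trip to the server, which is exactly the
performance price mentioned above.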

If the answer is, "It's a MUD, who's going to cheat???" then you don't
have to do anything. Trust your users. If the benefit from "cheating" is 
small enough, and the number of cheaters low, who cares? You're not 
running an on-line casino for real money.

See also here:

http://web.archiveorange.com/archive/v/bqumydkHsi2ytdsX7ewa


Another approach might be to use psychology on your users. Run one server 
for vanilla clients to connect to, and another server where anything 
goes. Let the punks get it out of their system by competing with other 
punks. Run competitions to see who can beat the most souped-up, dirty,
cheating, turbo-powered clients, for honour and glory. Name and shame the
punks who cheat on the vanilla server, praise the best cheaters on the
anything-goes machine, and you'll (hopefully!) find that the level of
cheating on the vanilla server is quite low. Who wants to be the low-life 
loser who wins by cheating when you can challenge your hacker peers 
instead?

(Note: I don't know if this approach ever works, but I know it does *not* 
work when real money or glory is involved. Not even close.)

If Blizzard can't stop private servers, rogue clients and hacked 
accounts, what makes you think you can?




-- 
Steven


