Python as network protocol

Daniel Fetchinson fetchinson at googlemail.com
Tue Nov 10 13:58:31 EST 2009


>>> This is a *really* bad idea.
>>
>> How do you know for sure? Maybe the OP wants to use this thing with 3
>> known researchers working on a cluster that is not even visible to the
>> outside world. In such a setup the model the OP suggested is a perfectly
>> reasonable one. I say this because I often work in such an environment
>> and security is never an issue for us. And I find it always amusing that
>> whenever I outline our code to a non-scientist programmer they always
>> run away in shock and never talk to us again
>
> You might be a great scientist, but perhaps you should pay attention to
> the experts on programming who tell you that this is opening a potential
> security hole in your system.

Well, I'm completely aware of the potential security threats. I'm not
overlooking them; rather, I'm weighing them according to their real
importance in our specific environment. And by the way, I'm not a
great scientist :)

However, if the environment is such that the potential risks cannot be
exploited (not even in theory), because exactly 3 people have access
to the machine, all of them are trustworthy, and the cluster on which
the code runs is not accessible from the internet, then the 'security
hole' which would be dangerous otherwise is risk-free in this case.
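
For concreteness, the kind of setup we are talking about boils down to
something like this minimal sketch (names and framing are hypothetical,
not our actual code): a tiny server that reads Python source from a
socket and exec's it.

import socket

def serve(host="127.0.0.1", port=9999):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, addr = srv.accept()
        source = conn.makefile().read()   # peer sends plain Python source
        conn.close()
        # Whoever can connect to this port runs arbitrary Python here,
        # so the only protection is controlling who can reach the port.
        exec(source, {"__name__": "__remote__"})

The whole point is that the trust boundary is the network and the
people on it, not the code.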

> No, it's not a "perfectly reasonable" tactic.

I believe it is.

> It's a risky tactic that
> only works because the environment you use it in is so limited and the
> users so trusted.

Exactly!

> Can you guarantee that will never change?

Yes. I will simply not release the code to anyone.

> If not, then you should rethink your tactic of using exec.

I agree.

> Besides, as a general rule, exec is around an order of magnitude slower
> than running code directly. If performance matters at all, you are better
> off to try to find an alternative to exec.

That is a good point, thanks. If we run into performance issues, I'll
keep it in mind.
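
If it ever does, a quick (and admittedly unscientific) way to get a
feel for the overhead on our own payloads would be something along
these lines; the payload here is a made-up stand-in:

import timeit

payload = "result = sum(range(100))"
compiled = compile(payload, "<payload>", "exec")

direct = timeit.timeit("result = sum(range(100))", number=100000)
raw = timeit.timeit(lambda: exec(payload), number=100000)    # re-parses every call
pre = timeit.timeit(lambda: exec(compiled), number=100000)   # parse once, run many

print("direct        %.3f s" % direct)
print("exec(source)  %.3f s" % raw)
print("exec(code)    %.3f s" % pre)

Most of the gap between exec on a string and running the code directly
is parsing, so compiling each incoming message once before exec'ing it
should already recover a good part of it.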


>> Nevertheless our code works perfectly for our purposes.
>
> Until the day that some manager decides that it would be great to make
> your code into a service available over the Internet, or until one of the
> other scientists decides that he really needs to access it from home, or
> somebody pastes the wrong text into the application and it blows up in
> your face

I agree. If any of those things were to happen, our software would be
pretty dangerous.

> ... it's not just malice you need to be careful of, but also accidents.

Agreed. If we mistype something (as others have suggested), it's our
fault: we know what will happen, and if we still do it, we'll fix it.
Believe it or not, so far (after about 1.5 years of operation) there
have been no typos that created problems.

> The history of computing is full of systems that were designed with no
> security because it wasn't needed, until it was needed, but it was too
> late by then.
>
> There's no need, or at least very little need, to put locks on the
> internal doors of your house, because we're not in the habit of taking
> internal doors and turning them into outside doors. But code designed to
> run inside your secure, safe network has a tendency to be re-purposed to
> run in insecure, unsafe networks, usually by people who have forgotten,
> or never knew, that they were opening up their system to code injection
> attacks.

On general grounds, you are right, of course.
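
And if this code ever did have to run outside our closed setup, the
natural alternative to raw exec would be a small whitelist of named
operations, roughly like this (all names hypothetical, just a sketch):

import json

def run_job(message):
    allowed = {
        "sum_range": lambda n: sum(range(n)),
        "echo": lambda text: text,
    }
    request = json.loads(message)
    func = allowed[request["op"]]          # unknown operations raise KeyError
    return func(*request.get("args", []))

# run_job('{"op": "sum_range", "args": [100]}') -> 4950

That trades flexibility for safety: the peers can only ask for things
the server already knows how to do.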

My point is that hacking can still be a fun and easy-going activity
when one writes code for oneself (almost) without regard to security
and other nasty things creeping in from the outside. I'm the king in
my castle, although I'm fully aware that my castle might look ugly
from the outside :)

Cheers,
Daniel


-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown


