[Python-Dev] Challenge: Please break this! (a.k.a restricted mode revisited)

Chris Angelico rosuav at gmail.com
Tue Apr 12 06:27:14 EDT 2016


On Tue, Apr 12, 2016 at 8:06 PM, Jon Ribbens
<jon+python-dev at unequivocal.co.uk> wrote:
> On Tue, Apr 12, 2016 at 06:57:37PM +1000, Chris Angelico wrote:
>> The sandbox code assumes that an attacker cannot create files in the
>> current directory.
>
> If the attacker can create such files then the system is already
> compromised even if you're not using any sandboxing system, because
> you won't be able to trust any normal imports from your own code.

Just confirming that, yeah. Though you could protect against it
somewhat by pre-importing everything that can legally be imported;
that way, at least that attack window is closed by the time untrusted
code starts executing. Consider it a privilege escalation attack: you
can move from "create file in current directory" to "remote code
execution" simply by creating hashlib.py and then importing it.

>> Setting LC_ALL and then working with calendar.LocaleTextCalendar()
>> causes locale files to be read.
>
> I don't think that has any obvious relevance. Doing "import enum"
> causes "enum.py" to be read too, and that isn't a security hole.

I mean the system locale files, not just locale.py itself. If nothing
else, it's a means of discovering info about the system. I don't know
what you can get by figuring out what locales are installed, but it's
another concern to think about.
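For reference, the pattern in question looks something like this
(de_DE.UTF-8 is just an example; whether it works, or which error you
get if it doesn't, already tells you something about the host):

    import calendar
    import os

    # Pointing LC_ALL at a locale and then formatting with
    # LocaleTextCalendar makes the calendar module call
    # locale.setlocale(LC_TIME, ""), which reads that locale's data
    # from the system's locale files.
    os.environ["LC_ALL"] = "de_DE.UTF-8"    # assumes this locale is installed

    cal = calendar.LocaleTextCalendar(locale="")   # "" = take it from the environment
    print(cal.formatmonthname(2016, 4, 20))        # e.g. "April 2016", localised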

>> This is still a massive game of whack-a-mole.
>
> No, it still isn't. If the names blacklist had to keep being extended
> then you would be right, but that hasn't happened so far. Whitelists
> by definition contain only a small, limited number of potential moles.
>
> The only thing you found above that even remotely approaches an
> exploit is the decimal.getcontext() thing, and even that I don't
> think you could use to do any code execution.

decimal.getcontext() is a simple and obvious example of a way that
global mutable objects can be accessed across the boundary. There is
no way to mathematically prove that there are no more, so it's still a
matter of blacklisting.
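A quick sketch of what I mean (this assumes the untrusted code runs in
the same thread as the host code, so they share the current decimal
context):

    import decimal

    # --- inside the sandbox: only "safe" decimal calls ---
    ctx = decimal.getcontext()                  # the shared, mutable context
    ctx.prec = 2                                # shrink the global precision
    ctx.traps[decimal.DivisionByZero] = False   # disable a trap

    # --- host code later, same thread ---
    print(decimal.Decimal(1) / decimal.Decimal(3))  # '0.33' instead of 28 digits
    print(decimal.Decimal(1) / decimal.Decimal(0))  # 'Infinity' instead of an exception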

I still think you need to work out a "minimum viable set" and set down
some concrete rules: if any feature in this set has to be blacklisted
in order to achieve security, the experiment has failed.

ChrisA

