[Python-ideas] PEP 504: Using the system RNG by default

Nick Coghlan ncoghlan at gmail.com
Wed Sep 16 10:34:35 CEST 2015


On 16 September 2015 at 17:43, David Mertz <mertz at gnosis.cx> wrote:
>
> On Sep 15, 2015 11:00 PM, "Nick Coghlan" <ncoghlan at gmail.com> wrote:
>> "But *why* can't I use the random module for security sensitive
>> tasks?" argument as it is at anything else. I'd like the answer to
>> that question to eventually be "Sure, you can use the random module
>> for security sensitive tasks, so let's talk about something more
>> important, like why you're collecting and storing all this sensitive
>> personally identifiable information in the first place".
>
> I believe this attitude makes overall security WORSE, not better. Giving a
> false assurance that simply using a certain cryptographic building block
> makes your application secure makes it less likely that applications will
> undergo genuine security analysis.
>
> Hence I affirmatively PREFER a random module that explicitly proclaims that
> it is non-cryptographic.  Someone who figures out enough to use
> random.SystemRandom, or a future crypto.random, or the like is more likely
> to think about why they are doing so, and what doing so does and does NOT
> assure them of.

You're *describing the status quo*. This isn't a new concept; it's
the way our industry has worked since forever:

1. All the security features are off by default
2. The onus is on individual developers to "just know" when the work
they're doing is security sensitive
3. Once they realise what they're doing is security sensitive
(probably because a security engineer pointed it out), the onus is
*still* on them to educate themselves as to what to do about it (see
the sketch below)
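
To make step 3 concrete: today, "educating themselves" means
discovering on their own that the module-level random functions are
the wrong tool and switching to something like the sketch below (the
generate_token helper is purely illustrative; random.SystemRandom is
the actual stdlib API, backed by os.urandom()):

    import random
    import string

    # The module-level functions in random share a Mersenne Twister
    # instance - fine for simulations, unsuitable for secrets.
    # SystemRandom draws from the operating system's CSPRNG via
    # os.urandom() instead.
    _sys_random = random.SystemRandom()

    def generate_token(length=32):
        """Return an unpredictable token suitable for session IDs etc."""
        alphabet = string.ascii_letters + string.digits
        return ''.join(_sys_random.choice(alphabet) for _ in range(length))

The trap is that the obvious spelling - random.choice() at module
level - works just as well functionally, while silently handing them
the deterministic PRNG.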

Meanwhile, their manager is pointing at the project schedule,
demanding to know why the new feature hasn't shipped yet, and they're
in turn pointing fingers at the information security team, blaming
them for blocking the release until the security vulnerabilities have
been addressed.

And that's the *good* scenario, since the only people it upsets are
the people working on the project. In the more typical cases where the
security team doesn't exist, gets overruled, or simply has too many
fires to try to put out, we get
http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/
and http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

On the community project side, we take the manager, the project
schedule and the information security team out of the picture, so
folks never even get to find out that there are any problems with the
approach they're taking - they just ship and deploy software, and are
mostly protected by the lack of money involved: companies and
governments are far more interesting targets than online communities,
so open source projects mainly need to worry about protecting the
software distribution infrastructure that provides an indirect attack
vector on those more profitable targets.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

