Make a unique filesystem path, without creating the file

Steven D'Aprano steve at pearwood.info
Tue Feb 23 20:40:07 EST 2016


On Tue, 23 Feb 2016 05:54 pm, Marko Rauhamaa wrote:

> Steven D'Aprano <steve at pearwood.info>:
> 
>> On Tue, 23 Feb 2016 06:32 am, Marko Rauhamaa wrote:
>>> Under Linux, /dev/random is the way to go when strong security is
>>> needed. Note that /dev/random is a scarce resource on ordinary
>>> systems.
>>
>> That's actually incorrect, but you're not the only one to have been
>> misled by the man pages.
>>
>> http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
> 
> Still, mostly hypnotic repetitions.

Repetition for the sake of emphasis, because there are so many misled and
confused people on the internet who misunderstand the difference between
urandom and random and consequently give bad advice. I believe that the
Linux man page for urandom is to blame, although I don't know why it hasn't
been fixed.

Possibly because it is *technically* correct, in the sense of "if you are
concerned by the risk of being hit by a meteorite, wearing a stainless
steel cooking pot on your head will give you some protection from meteorite
strikes to the head". Everything in it is technically correct, but
misleading.


> However, it admits:
> 
>    But /dev/random also tries to keep track of how much entropy remains
>    in its kernel pool, and will occasionally go on strike if it decides
>    not enough remains.
> 
> That's the whole point. 

Exactly, but you've missed the point. That is precisely why the blocking
/dev/random is HARMFUL and should not be used. There is one, and only one,
scenario in which your CSPRNG should block: before the system has gathered
enough entropy to securely seed the CSPRNG, while it is still at risk of
returning predictable numbers.
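
That one legitimate blocking behaviour is exactly what the newer
getrandom(2) syscall provides. A minimal sketch (assuming Python 3.6+
for os.getrandom; older Pythons fall back to os.urandom):

```python
import os

# os.getrandom (Python 3.6+, wrapping Linux's getrandom(2) syscall,
# kernel 3.17+) blocks only until the kernel pool has been seeded
# once at boot, and never again after that.
if hasattr(os, "getrandom"):
    key = os.getrandom(32)   # may block only before first seeding
else:
    key = os.urandom(32)     # portable fallback, never blocks

print(len(key))
```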

But after that point has passed, there is no test you can perform to
distinguish the outputs of /dev/random and /dev/urandom (apart from the
blocking behaviour itself). If I give you a million numbers, there is no
way you can tell whether I used random or urandom.
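
You can see this for yourself with a crude statistical check (a sketch,
assuming a Unix system exposing /dev/urandom):

```python
import os

def sample(device, nbytes=4096):
    """Read nbytes of raw output from a kernel random device."""
    with open(device, "rb") as f:
        return f.read(nbytes)

# For a good CSPRNG the mean byte value sits near 127.5.  When it
# isn't blocking, /dev/random passes exactly the same checks, because
# the same generator feeds both devices.
data = sample("/dev/urandom")
mean = sum(data) / len(data)
print(round(mean, 1))
```

Any statistical test you care to run gives the same verdict for both
devices; only the blocking behaviour differs.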

The important thing here is that there is no difference in "quality"
(whatever that means!) between the random numbers generated by urandom and
those generated by random. They are equally unpredictable. They pass the
same randomness tests. Neither is "better" or "worse" than the other,
because they are both generated by the same CSPRNG or HRNG.

Here is a summary of the random/urandom distinction on various Unixes:


Linux: 
    random blocks, urandom never blocks, both use the same CSPRNG 
    based on SHA-1 hashes, both will use a HRNG if available

FreeBSD:
    urandom is a link to random, which never blocks; uses 256-bit 
    Yarrow CSPRNG, will use a HRNG if available

OpenBSD:
    both never block; both use a variant of the RC4 CSPRNG 
    (misleadingly renamed ARC4 due to licensing issues); newer
    versions use the ChaCha20 CSPRNG

OS X:
    both never block and use 160-bit Yarrow

NetBSD:
    random blocks, urandom never blocks, both use the same AES-128
    CSPRNG


The NetBSD man pages are quite scathing:


    "The entropy accounting described here is not grounded in any
    cryptography theory.  It is done because it was always done, 
    and because it gives people a warm fuzzy feeling about 
    information theory.

    ...

    History is littered with examples of broken entropy sources and
    failed system engineering for random number generators.  Nobody 
    has ever reported distinguishing AES ciphertext from uniform 
    random without side channels, nor reported computing SHA-1 
    preimages faster than brute force.  The folklore information-
    theoretic defence against computationally unbounded attackers 
    replaces system engineering that successfully defends against 
    realistic threat models by imaginary theory that defends only 
    against fantasy threat models."


To be clear, the "folklore information-theoretic defence" they are referring
to is /dev/random's blocking behaviour.

http://netbsd.gw.com/cgi-bin/man-cgi?rnd+4+NetBSD-current


The blocking behaviour of /dev/random (on Linux) doesn't solve any real
problems, but it *creates* new problems. /dev/random can block for minutes
or even hours, especially straight after booting a freshly installed OS.
This can be considered a denial-of-service attack, and even if it isn't, it
encourages developers to "fix" the problem by using their own home-brewed
random numbers, weakening the security of the system.
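
The fix for that temptation is trivial: the kernel CSPRNG is already
there, via os.urandom, and it never blocks after boot. A sketch of the
kind of helper people home-brew badly (make_token is a hypothetical
name, not a stdlib function):

```python
import binascii
import os

def make_token(nbytes=16):
    """Hex token straight from the kernel CSPRNG.

    No home-brewed PRNG, no seeding code to get wrong, and no
    blocking once the system is up.
    """
    return binascii.hexlify(os.urandom(nbytes)).decode("ascii")

print(make_token())   # 32 hex digits
```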

There's even a minority viewpoint that constantly adding new entropy to the
CSPRNG is useless. Apart from collecting sufficient entropy for the initial
seed, you should never add new entropy to the CSPRNG. Your CSPRNG is either
cryptographically strong, or it isn't. If it is, then it is already
unpredictable and adding more entropy is a waste of time. If it isn't, then
adding more entropy isn't going to help you.
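
The "strong or it isn't" argument is easiest to see with a toy
counter-mode generator (purely illustrative, not a vetted construction;
real code should just read the kernel CSPRNG): once the seed is
unguessable, predicting the next block means beating SHA-256, and
stirring in extra entropy doesn't change that.

```python
import hashlib

class ToyDRBG:
    """Toy deterministic generator: output block i is SHA-256(seed || i).

    Illustrative sketch only.  Its security rests entirely on the seed
    being secret and the hash being strong; nothing else matters.
    """
    def __init__(self, seed):
        self.seed = seed
        self.counter = 0

    def read(self, nbytes):
        out = b""
        while len(out) < nbytes:
            block = self.seed + self.counter.to_bytes(8, "big")
            out += hashlib.sha256(block).digest()
            self.counter += 1
        return out[:nbytes]

gen = ToyDRBG(b"32 bytes of boot-time entropy...")
print(len(gen.read(100)))   # 100
```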

Adding entropy is just one more component that can contain bugs (see the
NetBSD comment about "broken entropy sources") or even allow an attack on
the CSPRNG:

http://blog.cr.yp.to/20140205-entropy.html

There's one good argument for adding new entropy: imagine an attacker who,
somehow, manages to get access to the internal state of your CSPRNG, but
nothing else. (If they have access to your whole system, they own you, and
the game is over.) In this scenario, instead of being able to predict the
random numbers you produce forever, they will only be able to predict them
until you've added sufficient fresh entropy into the system, assuming they
can't read or modify the entropy going in.
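
That recovery argument can be made concrete with a toy state-compromise
sketch (illustrative construction only; the step function is a made-up
example, not how any real kernel works):

```python
import hashlib
import os

def step(state):
    """Toy generator step: derive one output block and the next state."""
    out = hashlib.sha256(b"out" + state).digest()
    new_state = hashlib.sha256(b"state" + state).digest()
    return out, new_state

state = hashlib.sha256(b"boot-time seed").digest()
stolen = state                       # attacker snapshots the state here

out1, state = step(state)
guess1, stolen = step(stolen)
print(out1 == guess1)                # True: attacker predicts everything

# mix in fresh entropy the attacker cannot observe
state = hashlib.sha256(state + os.urandom(32)).digest()

out2, state = step(state)
guess2, stolen = step(stolen)
print(out2 == guess2)                # False: prediction breaks down
```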


> /dev/random will rather block the program than 
> lower the quality of the random numbers below a threshold. /dev/urandom
> has no such qualms.

No, that's wrong. There is no "lower the quality below a threshold".
Both /dev/random and /dev/urandom use the same CSPRNG on all the Unixes I
know of. So long as the CSPRNG has been seeded with enough entropy at the
start, they are both of equally good quality. If the OS supports a HRNG,
both will use it. On at least one Unix, FreeBSD, the two block devices are
literally identical, and one is a link to the other.


>    If you use /dev/random instead of urandom, your program will
>    unpredictably (or, if you’re an attacker, very predictably) hang when
>    Linux gets confused about how its own RNG works.
> 
> Yes, possibly indefinitely, too.
> 
>    Using /dev/random will make your programs less stable, but it won’t
>    make them any more cryptographically safe.
> 
> It is correct that you shouldn't use /dev/random as a routine source of
> bulk random numbers. It is also correct that /dev/urandom depletes the
> entropy pool as effectively as /dev/random. 

Correct so far. But then:

> However, when you are 
> generating signing or encryption keys, you should use /dev/random.

And that is where you repeat something which is rank superstition.


> As stated in <URL: https://lwn.net/Articles/606141/>:
> 
>    /dev/urandom should be used for essentially all random numbers
>    required, but /dev/random is sometimes used for things like extremely
>    sensitive, long-lived keys (e.g. GPG) or one-time pads.

That doesn't mean anything beyond a statement of fact that people sometimes
use /dev/random. Yeah, okay. So what? People sometimes use urandom for the
same purposes too. I'm sure that, somewhere in the world, some poor doofus
has used a lousy 16-bit PRNG to generate his GPG key. That doesn't mean it
is good, or necessary, to do so.



-- 
Steven



