I have a hard time following the lines of thought in a recent LKML thread on what to do about prolonged (unbounded?) blocking of a getrandom call at system initialization.
The crux of the problem is that the kernel neither gathers nor generates enough entropy for an early getrandom call to succeed in a timely manner.
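For context, this is roughly the call at issue; a minimal sketch, assuming glibc 2.25 or later, which wraps the raw syscall in sys/random.h:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/random.h>

int main(void)
{
    unsigned char buf[16];

    /* GRND_NONBLOCK probes: it fails fast with EAGAIN while the
     * kernel's entropy pool is still uninitialized. */
    if (getrandom(buf, sizeof(buf), GRND_NONBLOCK) < 0 && errno == EAGAIN)
        fprintf(stderr, "entropy pool not yet initialized\n");

    /* With flags == 0 the call blocks until the pool is initialized --
     * on an entropy-starved early boot, potentially forever, which is
     * the behavior the thread is arguing about. */
    if (getrandom(buf, sizeof(buf), 0) < 0) {
        fprintf(stderr, "getrandom: %s\n", strerror(errno));
        return 1;
    }
    return 0;
}
```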
Linus Torvalds was, incomprehensibly to me, arguing in that thread for breaking userspace by changing the behavior of getrandom to error out with EINVAL. (Who stops Linus from breaking userspace?)
Arguably, it's the kernel that is at fault for not delivering enough entropy to userspace, even in early userspace. After all, (void *)getauxval(AT_RANDOM) in an init process already yields a pointer to 128 bits of randomness. (A C library may even make use of it to initialize things like the __stack_chk_guard value.)
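Reading those bytes takes little more than one call; a minimal sketch (getauxval has been in glibc since 2.16, and AT_RANDOM points at 16 bytes the kernel places on the new process's stack at exec time):

```c
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
    /* The kernel hands every process 16 random bytes through the
     * auxiliary vector; getauxval returns their address. */
    const unsigned char *r = (const unsigned char *)getauxval(AT_RANDOM);
    if (r == NULL)
        return 1;

    for (int i = 0; i < 16; i++)
        printf("%02x", r[i]);
    putchar('\n');
    return 0;
}
```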
Why can't the kernel take (those?) 128 bits to initialize a CSPRNG to be used by the kernel implementations of getrandom(..., 0) and /dev/urandom, enabling those interfaces to spew out whatever amount of pseudo-random data is requested?
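To make the idea concrete, here is a userspace sketch of such a CSPRNG, stretching a 128-bit seed into arbitrary output by running ChaCha20 from OpenSSL's EVP API as a keystream generator. This is only an illustration of the principle, not the kernel's actual construction:

```c
#include <stdlib.h>
#include <string.h>
#include <sys/auxv.h>
#include <openssl/evp.h>

/* Stretch a 128-bit seed into len pseudo-random bytes by running
 * ChaCha20 as a keystream generator (encrypting a buffer of zeros). */
static int csprng_fill(const unsigned char seed[16],
                       unsigned char *out, size_t len)
{
    /* ChaCha20 takes a 256-bit key; here the 128-bit seed is naively
     * duplicated. A real design would expand it with a proper KDF. */
    unsigned char key[32];
    unsigned char iv[16] = {0};          /* block counter and nonce */
    memcpy(key, seed, 16);
    memcpy(key + 16, seed, 16);

    unsigned char *zeros = calloc(1, len);
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int n, ok = zeros != NULL && ctx != NULL
             && EVP_EncryptInit_ex(ctx, EVP_chacha20(), NULL, key, iv) == 1
             && EVP_EncryptUpdate(ctx, out, &n, zeros, (int)len) == 1;
    EVP_CIPHER_CTX_free(ctx);
    free(zeros);
    return ok ? 0 : -1;
}

int main(void)
{
    const unsigned char *seed =
        (const unsigned char *)getauxval(AT_RANDOM);
    unsigned char buf[4096];

    /* From 128 bits of boot-time entropy, any amount of output. */
    return seed != NULL && csprng_fill(seed, buf, sizeof buf) == 0 ? 0 : 1;
}
```

Build with -lcrypto. Duplicating the seed into the key is only there to keep the sketch short; the point is merely that 128 unpredictable bits suffice to generate arbitrarily much pseudo-randomness.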
Apparently OpenBSD does not have this problem. Its arc4random_buf function always returns the requested randomness, with no room for error. So I'm guessing the OpenBSD kernel can always gather enough entropy to then initialize a CSPRNG, either in-kernel or on the userspace side.
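The interface is as simple as it gets, as in this sketch (on OpenBSD the function is declared in stdlib.h; the return type is void, so there is nothing to check):

```c
#include <stdio.h>
#include <stdlib.h>   /* arc4random_buf lives here on OpenBSD */

int main(void)
{
    unsigned char buf[16];

    /* No return value to check: the call cannot fail and never
     * blocks; the library (re)seeds itself from the kernel. */
    arc4random_buf(buf, sizeof(buf));

    for (size_t i = 0; i < sizeof(buf); i++)
        printf("%02x", buf[i]);
    putchar('\n');
    return 0;
}
```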
What were the choices that led OpenBSD to having an error-free function? And could the Linux kernel/userspace adopt those or similar choices for getrandom?