Currently I'm analyzing the entropy-generation process of a 64-bit Linux kernel during system startup (for educational purposes). The system is hosted on a 64-bit virtual machine (Xen domU). For a deeper analysis, I'm tracking the state of the relevant input parameters, i.e. how they are processed. In the function 'add_interrupt_randomness' I found some code whose intention is not comprehensible to me: the handling of 'cycles' (a value provided by the CPU cycle counter) and 'now' (jiffies). Both are unsigned 64-bit values and are processed as follows:
c_high = (sizeof(cycles) > 4) ? cycles >> 32 : 0;
j_high = (sizeof(now) > 4) ? now >> 32 : 0;
fast_pool->pool[0] ^= cycles ^ j_high ^ irq;
So c_high/j_high (__u32) are assigned the upper 32 bits of cycles/now and then XORed into the fast entropy pool. Hence a maximum of variation in the values provided by c_high and j_high should be desirable(?). But since c_high and j_high are based on cycles and now/jiffies, which are monotonically incremented counters, there is very little (or no) variation in the upper 32 bits, as the traced values reveal:
Values in call no. 1 of 'add_interrupt_randomness':
cycles:0xFFFEA432A6C2CB89
c_high:0xFFFEA432
now_jiffies:0x00000000FFFEDB0A
j_high:0x00000000
Values in call no. 4265* of 'add_interrupt_randomness':
cycles:0xFFFEA43FBA85B313
c_high:0xFFFEA43F
now_jiffies:0x00000000FFFEE80C
j_high:0x00000000
*(startup is completed at this point)
So my question is: why are the upper 32 bits processed instead of the lower ones, which would provide more randomness? Thanks for enlightenment!
If interested: this is the complete definition of 'add_interrupt_randomness':
void add_interrupt_randomness(int irq, int irq_flags)
{
    struct entropy_store *r;
    struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
    struct pt_regs *regs = get_irq_regs();
    unsigned long now = jiffies;
    cycles_t cycles = random_get_entropy();
    __u32 c_high, j_high;
    __u64 ip;
    unsigned long seed;
    int credit = 0;

    if (cycles == 0)
        cycles = get_reg(fast_pool, regs);
    c_high = (sizeof(cycles) > 4) ? cycles >> 32 : 0;
    j_high = (sizeof(now) > 4) ? now >> 32 : 0;
    fast_pool->pool[0] ^= cycles ^ j_high ^ irq;
    fast_pool->pool[1] ^= now ^ c_high;
    ip = regs ? instruction_pointer(regs) : _RET_IP_;
    fast_pool->pool[2] ^= ip;
    fast_pool->pool[3] ^= (sizeof(ip) > 4) ? ip >> 32 :
        get_reg(fast_pool, regs);

    fast_mix(fast_pool);
    add_interrupt_bench(cycles);

    if (!crng_ready()) {
        if ((fast_pool->count >= 64) &&
            crng_fast_load((char *) fast_pool->pool,
                           sizeof(fast_pool->pool))) {
            fast_pool->count = 0;
            fast_pool->last = now;
        }
        return;
    }

    if ((fast_pool->count < 64) &&
        !time_after(now, fast_pool->last + HZ))
        return;

    r = &input_pool;
    if (!spin_trylock(&r->lock))
        return;

    fast_pool->last = now;
    __mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool));

    /*
     * If we have architectural seed generator, produce a seed and
     * add it to the pool. For the sake of paranoia don't let the
     * architectural seed generator dominate the input from the
     * interrupt noise.
     */
    if (arch_get_random_seed_long(&seed)) {
        __mix_pool_bytes(r, &seed, sizeof(seed));
        credit = 1;
    }

    spin_unlock(&r->lock);
    fast_pool->count = 0;

    /* award one bit for the contents of the fast pool */
    credit_entropy_bits(r, credit + 1);
}
from: https://elixir.bootlin.com/linux/v4.15.6/source/drivers/char/random.c#L1118