Random numbers from the Linux kernel

The Linux kernel provides two character devices that supply random data:

  • /dev/random
  • /dev/urandom

In this article I would like to shed some light on their usability.

Getting data

In a first step I completely ignore their mode of operation and concentrate on their output alone. Running something like

dd if=/dev/random of=random.1k bs=1 count=1000

for 1k, 10k and 100k with kernels 2.6.25, 2.6.32 and 3.2.0 on both /dev/random and /dev/urandom should result in 18 files with data (for /dev/urandom I additionally collected 1 MB samples).
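
A minimal sketch of the collection loop, run once per machine; the file names here are illustrative, my actual files use the 1k/10k/100k suffixes shown in the table further below:

for dev in random urandom; do
    for count in 1000 10000 100000; do
        # bs=1 forces single-byte reads; $(uname -r) tags the file with the kernel version
        dd if=/dev/$dev of=kernel-$(uname -r)-$dev.$count bs=1 count=$count
    done
done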

Unfortunately, creating these files takes some time:

------------------------------------------------------------------
Kernel 2.6.25
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00850279 seconds, 118 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.0789053 seconds, 127 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.804752 seconds, 124 kB/s

/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 188.884 seconds, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 6109.87 seconds, 0.0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 80509.7 seconds, 0.0 kB/s

------------------------------------------------------------------
Kernel 2.6.32
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.0110714 s, 90.3 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.111313 s, 89.8 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 1.10515 s, 90.5 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 11.1315 s, 89.8 kB/s

/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 62.7861 s, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 41183.7 s, 0.0 kB/s

------------------------------------------------------------------
Kernel 3.2.0
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00949647 s, 105 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.127453 s, 78.5 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.988632 s, 101 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 10.3351 s, 96.8 kB/s

/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 188.692 s, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 1238.26 s, 0.0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 49876.2 s, 0.0 kB/s

So as a first result, not every machine is able to create enough random numbers from /dev/random within a reasonable timeframe: reading 100 kB took between roughly 14 hours (kernel 3.2.0) and 22 hours (kernel 2.6.25), and on kernel 2.6.32 even the 10 kB read took more than 11 hours. /dev/urandom, in contrast, delivers a steady 80-130 kB/s on all three kernels.
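
The reason for this difference is the blocking behaviour of /dev/random: it stops returning data as soon as the kernel's entropy estimate for its pool is used up, while /dev/urandom keeps producing output from the same pool even when the estimate reaches zero. The kernel exports the current estimate via procfs:

# entropy the kernel currently estimates it holds, in bits; near 0 means /dev/random blocks
cat /proc/sys/kernel/random/entropy_avail

# total size of the pool, in bits
cat /proc/sys/kernel/random/poolsize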

Quality of data

In order to analyse the quality of all this data, I am using the ent program from the Debian package of the same name (see the example invocation after the list). For each input file it reports:

  • entropy (8 bits per byte for perfectly random data)
  • optimum compression rate (0% for random data)
  • arithmetic mean of the data bytes (127.5 for random data)
  • error of the Monte Carlo estimate of Pi (0% for random data)
  • serial correlation coefficient (0.0 for random data)
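
A sketch of the invocation; I am assuming the -t switch (terse CSV output) of the Debian ent, which is convenient for collecting the numbers into the table below:

# human-readable report for a single sample
ent kernel-3.2.0-urandom.100k

# terse CSV output, one line per file
for f in kernel-* ekg.*; do echo -n "$f,"; ent -t "$f" | tail -n 1; done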

I will compare the output of the device files with data taken from a real hardware random number generator: an Entropy Key from Simtec Electronics. This device is very well supported under Linux and has its own package in Debian.
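
As a quick plausibility check: with the key plugged in and its daemon feeding the kernel pool (the Debian package is called ekeyd, if I recall the name correctly), the blocking reads from /dev/random above should finish in seconds rather than hours:

# with a hardware RNG replenishing the pool, this read no longer stalls
time dd if=/dev/random of=/dev/null bs=1 count=1000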

filename                      bytes  entropy   compression [%]  mean value  Pi error [%]  serial correlation
kernel-2.6.25-random.1k        1000  7.805056  2                122.2400    1.82          -0.009945
kernel-2.6.25-random.10k      10000  7.982797  0                128.3138    0.19           0.019648
kernel-2.6.25-random.100k    100000  7.998234  0                127.4827    0.20          -0.003984
kernel-2.6.25-urandom.1k       1000  7.809595  2                127.0670    5.85          -0.014163
kernel-2.6.25-urandom.10k     10000  7.983436  0                126.2238    2.18           0.002686
kernel-2.6.25-urandom.100k   100000  7.998327  0                127.5812    0.42           0.000916
kernel-2.6.25-urandom.1m    1000000  7.999830  0                127.4333    0.02          -0.000883
kernel-2.6.32-random.1k        1000  7.793478  2                128.0320    2.01          -0.012834
kernel-2.6.32-random.10k      10000  7.981851  0                126.4048    0.04          -0.010927
kernel-2.6.32-urandom.1k       1000  7.816192  2                125.9590    4.31           0.018713
kernel-2.6.32-urandom.10k     10000  7.981499  0                127.1751    1.03          -0.004903
kernel-2.6.32-urandom.100k   100000  7.998210  0                127.8787    1.20           0.002980
kernel-2.6.32-urandom.1m    1000000  7.999809  0                127.4078    0.04           0.001283
kernel-3.2.0-random.1k         1000  7.821790  2                129.0400    3.36           0.039336
kernel-3.2.0-random.10k       10000  7.983251  0                127.3707    1.72          -0.008163
kernel-3.2.0-random.100k     100000  7.998081  0                127.1491    0.23          -0.005288
kernel-3.2.0-urandom.1k        1000  7.786879  2                127.8330    5.85          -0.067891
kernel-3.2.0-urandom.10k      10000  7.981188  0                127.8730    3.02          -0.006596
kernel-3.2.0-urandom.100k    100000  7.998690  0                127.6806    0.09          -0.001264
kernel-3.2.0-urandom.1m     1000000  7.999840  0                127.4574    0.17          -0.000390
ekg.1k                         1000  7.829743  2                127.0100    4.89          -0.021692
ekg.10k                       10000  7.982590  0                128.5926    0.96          -0.018205
ekg.100k                     100000  7.998119  0                127.3629    0.52           0.000735
ekg.1m                      1000000  7.999819  0                127.4686    0.22          -0.001522

So as a first conclusion, there is not much difference between /dev/urandom, /dev/random and the Entropy Key (the ekg.* files). Up to now I would not mind using /dev/urandom, for example to generate short-lived session keys for an HTTPS connection. But I am sure there are more sophisticated tests for assessing the quality of randomness, so expect more to come on this blog. Of course, any hints and tips are always welcome.
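
To make the session-key example concrete, here is a minimal sketch of drawing 128 bits of key material from /dev/urandom; purely illustrative, a real TLS stack handles this internally:

# read 16 bytes (128 bits) from /dev/urandom and print them as hex
head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo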