The Linux kernel provides two devices that produce random data: /dev/random and /dev/urandom.
In this article I would like to shed some light on their usability.
Getting data
As a first step I will ignore their mode of operation entirely and concentrate only on their output. Running something like
dd if=/dev/random of=random.1k bs=1 count=1000
for 1 kB, 10 kB and 100 kB with kernels 2.6.25, 2.6.32 and 3.2.0 on both /dev/random and /dev/urandom should result in 18 files of data.
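To run all combinations in one go, a small shell loop along the following lines can be used on each kernel (a sketch; the file names and the size list are my own choice, and the /dev/random runs may block for a very long time):

# one output file per device and size; /dev/random can take hours
for dev in urandom random; do
    for count in 1000 10000 100000; do
        dd if=/dev/$dev of=$dev.$count bs=1 count=$count
    done
done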
Unfortunately creating these files takes some time:
------------------------------------------------------------------
Kernel 2.6.25
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00850279 seconds, 118 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.0789053 seconds, 127 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.804752 seconds, 124 kB/s
/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 188.884 seconds, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 6109.87 seconds, 0.0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 80509.7 seconds, 0.0 kB/s
------------------------------------------------------------------
Kernel 2.6.32
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.0110714 s, 90.3 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.111313 s, 89.8 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 1.10515 s, 90.5 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 11.1315 s, 89.8 kB/s
/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 62.7861 s, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 41183.7 s, 0.0 kB/s
------------------------------------------------------------------
Kernel 3.2.0
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00949647 s, 105 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.127453 s, 78.5 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.988632 s, 101 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 10.3351 s, 96.8 kB/s
/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 188.692 s, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 1238.26 s, 0.0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 49876.2 s, 0.0 kB/s
------------------------------------------------------------------
So as a first result: not every machine is able to deliver random data from /dev/random within a reasonable time frame, while /dev/urandom provides data at a much higher rate (around 100 kB/s in these runs).
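For reference, the number of entropy bits the kernel currently holds can be read from the proc file system; on a machine with little I/O and user activity this value tends to stay small, which is why reads from /dev/random block. The paths below are the standard kernel interface, not something specific to my setup:

# current entropy estimate and maximum pool size, in bits
cat /proc/sys/kernel/random/entropy_avail
cat /proc/sys/kernel/random/poolsize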
Quality of data
In order to analyse the quality of this data, I am using the tool from the Debian package ent; an example invocation follows the list below. The program reports:
- entropy (8 bits per byte is random)
- compression rate (0% is random)
- arithmetic mean value of data bytes (127.5 is random)
- error of Monte Carlo value for Pi (0% is random)
- serial correlation coefficient (0.0 is random)
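Assuming the files created above and the ent binary from the Debian package, a minimal invocation looks like this:

# print the full ent report for every generated file
for f in random.* urandom.*; do
    echo "== $f =="
    ent "$f"
done

ent also has a -t switch for terse, comma-separated output, which is convenient for collecting the figures into a table.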
I will compare the output of the device files with data extracted from a real random number generator: an Entropy Key from Simtec Electronics. This device is well supported in Linux and has its own package in Debian.
So as a first conclusion, there is not much difference between /dev/urandom, /dev/random and the Entropy Key. Up to now I would not mind using /dev/urandom, for example to generate short-lived session keys for an HTTPS connection. But I am sure there are more sophisticated tests for assessing the quality of randomness, so expect more to come on this blog. Of course, any hints and tips are always welcome.
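As a small illustration of the session-key use mentioned above, key material can simply be read from /dev/urandom (a sketch only; a real TLS stack derives its session keys itself):

# read 32 bytes (256 bits) from /dev/urandom and print them as hex
head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo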