Fun with puppet: Is puppet really running?

I am using puppet to configure most of my machines. Unfortunately I am not perfect and sometimes introduce errors into my modules. Of course I only test such modules on machines that are not affected by the error. On an affected machine puppet starts its run, works through some modules, detects the error and stops. So sometimes I have a happily running puppet that only does half of the tasks it should do. Using stages in puppet I can hopefully detect such situations.

First I define the stages in my manifests/nodes.pp:

stage { 'start':
  before => Stage['main'],
}
stage { 'last': }
Stage['main'] -> Stage['last']

class { 'createstamp':
  stage => 'last',
}

class { 'resolv_conf':
  stage => 'start',
}

I have one stage start that is executed at the beginning and one stage last that runs when everything else is done. Everything else runs in the stage main.
At the moment the only module in the start stage is resolv_conf, because DNS should always work as expected. The only module in the last stage is createstamp, which just creates a temporary file containing a time stamp.


class createstamp {
  file { 'stamp':
    path   => '/usr/local/nagios/createStamp',
    ensure => file,
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => [
      'puppet:///modules/createstamp/stamp',
    ],
  }
}

The file in this module will be created on the puppetmaster with a cronjob that runs every two hours:

#!/bin/bash
# write the number of seconds since 2000-01-01 UTC into the module's file directory
STAMPFILE=/etc/puppet/code/environments/production/modules/createstamp/files/stamp
s2000=$(date +%s --date="Jan 1 00:00:00 UTC 2000")
now=$(date +%s)
echo $((now - s2000)) > "$STAMPFILE"
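
The script still has to be scheduled somewhere; a crontab entry along these lines would do (the path and file name are my assumptions, the two-hour interval is the one mentioned above):

```shell
# /etc/cron.d/createstamp -- hypothetical entry on the puppetmaster
0 */2 * * * root /usr/local/sbin/create-stamp.sh
```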

Now I just have to check this file with nagios and a custom nrpe check like:

#!/bin/sh
STAMPFILE=/usr/local/nagios/createStamp
s2000=$(date +%s --date="Jan 1 00:00:00 UTC 2000")
if [ ! -f "$STAMPFILE" ]; then
    echo "CRITICAL - no stampfile available here"
    exit 2
fi
now=$(date +%s)
stampTime=$(cat "$STAMPFILE")
diff=$((now - s2000 - stampTime))
if [ "$diff" -gt 60000 ]; then
    echo "CRITICAL - stamp too old: $now / $((now - s2000)) $stampTime"
    exit 2
else
    echo "OK - stamp ok $now / $((now - s2000)) $stampTime"
fi
exit 0

In this case I wait 60000 s (about 16.7 hours) before nagios complains. This is due to some external machines running nagios only every 8 hours, so almost 17 hours can pass before everything goes red.
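
For completeness, the check still has to be registered with nrpe on the monitored machine; a line like the following would do (the command name and script path are my assumptions):

```shell
# in /etc/nagios/nrpe.cfg -- hypothetical names
command[check_puppet_stamp]=/usr/local/nagios/libexec/check_puppet_stamp.sh
```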

HowTo build a kernel

I write this article mainly so that I can remember all the steps in a few months.

– Get the kernel: git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
This is the main kernel repository. Other repositories can be found at http://git.kernel.org/
For example the development of /dev/random takes place in git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git

– As I don’t want to manually handle entries in grub, I want to use make-kpkg to build Debian packages.

– The package needs to be built as root; '--rootcmd fakeroot' does not always work. Maybe there will be some time to look into this issue later.

– Working with the original sources instead of the Debian source package results in a plus sign appended to the kernel version. This is done by one of the kernel scripts (scripts/setlocalversion).
For more information see the comment at the end of that script. I avoid this '+' by doing something like: export LOCALVERSION="-ta-1"

– As /tmp nowadays is a bit small, you need to do something like 'export TEMPDIR=/home/tmp' or whatever suits your system

– Target 'buildpackage' calls 'clean' + 'binary'
'binary' -> 'binary-indep' + 'binary-arch'
'binary-indep' -> 'kernel_source', 'kernel_manual', 'kernel_doc'
'binary-arch' -> 'kernel_headers', 'kernel_image'

So for a normal build, call 'make-kpkg --initrd binary-arch' or at least 'make-kpkg --initrd kernel_image'

– In case of several cores call 'make-kpkg -j 4 --initrd binary-arch'
(the blank between 'j' and '4' is important)
The best results are obtained when the given number equals the number of cores
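
The number of cores does not have to be hard-coded; a small sketch using nproc from coreutils (the build command itself is commented out here, since it needs the kernel tree and root rights):

```shell
#!/bin/sh
# pick the job count from the number of available cores
JOBS=$(nproc)
echo "building with $JOBS parallel jobs"
# make-kpkg -j "$JOBS" --initrd binary-arch
```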

– The old laptop needs the following times to build the package:
make-kpkg --initrd binary-arch                91 minutes
make-kpkg -j 3 --initrd binary-arch           78 minutes
make-kpkg -j 4 --initrd binary-arch           59 minutes
make-kpkg -j 4 --initrd binary-arch &> log    57 minutes

– Check the version of the package in the automatically built debian/changelog
The debian directory can be rebuilt with 'make-kpkg debian'

– Move the kernel package to the Xen VM (grub-legacy should be installed, otherwise pygrub on a wheezy dom0 is not able to start the domU)
– dpkg -i
– edit /boot/grub/menu.lst (again a pygrub issue with some entries)

Random numbers from Linux kernel

The Linux kernel provides two devices that create some random data:

  • /dev/random
  • /dev/urandom

In this article I would like to shed some light on their usability.

Getting data

In a first step I totally neglect their mode of operation and just concentrate on their output. Doing something like

dd if=/dev/random of=random.1k bs=1 count=1000

for 1k, 10k and 100k bytes (adjusting count accordingly) with kernels 2.6.25, 2.6.32 and 3.2.0 on /dev/random and /dev/urandom should result in 18 files with data.
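
The files can be produced with a small loop; a sketch (the naming scheme is my own, and only /dev/urandom is read here, because /dev/random may block for hours, as the timings below show):

```shell
#!/bin/sh
# sample the kernel's random device into files named after the running kernel
KERNEL=$(uname -r)
for count in 1000 10000; do
    dd if=/dev/urandom of="/tmp/kernel-$KERNEL-urandom.$count" bs=1 count="$count" 2>/dev/null
done
ls -l /tmp/kernel-"$KERNEL"-urandom.*
```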

Unfortunately creating these files takes some time:


------------------------------------------------------------------
Kernel 2.6.25
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00850279 seconds, 118 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.0789053 seconds, 127 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.804752 seconds, 124 kB/s

/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 188.884 seconds, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 6109.87 seconds, 0.0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 80509.7 seconds, 0.0 kB/s

------------------------------------------------------------------
Kernel 2.6.32
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.0110714 s, 90.3 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.111313 s, 89.8 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 1.10515 s, 90.5 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 11.1315 s, 89.8 kB/s

/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 62.7861 s, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 41183.7 s, 0.0 kB/s

------------------------------------------------------------------
Kernel 3.2.0
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00949647 s, 105 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.127453 s, 78.5 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.988632 s, 101 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 10.3351 s, 96.8 kB/s

/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 188.692 s, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 1238.26 s, 0.0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 49876.2 s, 0.0 kB/s

So as a first result: not every machine is able to create enough random numbers from /dev/random within a reasonable timeframe, while /dev/urandom is able to deliver much more data.

Quality of data

In order to analyse the quality of all these data, I am using the software from the Debian package ent. The output of this program includes:

  • entropy (8 bits per byte is random)
  • compression rate (0% is random)
  • arithmetic mean value of data bytes (127.5 is random)
  • error of Monte Carlo value for Pi (0% is random)
  • serial correlation coefficient (0.0 is random)
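
Of these metrics, the entropy is easy to recompute by hand; a minimal sketch with od and awk (no claim to match ent's exact output format, only the bits-per-byte value):

```shell
#!/bin/sh
# Shannon entropy of a file in bits per byte: count byte values with od,
# then sum -p*log2(p) over the observed distribution in awk
entropy() {
    od -An -v -tu1 "$1" | tr -s ' ' '\n' | awk '
        NF { count[$1]++; n++ }
        END {
            for (b in count) { p = count[b] / n; H -= p * log(p) / log(2) }
            printf "%.6f\n", H
        }'
}

# quick sanity check: two equally frequent byte values give exactly 1 bit/byte
printf 'aabb' > /tmp/ent-demo
entropy /tmp/ent-demo     # prints 1.000000
```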

I will compare the output of the device files with data extracted from a real random number generator: an Entropy Key from Simtec Electronics. This device is very well supported in Linux and has its own package in Debian.

filename                     bytes of random data  entropy   compression rate [%]  mean value  Pi error [%]  serial correlation
kernel-2.6.25-random.1k                      1000  7.805056  2                     122.2400    1.82          -0.009945
kernel-2.6.25-random.10k                    10000  7.982797  0                     128.3138    0.19           0.019648
kernel-2.6.25-random.100k                  100000  7.998234  0                     127.4827    0.20          -0.003984
kernel-2.6.25-urandom.1k                     1000  7.809595  2                     127.0670    5.85          -0.014163
kernel-2.6.25-urandom.10k                   10000  7.983436  0                     126.2238    2.18           0.002686
kernel-2.6.25-urandom.100k                 100000  7.998327  0                     127.5812    0.42           0.000916
kernel-2.6.25-urandom.1m                  1000000  7.999830  0                     127.4333    0.02          -0.000883
kernel-2.6.32-random.1k                      1000  7.793478  2                     128.0320    2.01          -0.012834
kernel-2.6.32-random.10k                    10000  7.981851  0                     126.4048    0.04          -0.010927
kernel-2.6.32-urandom.1k                     1000  7.816192  2                     125.9590    4.31           0.018713
kernel-2.6.32-urandom.10k                   10000  7.981499  0                     127.1751    1.03          -0.004903
kernel-2.6.32-urandom.100k                 100000  7.998210  0                     127.8787    1.20           0.002980
kernel-2.6.32-urandom.1m                  1000000  7.999809  0                     127.4078    0.04           0.001283
kernel-3.2.0-random.1k                       1000  7.821790  2                     129.0400    3.36           0.039336
kernel-3.2.0-random.10k                     10000  7.983251  0                     127.3707    1.72          -0.008163
kernel-3.2.0-random.100k                   100000  7.998081  0                     127.1491    0.23          -0.005288
kernel-3.2.0-urandom.1k                      1000  7.786879  2                     127.8330    5.85          -0.067891
kernel-3.2.0-urandom.10k                    10000  7.981188  0                     127.8730    3.02          -0.006596
kernel-3.2.0-urandom.100k                  100000  7.998690  0                     127.6806    0.09          -0.001264
kernel-3.2.0-urandom.1m                   1000000  7.999840  0                     127.4574    0.17          -0.000390
ekg.1k                                       1000  7.829743  2                     127.0100    4.89          -0.021692
ekg.10k                                     10000  7.982590  0                     128.5926    0.96          -0.018205
ekg.100k                                   100000  7.998119  0                     127.3629    0.52           0.000735
ekg.1m                                    1000000  7.999819  0                     127.4686    0.22          -0.001522

So as a first conclusion, there is not much difference between /dev/urandom, /dev/random and an Entropy Key. Up to now I would not mind using /dev/urandom, for example to generate short-lived session keys for an https connection. But I am sure there are more sophisticated tests to assess the quality of randomness, so expect more to come in this blog. Of course any hints and tips are always welcome.