This is just another reminder for my future self:
Do not forget to add /usr/sbin/rsmtp to the commands line in /etc/uucp/sys in case you activate rsmtp in the exim4 config.
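For reference, the relevant fragment of /etc/uucp/sys might look like this (the system name ‘remote’ is just a placeholder):

```
# /etc/uucp/sys — hypothetical entry; the system name is a placeholder.
# The commands line must list /usr/sbin/rsmtp, or uux will refuse to run it.
system remote
commands rmail rnews /usr/sbin/rsmtp
```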
As time goes by …
I write this article mainly to be able to remember all steps in a few months.
– Get kernel: git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
This is the main kernel repository. Other repositories can be found at http://git.kernel.org/
For example the development of /dev/random takes place in git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git
– As I don’t want to manually handle entries in grub, I want to use make-kpkg to build Debian packages.
– The package needs to be built as root; ‘--rootcmd fakeroot’ does not always work. Maybe I will find some time to look into this issue.
– Working with the original sources instead of the Debian source package results in a plus sign added to the kernel version. This is done by one of the kernel scripts (scripts/setlocalversion); for more info see the comment at the end of that script. I avoid this ‘+’ by doing something like ‘export LOCALVERSION="-ta-1"’.
– As /tmp nowadays is a bit small, you need to do something like ‘export TEMPDIR=/home/tmp’ or whatever suits your system.
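Taken together, the environment setup before the build might look like this (the version suffix and the tmp path are examples for my system):

```shell
# environment for a make-kpkg build; the values are examples for this machine
export LOCALVERSION="-ta-1"   # avoids the '+' appended by scripts/setlocalversion
export TEMPDIR=/home/tmp      # /tmp is often too small for a kernel build
# then, inside the kernel source tree:
# make-kpkg -j 4 --initrd binary-arch
```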
– Target ‘buildpackage’ calls ‘clean’ + ‘binary’
‘binary’ -> ‘binary-indep’ + ‘binary-arch’
‘binary-indep’ -> ‘kernel_source’, ‘kernel_manual’, ‘kernel_doc’
‘binary-arch’ -> ‘kernel_headers’, ‘kernel_image’
So for a normal build, call ‘make-kpkg --initrd binary-arch’ or at least ‘make-kpkg --initrd kernel_image’
– In case of several cores, call ‘make-kpkg -j 4 --initrd binary-arch’
(the blank between ‘j’ and ‘4’ is important)
The best results are obtained if the given number equals the number of cores
– The old laptop needs the following times to build the package:
make-kpkg --initrd binary-arch                 91 minutes
make-kpkg -j 3 --initrd binary-arch            78 minutes
make-kpkg -j 4 --initrd binary-arch            59 minutes
make-kpkg -j 4 --initrd binary-arch &> log     57 minutes
– Check the version of the package in the automatically built debian/changelog
The debian directory can be rebuilt with ‘make-kpkg debian’
– Move kernel package to Xen VM (grub-legacy should be installed, otherwise pygrub on wheezy dom0 is not able to start domU)
– dpkg -i
– edit /boot/grub/menu.lst (again a pygrub issue with some entries)
I just registered at ORCID (Open Researcher and Contributor ID) and got the ID 0000-0002-9570-7046. Importing my few publications was rather easy …
The Linux kernel provides two devices that create some random data:
In this article I would like to shed some light on their usability.
As a first step I completely neglect their mode of operation and just concentrate on their output. Doing something like
dd if=/dev/random of=random.1k bs=1 count=1000
for 1k, 10k and 100k with kernel 2.6.25, 2.6.32 and 3.2.0 on /dev/random and /dev/urandom should result in 18 files with data.
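The file creation can be scripted; the sketch below only reads /dev/urandom, because /dev/random can block for a very long time (the file names are simplified):

```shell
# create test files of 1k, 10k and 100k bytes from /dev/urandom;
# /dev/random works the same way but may block until enough entropy arrives
for count in 1000 10000 100000; do
    dd if=/dev/urandom of=urandom.$count bs=1 count=$count 2>/dev/null
done
ls -l urandom.*
```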
Unfortunately creating these files takes some time:
------------------------------------------------------------------
Kernel 2.6.25
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00850279 seconds, 118 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.0789053 seconds, 127 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.804752 seconds, 124 kB/s
/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1,0 kB) copied, 188,884 seconds, 0,0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 6109,87 seconds, 0,0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 80509,7 seconds, 0,0 kB/s
------------------------------------------------------------------
Kernel 2.6.32
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.0110714 s, 90.3 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.111313 s, 89.8 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 1.10515 s, 90.5 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 11.1315 s, 89.8 kB/s
/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 62.7861 s, 0.0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 41183.7 s, 0.0 kB/s
------------------------------------------------------------------
Kernel 3.2.0
/dev/urandom
1000+0 records in
1000+0 records out
1000 bytes (1.0 kB) copied, 0.00949647 s, 105 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 0.127453 s, 78.5 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.988632 s, 101 kB/s
1000000+0 records in
1000000+0 records out
1000000 bytes (1.0 MB) copied, 10.3351 s, 96.8 kB/s
/dev/random
1000+0 records in
1000+0 records out
1000 bytes (1,0 kB) copied, 188,692 s, 0,0 kB/s
10000+0 records in
10000+0 records out
10000 bytes (10 kB) copied, 1238,26 s, 0,0 kB/s
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 49876,2 s, 0,0 kB/s
So as a first result, not every machine is able to create enough random numbers from /dev/random within a reasonable timeframe. /dev/urandom is able to deliver much more data.
In order to analyse the quality of all this data, I am using the tool from the Debian package ent. I will compare the output of the device files with data extracted from a real random number generator: an Entropy Key from Simtec Electronics. This device is very well supported in Linux and has its own package in Debian. The results are:
filename | bytes of random data | entropy | compression rate [%] | mean value | Pi error [%] | serial correlation |
kernel-2.6.25-random.1k | 1000 | 7.805056 | 2 | 122.2400 | 1.82 | -0.009945 |
kernel-2.6.25-random.10k | 10000 | 7.982797 | 0 | 128.3138 | 0.19 | 0.019648 |
kernel-2.6.25-random.100k | 100000 | 7.998234 | 0 | 127.4827 | 0.20 | -0.003984 |
kernel-2.6.25-urandom.1k | 1000 | 7.809595 | 2 | 127.0670 | 5.85 | -0.014163 |
kernel-2.6.25-urandom.10k | 10000 | 7.983436 | 0 | 126.2238 | 2.18 | 0.002686 |
kernel-2.6.25-urandom.100k | 100000 | 7.998327 | 0 | 127.5812 | 0.42 | 0.000916 |
kernel-2.6.25-urandom.1m | 1000000 | 7.999830 | 0 | 127.4333 | 0.02 | -0.000883 |
kernel-2.6.32-random.1k | 1000 | 7.793478 | 2 | 128.0320 | 2.01 | -0.012834 |
kernel-2.6.32-random.10k | 10000 | 7.981851 | 0 | 126.4048 | 0.04 | -0.010927 |
kernel-2.6.32-urandom.1k | 1000 | 7.816192 | 2 | 125.9590 | 4.31 | 0.018713 |
kernel-2.6.32-urandom.10k | 10000 | 7.981499 | 0 | 127.1751 | 1.03 | -0.004903 |
kernel-2.6.32-urandom.100k | 100000 | 7.998210 | 0 | 127.8787 | 1.20 | 0.002980 |
kernel-2.6.32-urandom.1m | 1000000 | 7.999809 | 0 | 127.4078 | 0.04 | 0.001283 |
kernel-3.2.0-random.1k | 1000 | 7.821790 | 2 | 129.0400 | 3.36 | 0.039336 |
kernel-3.2.0-random.10k | 10000 | 7.983251 | 0 | 127.3707 | 1.72 | -0.008163 |
kernel-3.2.0-random.100k | 100000 | 7.998081 | 0 | 127.1491 | 0.23 | -0.005288 |
kernel-3.2.0-urandom.1k | 1000 | 7.786879 | 2 | 127.8330 | 5.85 | -0.067891 |
kernel-3.2.0-urandom.10k | 10000 | 7.981188 | 0 | 127.8730 | 3.02 | -0.006596 |
kernel-3.2.0-urandom.100k | 100000 | 7.998690 | 0 | 127.6806 | 0.09 | -0.001264 |
kernel-3.2.0-urandom.1m | 1000000 | 7.999840 | 0 | 127.4574 | 0.17 | -0.000390 |
ekg.1k | 1000 | 7.829743 | 2 | 127.0100 | 4.89 | -0.021692 |
ekg.10k | 10000 | 7.982590 | 0 | 128.5926 | 0.96 | -0.018205 |
ekg.100k | 100000 | 7.998119 | 0 | 127.3629 | 0.52 | 0.000735 |
ekg.1m | 1000000 | 7.999819 | 0 | 127.4686 | 0.22 | -0.001522 |
So as a first conclusion, there is not much difference between /dev/urandom, /dev/random and an EKG. Up to now I would not mind using /dev/urandom, for example to generate short-lived session keys in an https connection. But I am sure that there are more sophisticated tests to assess the quality of randomness, so expect more to come in this blog. Of course any hints and tips are always welcome.
Of course there are millions of posts with similar content. But instead of storing a bookmark in one browser, I prefer to collect such knowledge at a central place.
In case you are working with Debian Wheezy and exim4 and want to create a mailbox that gets all emails to unknown addresses, the following has to be done:
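A minimal sketch of such a catch-all, assuming Debian’s split exim4 configuration (the router file name and the local user ‘catchall’ are my assumptions, not necessarily the setup described here):

```
# e.g. /etc/exim4/conf.d/router/950_exim4-config_catchall
# placed after the other local routers, so it only matches local parts
# that no earlier router accepted
catchall:
  debug_print = "R: catchall for $local_part@$domain"
  driver = redirect
  domains = +local_domains
  data = catchall
```

After adding the router, regenerate the configuration with ‘update-exim4.conf’ and reload exim4.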
Disadvantage:
I am taking care of several dedicated servers hosted at different providers. As these servers run 24/7 and write to and read from disk a lot, from time to time a disk fails and has to be replaced. As these servers contain RAIDs, this is no problem. Purely by chance, three disks at three different providers failed within a short time, and this is the story of their replacement:
(both are really almost identical)
Maybe there are good reasons for such a procedure. From the customer’s point of view this is a total disaster. I think you can guess which provider will not be renting me the next servers.
I would like to announce the Debian Med advent calendar 2012. Just like last year, the Debian Med team starts a bug squashing event from December 1st to 24th. Every day at least one bug from the Debian BTS should be closed, especially RC bugs for the oncoming Debian release (Wheezy) or bugs in one of the packages maintained by Debian Med. Everyone is called upon to fix a bug or send a patch. Don’t hesitate, start to squash :-).
Since I first looked at the list of orphaned Debian packages (available at http://www.debian.org/devel/wnpp/orphaned) some time ago, the package a56 has been the lonely leader of the list.
This package contains a freeware assembler for the Motorola 56000 architecture. These chips were very popular in the 1980s (used in NeXT, Atari Falcon and SGI Indigo workstations).
Updated versions are still used in today’s devices like some mobile phones (-> http://www.freescale.com/webapp/sps/site/homepage.jsp?code=563XXGPDSP)
So, being a bit nostalgic, I adopted this package and brought it into shape. There was even a small bug that I was able to close.
Recently I got a bug report for the package ent. The internal counter of processed bytes just has type long. In case you feed enough bytes to ent, there will be an overflow after about half an hour (of course that depends on your type of CPU; the bug was reported on architecture i386, where long is 32 bits).
As modern C (C99) introduced the type long long, I changed the type of some variables from plain long to unsigned long long. The overflow has disappeared for now, but it will reappear just some trillion bytes later.
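The half-hour figure is easy to check with some shell arithmetic, assuming roughly 1 MB/s of input:

```shell
# a signed 32-bit long overflows at 2^31 - 1 = 2147483647 bytes;
# at about 1 MB/s that limit is reached after roughly half an hour
echo $(( 2147483647 / 1000000 / 60 )) minutes   # prints: 35 minutes
```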
So, are there any recommendations on how to handle such a situation better?
Just recently I heard that chktex has a new maintainer. Apparently development goes on, as version 1.7.1 is already available. Even the old 1.6 branch has received some bugfixes.
So I uploaded 1.6.6 to unstable and 1.7.1 to experimental. Keep on testing :-).