Yum on Red Hat 5 hangs when using proxy

I had a machine that started hanging when running yum update. The only way to actually stop yum was to kill the process from another shell, which was pretty strange.

I noticed that it was trying to look up an old proxy server that was no longer in use. The machine itself got online without issues for other services, but then I realised that the file “/etc/sysconfig/rhn/up2date” had an entry for yum proxy settings.

It is also worth checking all files in your “/etc/yum” folder to make sure that none of your repo files have dedicated proxy settings defined, since those will override anything from up2date.

After commenting out that proxy setting, yum started behaving again.
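
The exact key names can vary between releases, but the offending entries in “/etc/sysconfig/rhn/up2date” looked roughly like this once commented out (the proxy host is just a placeholder), and a quick grep shows whether any repo file re-introduces its own proxy:

#enableProxy=1
#httpProxy=old-proxy.example.com:3128

grep -r "^proxy" /etc/yum*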

Wrong charset for cron

I had an issue with a daily export job where the charset of the exported data was wrong, and this did not happen when I ran the job manually. I noticed that cron used the “POSIX” locale instead of the en_US.UTF-8 I was using. I fixed this by adding “LANG=en_US.UTF-8” to the file /etc/default/locale.

It appears that when the LANG variable is not set in that file, cron falls back to the POSIX locale.
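
In other words, the fix is a single line in /etc/default/locale, and a temporary crontab entry (the output path below is just an example) makes it easy to verify what locale cron actually sees:

LANG=en_US.UTF-8

* * * * * locale > /tmp/cron-locale.txt 2>&1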

Expanding a KVM disk image

Had to expand a KVM virtual machine today. Luckily, that's pretty straightforward. You simply create a new disk image with the extra size needed, merge it into the original disk and voila. Then you just need to partition the extra space and you are good to go.

How-to:

1: Halt your virtual machine.

You need to stop your virtual machine before going wild with the drive. Use virsh shutdown <vm name>, or virsh destroy <vm name> if it somehow won't stop.

2: Create a disk with the extra space needed:

qemu-img create -f raw 5gig.img 5G

3: Merge it into the disk you are working with

cat 5gig.img >> yourdisk.img
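
As a side note, newer versions of qemu-img can also grow a raw image in place with the resize subcommand, which skips the temporary image entirely. A quick alternative sketch with the same end result:

qemu-img resize yourdisk.img +5G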

4: Boot up and partition your drive.

Then start up your virtual machine again with virsh start <vm name>. If you use Windows Server, all you need to do is visit Disk Management, right click the drive with little free space and choose “Extend Volume”. The job takes seconds and does not require a reboot.

Tuning Ubuntu mdadm RAID5/6

If you are using mdadm RAID 5 or 6 with Ubuntu, you might notice that the performance is not always great. The reason is that Ubuntu's default tuning settings are set to rather modest values. Luckily, these can easily be tuned. In this article I will increase some settings until the read and write performance against my RAID 6 has improved a lot.

My setup:
CPU: Intel(R) Core(TM)2 Quad CPU Q9300
RAM: 16G
Drives: 11 drives in one RAID6, split over two cheap PCI-E x4 controllers and the motherboard's internal controller.

I will test my system between each tuning step by using dd for read and write testing. Since I have a nice amount of RAM available, I will use a test file of 36G (bs=16k). Between each test (both read and write), I clear the OS disk cache with the command:

sync;echo 3 > /proc/sys/vm/drop_caches
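
For reference, the dd invocations look roughly like this (the mount point /raid is just a placeholder for wherever your array is mounted):

# write test: 36G in 16k blocks (36 GiB / 16 KiB = 2359296 blocks)
dd if=/dev/zero of=/raid/testfile bs=16k count=2359296

# read test: read the file back and discard the data
dd if=/raid/testfile of=/dev/null bs=16k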

Tuning stripe_cache_size

stripe_cache_size controls the amount of RAM the md driver uses for caching writes. Ubuntu's default value is 256, and you can verify your value with:

cat /sys/block/md0/md/stripe_cache_size

And change it with:

echo *number* > /sys/block/md0/md/stripe_cache_size

Test results with stripe_cache_size=256
– Write performance: 174 MB/s

Not too good, so I increased it a few levels; each level with its result is listed below:

Test results with stripe_cache_size=512
– Write performance: 212 MB/s

Test results with stripe_cache_size=1024
– Write performance: 237 MB/s

Test results with stripe_cache_size=2048
– Write performance: 254 MB/s

Test results with stripe_cache_size=4096
– Write performance: 295 MB/s

Test results with stripe_cache_size=8192
– Write performance: 362 MB/s

Test results with stripe_cache_size=16384
– Write performance: 293 MB/s

Test results with stripe_cache_size=32768
– Write performance: 326 MB/s

So, going from 256 to 32K ~doubled my write performance, not bad! 🙂

Tuning Read Ahead

Time to change the read ahead a bit, which should impact read performance. The default read ahead value is “1536”, and you can change it with the command:

blockdev --setra *number* /dev/md0

Test results with Read Ahead @ 1536
– Read performance: 717 MB/s

Test results with Read Ahead @ 4096
– Read performance: 746 MB/s

Test results with Read Ahead @ 32768
– Read performance: 731 MB/s

Test results with Read Ahead @ 262144
– Read performance: 697 MB/s

Test results with Read Ahead @ 524288
– Read performance: 630 MB/s

So, opposite of the write performance tuning, most of these settings actually made things worse. 4096 turned out to be the best for my system.

In conclusion

This is just an example of how different settings can have a rather large impact on a system, both for the better and for the worse. If you are going to tune your system, you have to test different settings for yourself and see what works best for your setup. Higher values do not automatically mean better results. I ended up with “stripe_cache_size=8192” and “Read Ahead @ 4096” for my system.

If you want to make sure that your changes survive a reboot, remember to add these commands (with your values) to /etc/rc.local.
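
With my final values, the lines in /etc/rc.local would look something like this (assuming the array is /dev/md0 as above):

echo 8192 > /sys/block/md0/md/stripe_cache_size
blockdev --setra 4096 /dev/md0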

Change hostname on Linode VPS

Linode explains pretty well how to change the hostname of your VPS. But they do not mention that in the latest Ubuntu it is set via Linode's own DHCP server. So even if you set it via /etc/hostname and in hosts, it will still be overwritten by the hostname Linode hands to your server.

The solution is to kindly tell DHCPCD *not* to override the hostname you have set. Open /etc/default/dhcpcd and change the following:

SET_HOSTNAME='yes'

to

SET_HOSTNAME='no'

Reboot and voila! 🙂

Clear disk cache on Linux

I have been doing a bit of benchmarking over the past few days, and have needed to clear the disk cache from RAM without rebooting each time. The command I use for that is:

sync;echo 3 > /proc/sys/vm/drop_caches

Which tells the kernel to free pagecache, dentries and inodes.
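
The value you write decides what gets freed, so you can also be more selective:

echo 1 > /proc/sys/vm/drop_caches   # free pagecache only
echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes
echo 3 > /proc/sys/vm/drop_caches   # free pagecache, dentries and inodes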

jpackages error: Missing Dependency: /usr/bin/rebuild-security-providers

Jpackages on Red Hat has a nifty bug that causes dependency errors.

Luckily, somebody has created a fix as an rpm package 🙂

wget http://plone.lucidsolutions.co.nz/linux/centos/images/jpackage-utils-compat-el5-0.0.1-1.noarch.rpm
rpm -ivh jpackage-utils-compat-el5-0.0.1-1.noarch.rpm
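
To confirm that the missing dependency is actually satisfied afterwards, a quick sanity check is to see that the script the error complains about now exists:

ls -l /usr/bin/rebuild-security-providers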

And then jpackages works.

ImportError: No module named trac

When working with a new Trac installation you can bump into the error message “ImportError: No module named trac”. This is usually caused by the Trac installation not having unzipped all the needed files.

The following one liner should fix the issue:

cd /usr/lib/python2.4/site-packages;unzip Trac-0.12.2-py2.4.egg
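
Afterwards, importing the module from the same Python that runs Trac is a quick way to confirm the fix (printing the version is just a convenient check):

python -c "import trac; print trac.__version__"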

howto: Create a RAID6 with mdadm

Note: This guide discusses RAID6, but the same will work for RAID5; the only difference is that RAID6 uses two disks for parity data while RAID5 only uses one.

1) Make sure that all the hard drives you are going to use are connected and available.

I will use the drives /dev/sdb to /dev/sdi in this example.

root@ubuntu:/home/thu# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 2011-01-05 21:47 /dev/sda
brw-rw---- 1 root disk 8,   1 2011-01-05 21:47 /dev/sda1
brw-rw---- 1 root disk 8,   2 2011-01-05 21:47 /dev/sda2
brw-rw---- 1 root disk 8,   5 2011-01-05 21:47 /dev/sda5
brw-rw---- 1 root disk 8,  16 2011-01-05 21:47 /dev/sdb
brw-rw---- 1 root disk 8,  32 2011-01-05 21:47 /dev/sdc
brw-rw---- 1 root disk 8,  48 2011-01-05 21:47 /dev/sdd
brw-rw---- 1 root disk 8,  64 2011-01-05 21:47 /dev/sde
brw-rw---- 1 root disk 8,  80 2011-01-05 21:47 /dev/sdf
brw-rw---- 1 root disk 8,  96 2011-01-05 21:47 /dev/sdg
brw-rw---- 1 root disk 8, 112 2011-01-05 21:47 /dev/sdh
brw-rw---- 1 root disk 8, 128 2011-01-05 21:47 /dev/sdi

2) Ask mdadm to create the RAID

root@ubuntu:/home/thu# mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

mdadm: array /dev/md0 started.

Note the --level and --raid-devices options, followed by the list of all the drives to use in the RAID.

3) Create a file system for the RAID device

root@ubuntu:/home/thu# mkfs.ext3 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=16 blocks, Stripe width=96 blocks
1966080 inodes, 7864224 blocks
393211 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
240 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

And you will now be able to mount the device and start using it 🙂
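
For example, mounting it somewhere is just the usual routine (the mount point is of course up to you):

mkdir /mnt/raid
mount /dev/md0 /mnt/raid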

4) Check that the RAID is being built in the background.

Note that the construction will restart every time you restart your server unless it has completed, so please do not reboot unless necessary 🙂

You can check the status of the RAID in two ways:

root@ubuntu:/home/thu# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
31456896 blocks level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
[====>................]  resync = 20.0% (1050880/5242816) finish=4.5min speed=15392K/sec
root@ubuntu:/home/thu# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Wed Jan  5 22:03:27 2011
Raid Level : raid6
Array Size : 31456896 (30.00 GiB 32.21 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Jan  5 22:06:15 2011
State : active, resyncing
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
Rebuild Status : 54% complete
UUID : d09dd686:b5f57b0b:e368bf24:bd0fce41 (local to host ubuntu)
Events : 0.10
Number   Major   Minor   RaidDevice State
0       8       16        0      active sync   /dev/sdb
1       8       32        1      active sync   /dev/sdc
2       8       48        2      active sync   /dev/sdd
3       8       64        3      active sync   /dev/sde
4       8       80        4      active sync   /dev/sdf
5       8       96        5      active sync   /dev/sdg
6       8      112        6      active sync   /dev/sdh
7       8      128        7      active sync   /dev/sdi

SVN gives “attempt to write a readonly database” error

You can run into this error message when trying to commit something to an SVN repo. It is caused by wrong permissions on a file on the SVN server: the file “rep-cache.db” most likely has the wrong permissions, such as the group not having write access to it. A simple chmod g+w on the file is enough to stop the error message from appearing again.
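
In an FSFS repository the file normally lives in the repository's db directory, so the fix looks something like this (the repository path is just an example):

chmod g+w /var/svn/myrepo/db/rep-cache.db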