Adding SSH support to PHP on Ubuntu

I have previously written a post on how to add SSH support to PHP, but that post is old, and it has since become even easier to get it up and running. That, in turn, makes it much easier to auto-upgrade WordPress via SSH. (Automatic, yay!)

As root, do the following:

1: apt-get install libssh2-1-dev libssh2-php

2: Check that it is installed: php -m | grep ssh2

3: Restart apache: service apache2 restart

And you should now have SSH support in PHP.
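
If you want a second check from inside PHP itself, this one-liner should print bool(true) once the extension is available (just a quick sanity check):

php -r 'var_dump(extension_loaded("ssh2"));'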

Linux: Finding motherboard model

I just had to locate the motherboard model on one of my servers; dmidecode to the rescue! 🙂

To list all info it can find, simply run dmidecode. I executed the following command to find my motherboard model:

thu@dom0:~$ sudo dmidecode|grep "Product Name: "
Product Name: P5Q-E   
Product Name: P5Q-E  
thu@dom0:~$
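
dmidecode can also filter on a specific DMI type, so if your version supports the type keywords, a slightly more targeted query would be:

sudo dmidecode -t baseboard | grep "Product Name"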

Tuning Ubuntu mdadm RAID5/6

If you are using mdadm RAID 5 or 6 with Ubuntu, you might notice that performance is not always great. The reason is that Ubuntu's default tuning values are rather modest. Luckily, they can easily be adjusted. In this article I will increase some settings until the read and write performance of my RAID 6 has improved considerably.

My setup:
CPU: Intel(R) Core(TM)2 Quad CPU Q9300
RAM: 16G
Drives: 11 drives in one RAID 6, split across two cheap PCI-E x4 controllers and the motherboard's internal controller.

I will test the system after each tuning step by using dd for read and write testing. Since I have a nice amount of RAM available, I will use a test file of 36G (bs=16k). Between each test (both read and write), I clear the OS disk cache with the command:

sync;echo 3 > /proc/sys/vm/drop_caches
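
For reference, the tests themselves look roughly like this (the mount point /storage is just an example, and count is chosen so that bs=16k gives a ~36G file):

# Write test
dd if=/dev/zero of=/storage/ddtest bs=16k count=2359296
# Clear the cache, then read the file back
sync;echo 3 > /proc/sys/vm/drop_caches
dd if=/storage/ddtest of=/dev/null bs=16k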

Tuning stripe_cache_size

stripe_cache_size controls how much RAM mdadm uses when writing data. Ubuntu's default value is 256; you can check your current value with:

cat /sys/block/md0/md/stripe_cache_size

And change it with:

echo *number* > /sys/block/md0/md/stripe_cache_size
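
For example, setting it to 8192 (the value I ended up with below) is done like this, as root:

echo 8192 > /sys/block/md0/md/stripe_cache_size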

Test results with stripe_cache_size=256
– Write performance: 174 MB/s

Not too good, so I increased it in steps; each level and its result is listed below:

Test results with stripe_cache_size=512
– Write performance: 212 MB/s

Test results with stripe_cache_size=1024
– Write performance: 237 MB/s

Test results with stripe_cache_size=2048
– Write performance: 254 MB/s

Test results with stripe_cache_size=4096
– Write performance: 295 MB/s

Test results with stripe_cache_size=8192
– Write performance: 362 MB/s

Test results with stripe_cache_size=16384
– Write performance: 293 MB/s

Test results with stripe_cache_size=32768
– Write performance: 326 MB/s

So, going from 256 to 32K ~doubled my write performance, not bad! 🙂

Tuning Read Ahead

Time to adjust the read-ahead setting, which should impact read performance. The default read-ahead value is 1536, and you can change it with the command:

blockdev --setra *number* /dev/md0
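
You can check the current value (reported in 512-byte sectors) with --getra before changing it, for example:

blockdev --getra /dev/md0
blockdev --setra 4096 /dev/md0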

Test results with Read Ahead @ 1536
– Read performance: 717 MB/s

Test results with Read Ahead @ 4096
– Read performance: 746 MB/s

Test results with Read Ahead @ 32768
– Read performance: 731 MB/s

Test results with Read Ahead @ 262144
– Read performance: 697 MB/s

Test results with Read Ahead @ 524288
– Read performance: 630 MB/s

So, opposite to the write performance tuning, most of these settings actually made things worse. 4096 turned out to be the best value for my system.

In conclusion

This is just an example of how different settings can have a rather large impact on a system, both for the better and for the worse. If you are going to tune your system, you have to test different settings yourself and see what works best for your setup. Higher values do not automatically mean better results. I ended up with "stripe_cache_size=8192" and "Read Ahead @ 4096" for my system.

If you want to make sure that your changes survive a reboot, remember to add these commands (with your values) to /etc/rc.local.
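
With my final values, the lines added to /etc/rc.local (before the final exit 0) would look something like this; adjust the device name and numbers to your own setup:

echo 8192 > /sys/block/md0/md/stripe_cache_size
blockdev --setra 4096 /dev/md0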

Ubuntu: apt-get update gives 404 Not Found error

If you receive a "404 Not Found" error during an apt-get update / apt-get upgrade, the problem can be one of two things:

1)  Your Ubuntu installation is no longer supported.
You can check this by comparing the output of the command:

lsb_release -a

against the list of Ubuntu releases here:
https://wiki.ubuntu.com/Releases (note the "End of Life" date)

If your release has reached end of life, you can upgrade to a new release by following the guide here:
http://www.ubuntu.com/desktop/get-ubuntu/upgrade

2) Temporary problems
The mirror(s) you are using may be having temporary problems, in which case you should simply try again later.

pwrstat: Daemon service is not found.

When trying to use the pwrstat program you may get the error message "Daemon service is not found". Here is a simple checklist to follow in order to fix it:

1) Make sure that pwrstatd is running
2) Open /etc/pwrstatd.conf, and make sure that “prohibit-client-access” is set to no.
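
A quick way to run through both checks from a shell:

# Is the daemon running?
ps aux | grep pwrstatd
# Is client access allowed? (should say "no")
grep prohibit-client-access /etc/pwrstatd.conf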

Point 2 fooled me; I cannot remember changing it, yet it had been changed on my server. After I corrected the setting and restarted the daemon, pwrstat finally gave me sane data again:

root@bais:/var/log# pwrstat -status

The UPS information shows as following:

Properties:
Model Name.................... UPS VALUE
Rating Voltage................ 230 V
Rating Power.................. 480 Watt

Current UPS status:
State......................... Normal
Power Supply by............... Utility Power
Utility Voltage............... 230 V
Output Voltage................ 230 V
Battery Capacity.............. 100 %
Load.......................... 41 %
Remaining Runtime............. 10 min.
Line Interaction.............. None

sh: phpize: not found

When using pecl, or anything else that requires phpize, you may get a warning saying that phpize is not found, even though you have PHP installed on your server. In order to use phpize you need to install the PHP development package, normally named php-devel.

For Debian/Ubuntu, you can fix this by running:

sudo apt-get install php5-dev
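
You can then verify that the tools from the development package are available:

which phpize
php-config --version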

Howto: Increase disk space in an mdadm RAID

I currently have an Ubuntu Linux server running two mdadm RAIDs. One of the RAID sets is set up using 6 x 500 GB SATA drives. I have now purchased 6 x 1500 GB SATA drives to replace the old disks, but the challenge is to grow the RAID and filesystem without losing any data or having downtime. (Note: avoiding downtime is possible since my system supports hot-swapping of drives.)

In summary, this can be achieved by doing the following:
1) Replace all disks in the RAID (one by one)
2) Grow the RAID
3) Expand the filesystem

In this guide I will be working on /dev/md1.

Now, let's get to work!

Part one: Replace the disks

PS: If your system does not support hot swap, you have to power off/restart your machine for each disk you replace.
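
Optionally, before physically pulling a drive, you can mark it as failed and remove it from the array so mdadm knows it is gone (the device name /dev/sdj is just an example):

mdadm --manage /dev/md1 --fail /dev/sdj
mdadm --manage /dev/md1 --remove /dev/sdj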

Remove a disk from the RAID, then insert a new (bigger) drive.
Check dmesg (or similar) to get the device name of the new drive.

[14522870.380610] scsi 15:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5
[14522870.381589] sd 15:0:0:0: [sdm] 2930277168 512-byte hardware sectors: (1.50 TB/1.36 TiB)
[14522870.381622] sd 15:0:0:0: [sdm] Write Protect is off
[14522870.381626] sd 15:0:0:0: [sdm] Mode Sense: 00 3a 00 00
[14522870.381673] sd 15:0:0:0: [sdm] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[14522870.381845] sd 15:0:0:0: [sdm] 2930277168 512-byte hardware sectors: (1.50 TB/1.36 TiB)
[14522870.381870] sd 15:0:0:0: [sdm] Write Protect is off
[14522870.381875] sd 15:0:0:0: [sdm] Mode Sense: 00 3a 00 00
[14522870.381918] sd 15:0:0:0: [sdm] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[14522870.381926] sdm: unknown partition table
[14522870.397752] sd 15:0:0:0: [sdm] Attached SCSI disk
[14522870.397878] sd 15:0:0:0: Attached scsi generic sg9 type 0

Now, tell mdadm to add your new drive to the RAID you removed a drive from by doing:

mdadm --manage /dev/md1 --add /dev/sdm

Mdadm will then start syncing data to your new drive. To get an ETA of when it's done (and when you can replace the next drive), check the mdadm status:

root@bais:/home/samba/raid1/test# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdl[2] sdi[4] sdf[3] sde[1] sdd[0]
5860553728 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid5 sdm[6] sdg[1] sdk[5] sdj[7](F) sdh[2] sdc[3] sda[0]
2441932480 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUU_U]
[==>..................] recovery = 14.2% (69439012/488386496) finish=155.8min speed=44805K/sec

unused devices:

So after around 155 minutes the drive will be active (and the next one can be replaced).

Repeat this process for each disk in the RAID.

When you have replaced all the disks, run the command "mdadm --manage /dev/mdX --remove failed" to remove any devices listed as failed for the given RAID.

Part two: Increase the space available for the RAID

This is done by simply issuing the command:

mdadm --grow /dev/md1 --size=max

And the RAID size is increased. Note that this causes the RAID to start a resync (again):

root@bais:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdl[2] sdi[4] sdf[3] sde[1] sdd[0]
5860553728 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid5 sdc[0] sdj[3] sdh[5] sdg[2] sdn[1] sdm[4]
7325692480 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
[======>..............]  resync = 34.6% (508002752/1465138496) finish=247.0min speed=64561K/sec

PS: note that the resync speed increased by around 20 MB/s after all the drives were replaced 🙂

You will now also notice that the RAID reports its new size:

root@bais:~# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Sat Jun 13 01:55:27 2009
Raid Level : raid5
Array Size : 7325692480 (6986.32 GiB 7501.51 GB)
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Fri Mar  5 08:03:47 2010
State : active, resyncing
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Rebuild Status : 35% complete

UUID : ed415534:2925f54a:352a6ad4:582f9bd3 (local to host bais)
Events : 0.247

Number   Major   Minor   RaidDevice State
0       8       32        0      active sync   /dev/sdc
1       8      208        1      active sync   /dev/sdn
2       8       96        2      active sync   /dev/sdg
3       8      144        3      active sync   /dev/sdj
4       8      192        4      active sync   /dev/sdm
5       8      112        5      active sync   /dev/sdh

Part three: Resize the file system

Start off by unmounting the file system in question and performing a file system check to make sure everything is a-OK:

root@bais:/home/torhenning# umount /home/samba/raid1
root@bais:/home/torhenning# fsck /dev/md1
fsck 1.41.4 (27-Jan-2009)
e2fsck 1.41.4 (27-Jan-2009)
/dev/md1 has gone 188 days without being checked, check forced.
Pass 1: Checking inodes, blocks, and sizes

Be warned: the fsck CAN take quite a while to finish.

When it's complete, you are ready for the last step, which is to resize the filesystem:

root@bais:/home/torhenning# resize2fs /dev/md1 6986G
resize2fs 1.41.4 (27-Jan-2009)
Resizing the filesystem on /dev/md1 to 1831337984 (4k) blocks.

And voila! Mount up the filesystem again and you are finished! 🙂
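
To mount it again and confirm the new size, something like this does the trick (using the same mount point as earlier in this guide):

mount /dev/md1 /home/samba/raid1
df -h /home/samba/raid1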

Locate duplicate files under Ubuntu (And some ranting)

Edit: Post updated due to obvious user error 🙂

Looking for a program to find duplicate files? Then fdupes saves the day!

root@bais:/home/samba/raid0# apt-cache search fdupe
fdupes - identifies duplicate files within given directories
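
Installing it and scanning a directory tree recursively then looks like this (the path is just the one from my session above):

apt-get install fdupes
fdupes -r /home/samba/raid0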

By the way, how come the apt search gives such random results?

torhenning@bais:~$ sudo apt-cache search duplicate
….
mirror – keeps FTP archives up-to-date
vlc – multimedia player and streamer
(and a lot of other irrelevant crap)

Can't exactly say that having to grep through search results is a good "feature"...

Getting Groupwise to work on Ubuntu 64bit (9.04)

Just a quick post on what I did to get Groupwise up and running on 64-bit Ubuntu:

1) Installed a 32bit virtual Ubuntu machine, and converted the Groupwise installer to a .deb package with “alien -c filename”.

2) Moved the .deb package back to the host machine.

3) Installed 32bit java libraries “sudo apt-get install ia32-sun-java6-bin”

4) Forced installation of the Groupwise client "sudo dpkg -i --force-architecture novell-groupwise-gwclient_8.0.0-84911_i386.deb"

5) And it works 🙂