Howto: Increase disk space in an mdadm RAID

I currently have an Ubuntu Linux server running two mdadm RAIDs. One of the RAID sets consists of 6 x 500 GB SATA drives. I have now purchased 6 x 1500 GB SATA drives to replace the old disks, but the challenge is to grow the RAID and the filesystem without losing any data or having downtime. (Note: avoiding downtime is possible because my system supports hot swapping of drives.)

In summary, this can be achieved by doing the following:
1) Replace all disks in the RAID (one by one)
2) Grow the RAID
3) Expand the filesystem

In this guide I will be working on /dev/md1.
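
Before starting, it can be smart to note down the current state of the array so you know which physical drives belong to it:

# cat /proc/mdstat
# mdadm --detail /dev/md1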

Now, let's get to work!

Part one: Replace the disks

PS: If your system does not support hot swapping, you will have to power off/restart your machine for each disk you replace.

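Tip: instead of just yanking the old disk, you can first tell mdadm to mark it as failed and remove it from the array, so the array is in a known state before you pull it. A sketch, assuming the old drive is /dev/sdb (substitute your own device name):

# mdadm --manage /dev/md1 --fail /dev/sdb
# mdadm --manage /dev/md1 --remove /dev/sdb
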
Remove a disk from the RAID, then insert a new (bigger) drive.
Check dmesg (or similar) to get the device name of the drive you just inserted.

[14522870.380610] scsi 15:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5
[14522870.381589] sd 15:0:0:0: [sdm] 2930277168 512-byte hardware sectors: (1.50 TB/1.36 TiB)
[14522870.381622] sd 15:0:0:0: [sdm] Write Protect is off
[14522870.381626] sd 15:0:0:0: [sdm] Mode Sense: 00 3a 00 00
[14522870.381673] sd 15:0:0:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[14522870.381845] sd 15:0:0:0: [sdm] 2930277168 512-byte hardware sectors: (1.50 TB/1.36 TiB)
[14522870.381870] sd 15:0:0:0: [sdm] Write Protect is off
[14522870.381875] sd 15:0:0:0: [sdm] Mode Sense: 00 3a 00 00
[14522870.381918] sd 15:0:0:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[14522870.381926] sdm: unknown partition table
[14522870.397752] sd 15:0:0:0: [sdm] Attached SCSI disk
[14522870.397878] sd 15:0:0:0: Attached scsi generic sg9 type 0

Now, tell mdadm to add your new drive to the RAID you removed a drive from:

mdadm --manage /dev/md1 --add /dev/sdm

mdadm will then start syncing data onto your new drive. To get an ETA for when it's done (and when you can replace the next drive), check the mdadm status:

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdl[2] sdi[4] sdf[3] sde[1] sdd[0]
5860553728 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid5 sdm[6] sdg[1] sdk[5] sdj[7](F) sdh[2] sdc[3] sda[0]
2441932480 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUU_U]
[==>.................] recovery = 14.2% (69439012/488386496) finish=155.8min speed=44805K/sec

unused devices: <none>

So after around 155 minutes the drive is active, and the next one can be replaced.
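
If you do not feel like re-running the command by hand, something like watch will refresh the status for you every minute:

# watch -n 60 cat /proc/mdstat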

Repeat this process for each disk in the RAID.

When you have replaced all the disks, run the command "mdadm --manage /dev/mdX --remove failed" to remove any devices listed as failed from the given RAID.
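
Should your version of mdadm not accept the "failed" keyword, you can also remove a failed device explicitly. For example, assuming /dev/sdj is the drive marked (F) in mdstat:

# mdadm --manage /dev/md1 --remove /dev/sdj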

Part two: Increase the space available for the RAID

This is done by simply issuing the command:

mdadm --grow /dev/md1 --size=max

And the RAID size is increased. Note that this has caused the RAID to start a resync (again):

~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdl[2] sdi[4] sdf[3] sde[1] sdd[0]
5860553728 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid5 sdc[0] sdj[3] sdh[5] sdg[2] sdn[1] sdm[4]
7325692480 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
[======>.............]  resync = 34.6% (508002752/1465138496) finish=247.0min speed=64561K/sec

PS: Note that the resync speed has increased by around 20 MB/s now that all the drives have been replaced 🙂
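
The kernel also caps the resync speed, so if the array is otherwise idle you can try raising the limits to speed things up. The values below are just examples; tune them for your own hardware:

# echo 50000 > /proc/sys/dev/raid/speed_limit_min
# echo 200000 > /proc/sys/dev/raid/speed_limit_max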

You will now also notice that the RAID reports its new size:

~# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Sat Jun 13 01:55:27 2009
Raid Level : raid5
Array Size : 7325692480 (6986.32 GiB 7501.51 GB)
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Fri Mar  5 08:03:47 2010
State : active, resyncing
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Rebuild Status : 35% complete

UUID : ed415534:2925f54a:352a6ad4:582f9bd3 (local to host)
Events : 0.247

Number   Major   Minor   RaidDevice State
0       8       32        0      active sync   /dev/sdc
1       8      208        1      active sync   /dev/sdn
2       8       96        2      active sync   /dev/sdg
3       8      144        3      active sync   /dev/sdj
4       8      192        4      active sync   /dev/sdm
5       8      112        5      active sync   /dev/sdh

Part three: Resize the filesystem

Start off by unmounting the filesystem in question, then perform a filesystem check to make sure everything is a-ok:

# umount /home/samba/raid1
# fsck /dev/md1
fsck 1.41.4 (27-Jan-2009)
e2fsck 1.41.4 (27-Jan-2009)
/dev/md1 has gone 188 days without being checked, check forced.
Pass 1: Checking inodes, blocks, and sizes

Be warned: The fsck CAN take quite some time to finish.
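
If you want some feedback while you wait, and the filesystem is ext2/3/4, e2fsck can draw a progress bar if you ask for it:

# e2fsck -C 0 /dev/md1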

When it's complete, you are ready for the last step, which is to resize the filesystem:

# resize2fs /dev/md1 6986G
resize2fs 1.41.4 (27-Jan-2009)
Resizing the filesystem on /dev/md1 to 1831337984 (4k) blocks.
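
Side note: if you leave out the size argument, resize2fs will simply grow the filesystem to fill the whole device, which is usually what you want here anyway:

# resize2fs /dev/md1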

And voila! Mount up the filesystem again and you are finished! 🙂
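
In my case that is simply (adjust the mount point to match your own setup):

# mount /dev/md1 /home/samba/raid1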