Tuning Ubuntu mdadm RAID5/6

If you are using mdadm RAID 5 or 6 with Ubuntu, you might notice that performance is not always great. The reason is that Ubuntu's default tuning settings are set to rather modest values. Luckily, these can easily be tuned. In this article I will increase some settings step by step until the read and write performance against my RAID 6 has improved considerably.

My setup:
CPU: Intel(R) Core(TM)2 Quad CPU Q9300
RAM: 16G
Drives: 11 drives in one RAID6, split over two cheap PCI-E x4 controllers and the motherboard's internal controller.
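If you want to check the layout of your own array before tuning (member drives, chunk size, sync status), these commands will show it; /dev/md0 is the array device on my system, adjust as needed:

cat /proc/mdstat
mdadm --detail /dev/md0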

I will test the system between each tuning step by using dd for read and write testing. Since I have a nice amount of RAM available, I will use a test file of 36G (bs=16k). Between each test (both read and write), I clear the OS disk cache with the command:

sync;echo 3 > /proc/sys/vm/drop_caches
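The dd runs themselves are along these lines; /storage/testfile is just an example path (it has to be a file on the RAID itself), and 2359296 blocks of 16k is 36G:

dd if=/dev/zero of=/storage/testfile bs=16k count=2359296
dd if=/storage/testfile of=/dev/null bs=16k

Adding conv=fdatasync to the write test makes dd include the final flush to disk in its timing, which gives a more honest number.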

Tuning stripe_cache_size

stripe_cache_size controls how much RAM the md driver uses to cache stripes while writing data. Ubuntu's default value is 256; you can check your current value with:

cat /sys/block/md0/md/stripe_cache_size

And changing it with:

echo *number* > /sys/block/md0/md/stripe_cache_size
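Keep in mind that the stripe cache costs RAM. According to the kernel md documentation, the memory used is roughly stripe_cache_size * page size * number of devices in the array. For my 11-drive array with 4 KiB pages that works out to:

echo $(( 8192 * 4096 * 11 / 1024 / 1024 ))    # 352 MiB at stripe_cache_size=8192
echo $(( 32768 * 4096 * 11 / 1024 / 1024 ))   # 1408 MiB at stripe_cache_size=32768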

Test results with stripe_cache_size=256
– Write performance: 174 MB/s

Not too good, so I increased it in steps; each step and its result is listed below:

Test results with stripe_cache_size=512
– Write performance: 212 MB/s

Test results with stripe_cache_size=1024
– Write performance: 237 MB/s

Test results with stripe_cache_size=2048
– Write performance: 254 MB/s

Test results with stripe_cache_size=4096
– Write performance: 295 MB/s

Test results with stripe_cache_size=8192
– Write performance: 362 MB/s

Test results with stripe_cache_size=16384
– Write performance: 293 MB/s

Test results with stripe_cache_size=32768
– Write performance: 326 MB/s

So, going from 256 to 32K roughly doubled my write performance. Not bad! 🙂

Tuning Read Ahead

Time to tweak the read ahead a bit, which should impact read performance. The default read ahead value here is “1536”, and you can change it with the command:

blockdev --setra *number* /dev/md0
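You can check the current value with blockdev --getra. Note that the number is in 512-byte sectors, so the default of 1536 corresponds to 768 KiB of read ahead:

blockdev --getra /dev/md0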

Test results with Read Ahead @ 1536
– Read performance: 717 MB/s

Test results with Read Ahead @ 4096
– Read performance: 746 MB/s

Test results with Read Ahead @ 32768
– Read performance: 731 MB/s

Test results with Read Ahead @ 262144
– Read performance: 697 MB/s

Test results with Read Ahead @ 524288
– Read performance: 630 MB/s

So, opposite of the write performance tuning, most of these settings actually made performance worse. 4096 turned out to be the best for my system.

In conclusion

This is just an example of how different settings can have a rather large impact on a system, both for the better and for the worse. If you are going to tune your system, you have to test different settings yourself and see what works best for your setup. Higher values do not automatically mean better results. I ended up with “stripe_cache_size=8192” and “Read Ahead @ 4096” for my system.

If you want to make sure that your changes survive a reboot, remember to add these commands (with your values) to /etc/rc.local.
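As a sketch, with the values I ended up with it would look like this at the end of /etc/rc.local (before the final exit 0); adjust the device name and numbers for your own setup:

echo 8192 > /sys/block/md0/md/stripe_cache_size
blockdev --setra 4096 /dev/md0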

16 thoughts on “Tuning Ubuntu mdadm RAID5/6”

  1. Thanks for this nice and helpful evaluation.

    I would like to improve my bad performance with software RAID and LVM. Are these tweaks safe in production? The MythTV “LVM on RAID” page says it might crash your FS.

    Sebastian

  2. Hi Sebastian,

    They should be, but the only way to know for sure is to test how it works on your setup; if you are short on RAM it might be an issue. The safest approach is to increase it a bit at a time and see how the system reacts.

    If you have generally bad performance, you might have more success looking into the general setup of your server. If you, for example, have the whole RAID on plain PCI ports, you only get 133MB/s shared between all PCI slots, while PCI-X has a much higher bandwidth per slot.

    Another key factor is having enough RAM to keep as much as possible in the OS disk cache, at least if you do a lot of reading and not too much writing.

    Here is just an example of how effective the OS disk cache is.

    To make sure that the OS knows this file should be cached in RAM, simply cat it to /dev/null (i.e. access the file):

    “root@bais:~# cat testfile > /dev/null”

    If you then try to read it with dd, the read performance should be “nice” 🙂

    “root@bais:~# dd if=testfile of=/dev/null bs=8k
    361680+0 records in
    361680+0 records out
    2962882560 bytes (3.0 GB) copied, 0.92471 s, 3.2 GB/s”

    Since the 3GB test file is in the OS disk cache, the hard drives themselves are not touched at all, hence the rather extreme performance.

  3. Great post! Really helped me get some more performance out of my setup – went from about 100MB/s writes to 170MB/s! Read speeds went up about 20% too!

  4. Pingback: IO Test for EBS Volumes: RAID5's performance

  5. So why did you go with 32K and not 8K?

    From your numbers above, 8K seems to be your sweet spot, and you got diminishing returns after that.

  6. Good question!
    It has been a while since I did this, but I believe I went a notch down based on RAM usage etc. 🙂

  7. Pingback: Mdadm – tuning Linux RAID options.. | TooMeeK

  8. Pingback: timor's site » LVM na RAID5 i dysku z sektorami 4KB

  9. Nice post, just be sure that ‘dd’ isn’t maxing out your CPU, as it is single threaded. A tool like bonnie++ is probably a better benchmark.

    The --report flag on blockdev is handy too, showing human readable values for all devices at once.

    blockdev --report
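    A typical bonnie++ run against the array could look something like this; the mount point is just an example, -d is the test directory, -s the file size (roughly twice the RAM), -n 0 skips the small-file tests, and -u sets the user when running as root:

    bonnie++ -d /mnt/raid -s 32g -n 0 -u nobody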

  10. Amazing!
    I increased the write speed of my 5 x 2TB SATA2 RAID5 from 126 MB/s to 230 MB/s by changing stripe_cache_size from 256 to 4096. Read speed was already at 342 MB/s!
    (Atom D510 @ 1.66GHz – 2GB RAM)

  11. Pingback: Improve software RAID speeds on Linux | LucaTNT's

  12. Pingback: Jans Blog » Blog Archiv » Linux Raid5 lahm

  13. Pingback: Problémy s IO operacemi na RAID poli | blog

  14. Pingback: DIY Home NAS Upgrade – From Unraid to MDADM | senk9@wp