How to adjust Linux SW RAID Disk Rebuild Speed
Overview
Rebuilding a RAID array can be slow. Viewing the console and repeatedly running cat /proc/mdstat only to see the percentage creep up by a few points can be tedious (although you can use watch in this case). There is, of course, a reason for the ‘slowness’, particularly if you are using older hardware that relies on the stability of a lighter load. But if you are using large disks for high storage capacity, or have large backup drives with several terabytes of data configured with RAID, then you may wish to speed the process up. Here is the method we use at Fraction Servers to adjust Linux SW RAID rebuild speed.
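If you would rather not re-run the command by hand, watch can refresh it for you (the two-second interval here is just an example):

### Follow rebuild progress, refreshing every 2 seconds ###
watch -n 2 cat /proc/mdstat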
1) /proc/sys/dev/raid/{speed_limit_max,speed_limit_min} kernel variables
To see current limits, enter:
sysctl dev.raid.speed_limit_min
sysctl dev.raid.speed_limit_max
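On many systems the defaults are 1000 for speed_limit_min and 200000 for speed_limit_max (both in KB/s per device), but treat these as typical values and check your own output:

### Example output (typical defaults; yours may differ) ###
dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max = 200000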
To increase speed, enter:
### echo value > /proc/sys/dev/raid/speed_limit_min ###
echo 50000 > /proc/sys/dev/raid/speed_limit_min
OR:
### sysctl -w dev.raid.speed_limit_min=value ###
sysctl -w dev.raid.speed_limit_min=50000
The speed_limit_max variable is usually high enough that it does not limit performance, so it is normally the speed_limit_min value you will increase to speed up a RAID rebuild. It’s worth noting that while this can speed up RAID rebuilds, it may have an effect on the system as a whole: faster rebuilds take up more system resources such as disk bandwidth, CPU, and RAM. Larger arrays are usually able to support higher read/write minimum values, as the load can be shared between more disks.
If the speed_limit_max variable does need to be changed it can be updated in a similar way.
### echo value > /proc/sys/dev/raid/speed_limit_max ###
echo 50000 > /proc/sys/dev/raid/speed_limit_max
OR:
### sysctl -w dev.raid.speed_limit_max=value ###
sysctl -w dev.raid.speed_limit_max=50000
50000 is only an example value; adjust it to what is appropriate for the server you are applying this to.
These values reset to their defaults after a reboot.
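If you do want a higher minimum to survive reboots, it can be added to the sysctl configuration instead. A minimal sketch, assuming a system with /etc/sysctl.d/ (the filename and value are only examples):

### Persist the setting across reboots ###
echo 'dev.raid.speed_limit_min = 50000' > /etc/sysctl.d/90-raid-rebuild.conf
sysctl -p /etc/sysctl.d/90-raid-rebuild.conf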
2) Set read-ahead option
Set read-ahead (in 512-byte sectors) per RAID device. The syntax is:
### Set read-ahead to 32 MiB ###
blockdev --setra 65536 /dev/md0
blockdev --setra 65536 /dev/md1
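65536 sectors × 512 bytes works out to 32 MiB. You can confirm the new value took effect with the matching --getra option:

### Verify the current read-ahead (in 512-byte sectors) ###
blockdev --getra /dev/md0
blockdev --getra /dev/md1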
3) Disable NCQ on all disks
Determine which disks are part of the array and adjust the loop below accordingly.
### Sample for loop ###
for i in sda sdb sdc sdd sde   # make sure you have the right disks
do
  echo 1 > /sys/block/$i/device/queue_depth
done
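Setting queue_depth to 1 effectively disables NCQ for the duration of the rebuild. To restore it afterwards, write the original value back; 31 is a common default for SATA disks, but check what your disks reported before the change rather than assuming it:

### Restore NCQ after the rebuild (31 is only a typical SATA default) ###
for i in sda sdb sdc sdd sde   # same disks as above
do
  echo 31 > /sys/block/$i/device/queue_depth
done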
4) Bitmap Option
Bitmaps optimize rebuild time after a crash, or after removing and re-adding a device. Turn it on by typing the following command:
mdadm --grow --bitmap=internal /dev/md0
Once the array is rebuilt or fully synced, disable the bitmap:
mdadm --grow --bitmap=none /dev/md0
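To confirm whether a bitmap is currently active on the array, check the array detail or /proc/mdstat (the bitmap line only appears while one is configured):

### Check for an active bitmap ###
mdadm --detail /dev/md0 | grep -i bitmap
cat /proc/mdstat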
5) Set stripe_cache_size for RAID5 or RAID6
This is only available on RAID5 and RAID6 and can boost sync performance by 3-6 times. It records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256. Valid values are 17 to 32768. Increasing this number can increase performance in some situations, at some cost in system memory. Note that setting this value too high can result in an “out of memory” condition for the system. Use the following formula to estimate memory usage:
memory_consumed = system_page_size * nr_disks * stripe_cache_size
To set stripe_cache_size to 16384 pages for /dev/md0, type:
echo 16384 > /sys/block/md0/md/stripe_cache_size
To set stripe_cache_size to 32768 pages for /dev/md3, type:
echo 32768 > /sys/block/md3/md/stripe_cache_size
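As a rough sanity check of the formula above, a hypothetical 4-disk array with the usual 4 KiB page size and a stripe_cache_size of 16384 consumes about 256 MiB of memory; substitute your own disk count and value:

### Estimate memory use: page_size * nr_disks * stripe_cache_size ###
echo $(( $(getconf PAGE_SIZE) * 4 * 16384 ))   # 268435456 bytes = 256 MiB with 4 KiB pages and 4 disks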
Source: https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html