Creation
See also CheatSheet/Partitioning.
# New Linux RAID encompassing the entire disk
sudo sgdisk --largest-new=1 --typecode=fd /dev/disk-dev-device
# RAID 5 array creation
sudo mdadm --create /dev/md5 --level=5 --assume-clean --raid-devices=3 /dev/sd{d,e,f}1
# Create a high-performance RAID10 array (2x read speed compared to RAID1)
sudo mdadm --create /dev/md6 --level=10 -p f2 --assume-clean --raid-devices=2 /dev/sd{i,k}1
# Save configuration (optional)
/usr/share/mdadm/mkconf > /new/etc/mdadm/mdadm.conf
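The saved configuration is a plain text file. Roughly, mkconf emits something like the following (the UUID, hostname, and mail address here are placeholders, not real values):

# mdadm.conf excerpt (illustrative; name, UUID and MAILADDR are placeholders)
DEVICE partitions
MAILADDR root
ARRAY /dev/md5 metadata=1.2 name=myhost:5 UUID=00000000:00000000:00000000:00000000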
Performance
Stripe cache size
Increase the 1 MiB default to 16 MiB (the value is a count of 4 KiB pages):
echo 4096 | sudo tee /sys/block/md0/md/stripe_cache_size
echo 4096 | sudo tee /sys/block/md1/md/stripe_cache_size
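The relation between the page count and the cache size is simple arithmetic (the cache is allocated per member device, so total memory use scales with the number of disks):

```shell
# stripe_cache_size counts 4 KiB pages: 4096 pages x 4 KiB = 16 MiB
pages=4096
echo "$((pages * 4 / 1024)) MiB"
```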
Write-intent bitmaps
Write-intent bitmaps speed up RAID resyncs significantly, at the expense of a little write performance.
# Create write-intent bitmap
mdadm --grow --bitmap=internal /dev/mdX
# Remove write-intent bitmap
mdadm --grow --bitmap=none /dev/mdX
Diagnosis
Examine array component
Displays superblock information, including last event, RAID UUID, other components, etc.
mdadm --examine /dev/sda1
Recovery
Inactive array
If an array comes up as inactive, e.g.:
md8 : inactive sdp1[0](S) sds1[4](S) sdx1[2](S) sdj1[1](S) 7814054094 blocks super 1.2
Stop the array before working on it.
mdadm --stop /dev/md8
First, examine the array components and figure out which component is out of sync (it's best to do this manually). Then reassemble the array from the remaining components. E.g., if /dev/sds1 was the bad device:
mdadm --assemble /dev/md8 /dev/sd{p,x,j}1 --force
to restart the array. If the supposedly bad device turns out to be fine, go ahead and re-add it:
mdadm --manage /dev/md8 --add /dev/sds1
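The out-of-sync component is usually the one whose event counter lags the others in the `mdadm --examine` output. A minimal sketch of pulling the counter out with awk, using canned excerpts since the real command needs root and the actual devices (the `Events` values here are made up):

```shell
# Canned `mdadm --examine` excerpts; compare the counters by eye
sdp1="         Events : 4211"
sds1="         Events : 3980"   # lags behind: the out-of-sync component
sdx1="         Events : 4211"

for excerpt in "$sdp1" "$sds1" "$sdx1"; do
    echo "$excerpt" | awk '{print $3}'
done
```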
btrfs
# btrfs RAID0
mkfs.btrfs -d raid0 /dev/sd{b,c}1