linux software raid 5 quick guide
Here we will create a raid 5 setup with three drives for redundant storage using mdadm (not raidtools). This example does not cover putting your O.S. root partition on the raid device; it only creates a volume to be mounted by a Linux system which boots off of another device.
Number of drives: 3
Number of spare drives: 0
All drives same size
All space used on all drives
devices to be used in this example: sdb sdc sdd
Versions
Versions in use for the kernel and mdadm on the linux device in this example...
kernel: 3.10.17
mdadm: v3.2.6 - 25th October 2012
kernel config
Check for support

If your system has RAID support, you should have a file called /proc/mdstat. If you do not have that file, your kernel probably lacks RAID support. For more info on configuring your linux kernel for software raid, see the HOWTO in the Links section below.
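A quick check, assuming your kernel exposes its configuration at /proc/config.gz (only present when the kernel was built with CONFIG_IKCONFIG_PROC); raid 5 support is CONFIG_MD_RAID456, either built in (y) or as a module (m):

# cat /proc/mdstat
# zgrep RAID456 /proc/config.gz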
install mdadm
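mdadm is packaged by virtually every distribution; the package is simply called mdadm. For example, on a Debian-based system (an assumption; substitute your distribution's package manager):

# apt-get install mdadm

Confirm what you have with:

# mdadm --version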
prepare disks
check existing partition tables
this command will dump the partition tables for all three drives
# fdisk -l /dev/sd[bcd]
Assuming you have 3 drives with no data to preserve (or they are new), delete any existing partitions in preparation for creating new partitions of type "Linux raid autodetect"
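One way to clear old partition-table signatures outright, assuming the drives truly hold nothing you want to keep (this is destructive, and not part of the original session below; you can instead delete partitions with fdisk's d command):

# wipefs -a /dev/sdb
# wipefs -a /dev/sdc
# wipefs -a /dev/sdd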
create three raid partitions
assuming they don't already exist...
# fdisk /dev/sdb

The device presents a logical sector size that is smaller than the physical sector size. Aligning to a physical sector (or optimal I/O) size boundary is recommended, or performance may be impacted.

Welcome to fdisk (util-linux 2.22.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-1953525167, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-1953525167, default 1953525167):
Using default value 1953525167
Partition 1 of type Linux and of size 931.5 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Duplicate the sdb partition table to sdc and sdd. You can re-run fdisk two more times, or just run this:
# sfdisk -d /dev/sdb | sfdisk /dev/sdc; sfdisk -d /dev/sdb | sfdisk /dev/sdd
Verify the partition layout
Partitions should look similar to the following:
# fdisk -l /dev/sd[bcd] | grep -A 1 Device
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  1953525167   976761560   fd  Linux raid autodetect
--
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  1953525167   976761560   fd  Linux raid autodetect
--
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  1953525167   976761560   fd  Linux raid autodetect
Creating an array
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
verify the creation
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.1% (1143808/976630272) finish=312.6min speed=51991K/sec
If you see something similar to the above, you are OK. The recovery = xx.x% line is the initial build of the parity data; during this first sync the array reports one member as not yet in sync ([3/2] [UU_]), which is normal for a freshly created raid 5, and the array is already usable while it runs.
if you care to watch / monitor the sync in a terminal:
# while true; do grep recovery /proc/mdstat; sleep 300; done
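Alternatively, watch(1) gives a self-refreshing view; the -d flag highlights what changed between updates:

# watch -d cat /proc/mdstat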
create and mount filesystem
# mkfs.ext4 /dev/md0
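Optionally, you can pass the raid geometry to ext4 so its allocation aligns with the stripes. With the 512K chunk from above and 4K filesystem blocks, stride = 512/4 = 128, and with two data disks (three raid devices minus one parity) stripe-width = 128 * 2 = 256. A sketch, assuming those defaults:

# mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0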
# mkdir -p /mnt/mysuperraidvolume
# mount /dev/md0 /mnt/mysuperraidvolume
add the /etc/fstab entry line to mount on boot (the mount point must match the one used above)
/dev/md0 /mnt/mysuperraidvolume ext4 defaults 0 0
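Note that md device numbering is not stable across reboots unless the array is recorded in mdadm's config file, which is why the array shows up as /dev/md127 in the next section. To pin the name, append the scan output to the config file (/etc/mdadm.conf on some distributions, /etc/mdadm/mdadm.conf on others):

# mdadm --detail --scan >> /etc/mdadm.conf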
checking status
# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Tue Aug  5 07:35:04 2014
     Raid Level : raid5
     Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
  Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Jun 11 16:37:45 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nasser:0  (local to host nasser)
           UUID : 3733be79:728f962f:ffe10ec0:ce451bb3
         Events : 28933

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
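For a quick health check at any time, /proc/mdstat is sufficient: a healthy three-disk array shows [3/3] [UUU], while an underscore in place of a U marks a missing or failed member:

# cat /proc/mdstat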
Links
This guide is recommended by the linux kernel help: [Software RAID HOWTO]