What is RAID?
RAID stands for Redundant Array of Independent (or Inexpensive) Disks. It is a storage technology that combines multiple disk drives into a single logical unit. Data is distributed across the drives in one of several ways, called "RAID levels", depending on the level of redundancy and performance required. RAID is mainly used for data protection: it guards stored data against drive failures and data loss. Most enterprise storage systems today use RAID in some form.
It has the following uses:
1. Data protection
2. Increasing performance
Types of RAID:
There are many RAID levels, but the main ones are:
1. Level 0 or Striping
2. Level 1 or Mirroring
3. Level 5 or Striping + Parity
Level 0:
It is also known as striping. A hard disk is a block device: data is read from and written to it in blocks.
Suppose we have the data block below:
1 0 1 1
Suppose writing each bit to a disk takes one CPU clock cycle. In total, writing this block takes 4 clock cycles.
With striping:
Striping uses "N" hard disks. RAID divides each data block into "N" parts and writes the parts to the disks in parallel.
With 4 hard disks, the same block takes only one clock cycle to write under RAID level 0.
RAID 0 is best where writes dominate reads, but it is insecure: there is no redundancy, so a single disk failure loses data.
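The split described above can be sketched in a few lines of Python. This is a toy model only: the real md driver stripes fixed-size chunks (e.g. 64 KiB), not individual bits.

```python
# Toy model of RAID 0 striping: a data block is split across N disks
# so the parts can be written in parallel. Real md striping works on
# fixed-size chunks, not single bits.

def stripe(data, n_disks):
    """Distribute the items of `data` round-robin across n_disks."""
    disks = [[] for _ in range(n_disks)]
    for i, item in enumerate(data):
        disks[i % n_disks].append(item)
    return disks

block = [1, 0, 1, 1]        # the 4-bit block from the text
disks = stripe(block, 4)    # 4 disks -> 1 bit per disk
print(disks)                # [[1], [0], [1], [1]]
# Each disk writes one bit, so the whole block completes in one "cycle".
```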
Level 1:
Also known as mirroring. One disk is an exact replica of the other: whatever is written to the master disk is also written to the mirror disk. Reads can be served from either disk simultaneously, which improves read performance.
However, only 50% of the total capacity is usable.
Level 5:
It is a combination of striping and parity, and needs at least three hard disks. Both data and parity are distributed across all the disks. If one disk fails, its contents can be regenerated from the data and parity information on the remaining disks.
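The regeneration step can be illustrated with XOR, which is what RAID 5 parity is based on. A minimal sketch, using single integers in place of the chunk-sized stripes a real array works on:

```python
# RAID 5 parity is the XOR of the data chunks in a stripe. If any one
# chunk is lost, XOR-ing the survivors (including parity) regenerates it.

d1, d2 = 0b1011, 0b0110     # data chunks on two disks
parity = d1 ^ d2            # parity chunk stored on the third disk

# The disk holding d1 fails; rebuild its chunk from the other two:
rebuilt = parity ^ d2
assert rebuilt == d1
print(bin(rebuilt))         # 0b1011
```

The same property is why losing two disks at once is fatal: with two unknowns, the single XOR equation can no longer be solved.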
Disk or partition requirements
RAID 0: needs 2 disks
RAID 1: needs 2 disks
RAID 5: needs 3 disks
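Usable capacity follows directly from these layouts. A quick sketch, assuming N equal-size disks:

```python
def usable_capacity(level, n_disks, disk_size):
    """Usable space for equal-size disks at the three main RAID levels."""
    if level == 0:                    # striping: all space is usable
        return n_disks * disk_size
    if level == 1:                    # mirroring: one copy's worth
        return disk_size
    if level == 5:                    # one disk's worth goes to parity
        return (n_disks - 1) * disk_size
    raise ValueError("unsupported level")

# Three 5 GiB devices, as in the RAID 5 example below:
print(usable_capacity(5, 3, 5))   # 10 -> matches the 10 GiB array size
print(usable_capacity(1, 2, 5))   # 5  -> the 50% figure for RAID 1
```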
Implementation of RAID level 5
Here we show how to create a level 5 RAID device using three partitions: /dev/sdb, /dev/sdc and /dev/sdd. Keep in mind that in production these would be three separate hard disks.
The following command creates a RAID 5 device /dev/md0:
#mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd{b,c,d}
mdadm: array /dev/md0 started.
[root@server ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Sat Mar 3 19:21:31 2012
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[3] sdc[1] sdb[0]
10485632 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[======>..............] recovery = 33.5% (1759128/5242816) finish=1.4min speed=39024K/sec
unused devices: <none>
Status when finished:
[root@server ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Sat Mar 3 19:23:27 2012
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[2] sdc[1] sdb[0]
10485632 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
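The `[3/3] [UUU]` fields above show raid-devices/active-devices and a per-slot up/down map (`U` = up, `_` = down). A small helper to read them, written for the output format shown here (field positions can differ across kernel versions):

```python
import re

def mdstat_health(line):
    """Parse the '[n/m] [UU_]' fields of a /proc/mdstat status line."""
    m = re.search(r"\[(\d+)/(\d+)\] \[([U_]+)\]", line)
    total, active, disk_map = int(m.group(1)), int(m.group(2)), m.group(3)
    return {"degraded": active < total,
            "failed_slots": [i for i, c in enumerate(disk_map) if c == "_"]}

print(mdstat_health("10485632 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]"))
# {'degraded': False, 'failed_slots': []}
print(mdstat_health("10485632 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]"))
# {'degraded': True, 'failed_slots': [2]}
```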
To see the details of the created RAID device:
[root@server ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar 3 19:20:43 2012
Raid Level : raid5
Array Size : 10485632 (10.00 GiB 10.74 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar 3 19:23:18 2012
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
Events : 0.2
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
[root@server ~]#
You must create a configuration file for the RAID device built by mdadm; otherwise it won't survive a reboot.
[root@server ~]# mdadm --detail --verbose --scan >> /etc/mdadm.conf
[root@server ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=4e4f2828:3bbe8227:61435180:8ac962cf
devices=/dev/sdb,/dev/sdc,/dev/sdd
[root@server ~]#
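The ARRAY line is what mdadm reads at assembly time to match superblock UUIDs to device names. A quick sketch of pulling its fields apart, assuming the one-line key=value format shown above:

```python
def parse_array_line(line):
    """Split an mdadm.conf ARRAY line into the device and its key=value options."""
    parts = line.split()
    assert parts[0] == "ARRAY"
    device = parts[1]
    opts = dict(p.split("=", 1) for p in parts[2:])
    return device, opts

line = ("ARRAY /dev/md0 level=raid5 num-devices=3 "
        "UUID=4e4f2828:3bbe8227:61435180:8ac962cf")
dev, opts = parse_array_line(line)
print(dev, opts["level"], opts["num-devices"])   # /dev/md0 raid5 3
```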
Formatting the RAID device
[root@server ~]# mke2fs -j /dev/md0
or
[root@server ~]# mkfs.ext3 /dev/md0
Creating a mount point
#mkdir /data
Mounting the RAID device on the created mount point
#mount /dev/md0 /data
Making the mount permanent by adding it to fstab
#vi /etc/fstab
/dev/md0 /data ext3 defaults 0 0
:wq
#mount -a
Now we will create a file of about 100 MB in /data.
[root@server data]# dd if=/dev/zero of=bf bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 1.85005 seconds, 55.3 MB/s
[root@server data]#
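The size dd reports is simply bs × count; checking the numbers above (note dd counts MB as 10^6 bytes):

```python
bs, count = 1024, 100000    # block size and block count passed to dd
total = bs * count
print(total)                # 102400000 bytes, as reported by dd
print(total / 10**6)        # 102.4 -> the "102 MB" in dd's summary line
```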
Next, let's simulate a disk failure by marking /dev/sdc as faulty.
[root@server data]# mdadm /dev/md0 --fail /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
[root@server data]#
Now the status is:
root@server ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar 3 19:20:43 2012
Raid Level : raid5
Array Size : 10485632 (10.00 GiB 10.74 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar 3 19:56:08 2012
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
Events : 0.6
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 0 0 1 removed
2 8 48 2 active sync /dev/sdd
3 8 32 - faulty spare /dev/sdc
[root@server ~]#
Now we can remove the faulty disk with:
[root@server data]# mdadm /dev/md0 --remove /dev/sdc
mdadm: hot removed /dev/sdc
[root@server data]#
Now the status is:
[root@server ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar 3 19:20:43 2012
Raid Level : raid5
Array Size : 10485632 (10.00 GiB 10.74 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar 3 20:04:16 2012
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
Events : 0.8
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 0 0 1 removed
2 8 48 2 active sync /dev/sdd
[root@server ~]#
Now we add a new disk as a replacement for the one we removed.
[root@server data]# mdadm /dev/md0 --add /dev/sde
mdadm: added /dev/sde
[root@server data]#
Watch the data being rebuilt from the data and parity information on the other two disks.
[root@server ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Sat Mar 3 20:08:19 2012
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde[3] sdd[2] sdb[0]
10485632 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
[===>.................] recovery = 18.9% (994220/5242816) finish=1.8min speed=38239K/sec
unused devices: <none>
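The finish estimate in that output is just the remaining blocks divided by the current speed. Checking with the numbers shown (blocks are 1K here, speed is in K/sec):

```python
done, total = 994220, 5242816    # blocks recovered / total, from mdstat
speed = 38239                    # K/sec, and each block is 1K

minutes = (total - done) / speed / 60
print(round(minutes, 2))         # ~1.85, close to the "finish=1.8min" shown
```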
Also check
[root@server data]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar 3 19:20:43 2012
Raid Level : raid5
Array Size : 10485632 (10.00 GiB 10.74 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar 3 20:04:16 2012
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 77% complete
UUID : 4e4f2828:3bbe8227:61435180:8ac962cf
Events : 0.8
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
3 8 64 1 spare rebuilding /dev/sde
2 8 48 2 active sync /dev/sdd
[root@server data]#
This is how we can create a RAID device with level 1:
#mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda{5,6}
And this is how we can create a RAID device with level 0:
#mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda{5,6}
Stopping mdadm
*Unmount /dev/md0 before stopping mdadm
[root@server ~]# umount /data/
[root@server ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@server ~]#
If you want to create additional devices (i.e. a /dev/md0 already exists), you may need to add the "-a yes" option to the mdadm command so the new device node is created.
For example,
#mdadm --create /dev/md1 -a yes --level=0 --raid-devices=2 /dev/sda{5,6}
Adding a spare disk
We can also specify a spare device when creating the RAID array.
[root@server ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sd{b,c,d}
See:
[root@server ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar 3 20:36:58 2012
Raid Level : raid1
Array Size : 5242816 (5.00 GiB 5.37 GB)
Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 2
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Mar 3 20:37:25 2012
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
UUID : 17b820a4:61fc941e:267bf6c8:8adff61a
Events : 0.2
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 - spare /dev/sdd
[root@server ~]#