RAID

RAID, which stands for Redundant Array of Inexpensive (or Independent) Disks, is a technology employed to enhance both the performance and reliability of data storage systems.

Through the implementation of RAID, the system gains the ability to recover data in the event of a disk failure by retrieving it from other disks within the array.

RAID technology proves highly advantageous for system administrators and individuals tasked with managing substantial volumes of data.

The key reasons to opt for RAID include:

Enhanced Data Reliability: RAID configurations offer increased data reliability, reducing the risk of data loss due to disk failures.

Improved Performance: RAID setups often result in improved data access speeds and overall system performance, which is particularly valuable for systems dealing with extensive data processing.

Fault Tolerance: RAID arrays provide fault tolerance, ensuring uninterrupted data availability even when individual disks fail.

Scalability: RAID technology allows for easy scalability, enabling users to expand storage capacity as needed.

Cost-Efficiency: RAID arrays can be constructed using inexpensive disks, making it a cost-effective solution for data storage needs.

The most common RAID levels are:

  • RAID 0 – striping (commonly used)
  • RAID 1 – mirroring (commonly used)
  • RAID 5 – striping with parity (commonly used)
  • RAID 6 – striping with double parity (less common)
  • RAID 10 – combining mirroring and striping (commonly used)

RAID level 0 – Striping

Striping is a technique that distributes data in fixed-size chunks across multiple disks, so each disk contains only a portion of the data. When using two disks, for example, roughly half of the data will be stored on each disk.

RAID level 0, known as "striping," operates on this principle. In RAID 0, data is divided into blocks and written as stripes across the disks. While RAID 0 can deliver high I/O performance, it does not provide any redundancy. The size of a RAID 0 array is equal to the sum of the capacities of the disks in the array. It's important to note that if one drive in a RAID 0 configuration fails, all data in the array is lost.
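The chunk placement described above can be sketched with a little shell arithmetic. This is a toy model for illustration, not part of mdadm: logical chunk n lands on member disk n mod N, in stripe row n / N.

```shell
# Toy model of RAID 0 chunk placement (illustration only, not an mdadm feature):
# chunk n goes to member disk (n % num_disks), in stripe row (n / num_disks).
raid0_disk()   { echo $(( $1 % $2 )); }   # args: chunk_number num_disks
raid0_stripe() { echo $(( $1 / $2 )); }   # args: chunk_number num_disks

# With 2 disks: chunks 0 and 1 fill stripe row 0, chunks 2 and 3 fill row 1.
raid0_disk 2 2     # chunk 2 lands back on disk 0
raid0_stripe 2 2   # ...in stripe row 1
```

This also makes clear why losing one drive destroys the whole array: every second chunk of any large file lives on the missing disk.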

Advantages

  • RAID 0 offers great performance in both read and write operations. There is no overhead caused by parity calculations.
  • All storage capacity is used; there is no capacity overhead.
  • The technology is easy to implement.

Disadvantages

  • RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be used for mission-critical systems.

Configuration in RAID 0:

1) First I will add one raw disk to the system. On this disk I will create a PV, a VG, and two logical volumes.

[root@rams ~]# pvcreate /dev/sdb
 Physical volume "/dev/sdb" successfully created
[root@rams ~]# vgcreate srinuvg /dev/sdb
 Volume group "srinuvg" successfully created
[root@rams ~]# lvcreate -L +1G -n srinulv1 srinuvg
 Logical volume "srinulv1" created
[root@rams ~]# lvcreate -L +1G -n srinulv2 srinuvg
 Logical volume "srinulv2" created
[root@rams ~]# pvs
  PV         VG      Fmt  Attr PSize  PFree
  /dev/sdb   srinuvg lvm2 a--  30.00g 28.00g

2) Now I need to create the RAID 0 array from the two logical volumes using the following command:

[root@rams ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/srinuvg/srinulv1 /dev/srinuvg/srinulv2
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.        

3) To see the information after creating the RAID 0 array, use the following commands:

[root@rams ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 dm-1[1] dm-0[0]
      2096128 blocks super 1.2 512k chunks

unused devices: <none>

[root@rams ~]# mdadm -D /dev/md0

4) After that I need to make a file system, mount it on a mount point, and make the mount permanent in the /etc/fstab configuration file:

[root@rams ~]# mkfs.ext3 /dev/md0

[root@rams ~]# mkdir /stripped

[root@rams ~]# mount /dev/md0 /stripped

[root@rams ~]# vi /etc/fstab

/dev/md0           /stripped             ext3       defaults       0       0

[root@rams ~]# df -hP
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              16G  2.5G   13G  17% /
tmpfs                 238M   80K  238M   1% /dev/shm
/dev/sda1             291M   37M  240M  14% /boot
/dev/sr0              3.5G  3.5G     0 100% /media/RHEL_6.4 x86_64 Disc 1
/dev/md0              2.0G   36M  1.9G   2% /stripped

5) Growing the array.

If I want to increase the size of the array by adding a new device, the following command can be used:

[root@rams ~]# mdadm --grow --raid-devices=4 /dev/md0

A device can also be added and the array grown in one step; the new device is first attached as a spare:

[root@rams ~]# mdadm --grow --raid-devices=3 /dev/md0 --add /dev/srinuvg/srinulv9

Marking a RAID device as faulty and removing it from the array:

[root@rams ~]# mdadm --manage /dev/md0 --fail /dev/srinuvg/srinulv7
[root@rams ~]# mdadm --manage /dev/md0 --remove /dev/srinuvg/srinulv7

Replace a RAID device with a specific disk:

[root@rams ~]# mdadm --manage /dev/md0 --replace /dev/srinuvg/srinulv1 --with /dev/srinuvg/srinulv2

Add a device to the RAID array

You will typically add a new device when replacing a faulty one, or when you have a spare part that you want to have handy in case of a failure:

[root@rams ~]# mdadm --manage /dev/md0 --add /dev/srinuvg/srinulv5

To increase or decrease the file system:

[root@rams ~]# resize2fs /dev/md0

After growing (or shrinking) the array, resize2fs resizes the ext3 file system on /dev/md0 to match the new array size.

RAID level 1 – Mirroring

Mirroring is used in RAID 1 and RAID 10. Mirroring means keeping an identical copy of the same data: in RAID 1, every write is saved to the other disk as well.

RAID Level 1 is based on the mirroring technique. Level 1 provides redundancy by writing identical data to each member disk of the array. The storage capacity of a level 1 array is equal to the capacity of one of the mirrored hard disks in a hardware RAID, or one of the mirrored partitions in a software RAID. If a drive fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 drives for a RAID 1 array. In RAID 1, write speed is slightly slower than a single disk, but read speed is good.
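As a toy illustration of the mirroring idea, using plain files in /tmp standing in for disks (hypothetical paths, not real block devices): every write goes to both copies, so either copy alone can serve reads after a failure.

```shell
# Toy mirroring demo: two files stand in for the two member disks.
echo "important data" > /tmp/diskA   # write to the first "disk"
cp /tmp/diskA /tmp/diskB             # the mirror write to the second "disk"
rm /tmp/diskA                        # simulate losing one drive
cat /tmp/diskB                       # the surviving mirror still has the data
```

This is also why a RAID 1 rebuild is a plain copy: the replacement disk just receives a full copy of the surviving mirror.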

Advantages

  • RAID 1 offers excellent read speed and a write speed comparable to that of a single drive.
  • If a drive fails, data do not have to be rebuilt; they just have to be copied to the replacement drive.
  • RAID 1 is a very simple technology.

Disadvantages

  • The main disadvantage is that the effective storage capacity is only half of the total drive capacity because all data get written twice.

Configuration:

[root@rams ~]# lvcreate -L +1G -n srinulv3 srinuvg
 Logical volume "srinulv3" created

[root@rams ~]# lvcreate -L +1G -n srinulv4 srinuvg
 Logical volume "srinulv4" created

[root@rams ~]# mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/srinuvg/srinulv3 /dev/srinuvg/srinulv4

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

[root@rams ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md10 : active raid1 dm-3[1] dm-2[0]
      1048000 blocks super 1.2 [2/2] [UU]
md0 : active raid0 dm-1[1] dm-0[0]
      2096128 blocks super 1.2 512k chunks

To check the full details of the array, use mdadm -D /dev/md10.

RAID level 5 – Distributed parity

RAID 5 is the most common secure RAID level. It requires at least 3 drives but can work with up to 16. Data blocks are striped across the drives, and for each stripe a parity block is written. The parity data are not written to a fixed drive; they are spread across all drives. Using the parity data, the computer can recalculate the data of any one of the other blocks, should those data no longer be available. That means a RAID 5 array can withstand a single drive failure without losing data or access to data.
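The parity recovery described above boils down to XOR: the parity block is the XOR of the data blocks in a stripe, so a lost block can be recomputed from the parity and the remaining blocks. A minimal sketch with shell arithmetic, using toy single-byte values rather than real disk blocks:

```shell
# Toy RAID 5 parity over one stripe with two data "blocks" (example byte values).
d1=202; d2=119
parity=$(( d1 ^ d2 ))        # parity written alongside the stripe

# The drive holding d1 fails: rebuild it from parity and the surviving block.
recovered=$(( parity ^ d2 ))
echo $recovered              # -> 202, identical to the lost d1
```

The same XOR trick works for any number of data blocks per stripe, which is why RAID 5 loses only one disk's worth of capacity regardless of array size.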

Advantages

  • Read data transactions are very fast while write data transactions are somewhat slower (due to the parity that has to be calculated).
  • If a drive fails, you still have access to all data, even while the failed drive is being replaced and the storage controller rebuilds the data on the new drive.

Disadvantages

  • Drive failures have an effect on throughput, although this is still acceptable.

Configuration:

[root@rams ~]# lvcreate -L +1G -n srinulv5 srinuvg
Logical volume "srinulv5" created
[root@rams ~]# lvcreate -L +1G -n srinulv6 srinuvg
Logical volume "srinulv6" created
[root@rams ~]# lvcreate -L +1G -n srinulv7 srinuvg
Logical volume "srinulv7" created
[root@rams ~]# mdadm --create /dev/md5 --level=raid5 --raid-devices=3 /dev/srinuvg/srinulv5 /dev/srinuvg/srinulv6 /dev/srinuvg/srinulv7        

RAID level 10 – combining RAID 1 & RAID 0

It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. This is a nested or hybrid RAID configuration. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers.

Advantages

  • If something goes wrong with one of the disks in a RAID 10 configuration, the rebuild time is very fast, since all that is needed is copying the data from the surviving mirror to a new drive. This can take as little as 30 minutes for 1 TB drives.

Disadvantages

  • Half of the storage capacity goes to mirroring, so compared to large RAID 5 or RAID 6 arrays, this is an expensive way to have redundancy.
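The capacity trade-off in that bullet is easy to compute. A sketch with assumed example sizes (4 disks of 1000 GB each): n disks in RAID 10 yield n/2 disks of usable space, versus n-1 disks for RAID 5.

```shell
# Usable capacity comparison for 4 x 1000 GB disks (example numbers, assumed):
disks=4; size_gb=1000
raid10_gb=$(( disks / 2 * size_gb ))    # mirroring halves the raw capacity
raid5_gb=$(( (disks - 1) * size_gb ))   # one disk's worth goes to parity
echo "RAID 10: ${raid10_gb} GB usable, RAID 5: ${raid5_gb} GB usable"
```

With these example sizes, RAID 10 yields 2000 GB usable against 3000 GB for RAID 5, which is the cost of the faster rebuilds and better write performance.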

The most important question on this topic is:

*** How would you replace a failed disk in RAID 0, RAID 1, or RAID 5?

The answer: first I need to check whether the disk has failed, and also whether a spare disk is available, using the command below:

# mdadm -D /dev/mdX

If it has failed, remove that disk using:

# mdadm --manage /dev/md0 --remove /dev/vg/lv (or /dev/sdb)

After that, add a spare disk or new disk to the array using the command below:

# mdadm --grow --raid-devices=3 /dev/md5 --add /dev/srinuvg/srinulv
