RAID is a technology that combines multiple hard drives into a single logical drive. Several RAID levels exist; RAID 1 uses two drives and keeps identical copies of the data on both, which is why it is also called RAID mirroring. With RAID 1, if one disk fails the system keeps running from the second disk, because both disks hold exactly the same data. The downside of RAID 1 is capacity: with two 500GB disks, the usable space after the RAID 1 setup is still only 500GB. In short, we can only use 50% of the total disk space available.
RAID can be set up either in hardware or in software. Hardware RAID is done through a RAID controller device (a PCIe card) attached to the server motherboard, while software RAID is handled at the operating system level. Linux has built-in support for software RAID and can set it up at OS installation time.
However, suppose we already have a functional CentOS 7 operating system installed without RAID, and we would like to attach an extra disk and convert this existing CentOS 7 install to RAID 1. That is what we are discussing in this blog article: converting an existing single-disk CentOS 7 system into a two-disk RAID 1 system without losing data or reinstalling the system. RAID 1 produces a mirrored set, which can tolerate a single disk failure. The GRUB2 bootloader is also configured in such a way that the system will still be able to boot if either of the hard drives fails.
Before starting, let's get familiar with our current system configuration. Feel free to refer to this article whether you run CentOS 7, CentOS 6 or even the newer CentOS 8.
We are adding a second, identical disk, /dev/sdb, for the RAID 1 setup. The RAID will be a Linux software RAID managed by "mdadm". If the "mdadm" command is not available, install it with yum as shown below. Attach the new disk to the server motherboard before proceeding with the steps that follow.
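For example:

```bash
# Install the mdadm software RAID management tool
yum install -y mdadm
```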
Now let's get started. Create a partition scheme on /dev/sdb identical to the current disk /dev/sda.
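A minimal sketch for MBR/DOS-labelled disks, cloning the partition table with sfdisk (for GPT disks a tool such as sgdisk would be used instead):

```bash
# Dump /dev/sda's partition table and replay it onto /dev/sdb
sfdisk -d /dev/sda | sfdisk /dev/sdb
```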
Check the changes, or compare the two disks, using the command below.
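For example:

```bash
# List and compare the partition layouts of both disks
fdisk -l /dev/sda /dev/sdb
```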
Change the partition type to "Linux raid autodetect". To do so, run "fdisk" against the new disk.
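A sketch of the interactive fdisk session (repeat the type change for every partition on /dev/sdb):

```bash
fdisk /dev/sdb
#   t  -> select the partition number
#   fd -> Linux raid autodetect
#   w  -> write the changes and exit
```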
Verify the result with the command below.
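For example:

```bash
# The "System" column should now read "Linux raid autodetect" for each partition
fdisk -l /dev/sdb
```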
Now we are going to create a degraded RAID 1, meaning we create the RAID 1 devices in a degraded state because one disk is missing: /dev/sda, which currently holds the OS and is still used to boot.
In our case we have three partitions, so we do this for every partition on the new disk /dev/sdb, as shown in the sketch below.
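A minimal sketch of the array creation, assuming /dev/sdb1, /dev/sdb2 and /dev/sdb3 mirror the layout of /dev/sda (the md device numbering and the partition-to-mountpoint mapping are assumptions; adjust them to your system):

```bash
# Create RAID1 arrays with the second member marked "missing";
# the running /dev/sda partitions will be added later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3
```

Some guides prefer --metadata=1.0 for the array holding /boot; GRUB2 with the mdraid1x module can also boot from the default 1.2 metadata.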
Check the result with:
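For example:

```bash
cat /proc/mdstat
mdadm --detail /dev/md0
```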
Create the file systems on the new RAID device partitions.
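A sketch, assuming an XFS root and /boot plus a swap partition, with /dev/md0 for /boot, /dev/md1 for swap and /dev/md2 for / (this mapping is an assumption; match it to your own layout):

```bash
mkfs.xfs /dev/md0    # /boot
mkswap   /dev/md1    # swap
mkfs.xfs /dev/md2    # /
```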
Now we need to manually replicate the existing data on the /dev/sda partitions to the new software RAID partitions. For that, mount both the new / and /boot.
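For example, keeping the same hypothetical device mapping as above:

```bash
# Mount the new root first, then the new /boot inside it
mount /dev/md2 /mnt
mkdir -p /mnt/boot
mount /dev/md0 /mnt/boot
```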
Copy the existing data using the rsync command.
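One possible invocation (the exact options and exclusions are an assumption and may differ from what was originally run):

```bash
# -a archive, -A ACLs, -X extended attributes, -H hard links, -v verbose;
# skip pseudo-filesystem contents and the target mount point itself
rsync -aAXHv --exclude={"/proc/*","/sys/*","/dev/*","/mnt/*"} / /mnt/
```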
During the rsync we got errors like the ones below.
Similar errors have been reported as a bug at the link below.
https://bugzilla.redhat.com/show_bug.cgi?id=873467
What I understood from the error is that rsync was trying to synchronise the attributes on the destination partition to match those on the source partition by removing "extra" attributes, but rsync has no privilege to do so, even when run as root.
In my case the permission-denied errors were for files inside the /run folder. As I understand it, /run is usually implemented as a tmpfs ( mount | fgrep run ), i.e. a temporary in-memory filesystem that stores volatile runtime data, and its contents do not survive a reboot (which is a good thing).
So I ignored the rsync errors and proceeded with the next steps.
Now mount the system information filesystems (/dev, /proc and /sys) inside /mnt.
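A sketch of the bind mounts usually needed before chrooting:

```bash
# Expose the running kernel's device, process and sysfs trees inside /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
mount --bind /run  /mnt/run
```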
Chroot into /mnt, which now holds the copy on the new /dev/sdb disk.
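For example:

```bash
chroot /mnt /bin/bash
```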
Edit fstab with the new drives' UUID information.
Open the /etc/fstab file and update the entries as shown below.
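Inside the chroot, blkid reports the UUIDs of the new arrays; the fstab lines below are only illustrative, with placeholder UUIDs and the same assumed device-to-mountpoint mapping as above:

```bash
# Look up the UUIDs of the new md devices
blkid /dev/md0 /dev/md1 /dev/md2
```

```
# /etc/fstab (placeholder UUIDs)
UUID=<uuid-of-md2>   /       xfs    defaults   0 0
UUID=<uuid-of-md0>   /boot   xfs    defaults   0 0
UUID=<uuid-of-md1>   swap    swap   defaults   0 0
```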
Now create an mdadm.conf from the current RAID configuration:
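For example:

```bash
# Capture the current array definitions into mdadm.conf
mdadm --detail --scan > /etc/mdadm.conf
```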
Back up the current initrd and rebuild the initramfs with the new mdadm.conf:
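A sketch using dracut, the initramfs tool on CentOS 7:

```bash
# Keep a backup of the current initramfs, then rebuild it so it includes mdadm.conf
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
```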
Now edit the GRUB defaults and add the parameters needed for booting from the RAID array.
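The exact parameters used here were shown in a screenshot; one commonly used combination in /etc/default/grub for md RAID booting looks like this (illustrative only, keep your existing GRUB_CMDLINE_LINUX options):

```bash
# /etc/default/grub (additions are illustrative)
GRUB_CMDLINE_LINUX="<existing options> rd.auto=1"
GRUB_PRELOAD_MODULES="mdraid1x"
```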
Generate a new GRUB configuration.
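On a BIOS-booted CentOS 7 system this is:

```bash
grub2-mkconfig -o /boot/grub2/grub.cfg
```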
Install GRUB on the new disk /dev/sdb.
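For example:

```bash
grub2-install /dev/sdb
```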
At this point, reboot the system and choose the new disk /dev/sdb from the BIOS boot menu. If everything worked out, the system will boot from the new disk. After that, check the mount points and the RAID status using the commands below.
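For example:

```bash
df -h              # verify / and /boot are mounted from the md devices
cat /proc/mdstat   # RAID status (arrays stay degraded until the old disk is added)
lsblk              # overall disk / partition / md layout
```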
In our case we switched the disk ports, so after the reboot the new disk became /dev/sda and the old disk became /dev/sdb. Don't get confused by the screenshot results: if you chose to boot from the new disk in the BIOS instead of swapping the disk ports, the mdstat output will still show the new disk as /dev/sdb.
Now we need to add the old disk to the RAID array, so change its partition type to "Linux raid autodetect" as well.
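This is the same fdisk step as before, just run against the old disk (here assumed to now be enumerated as /dev/sdb):

```bash
fdisk /dev/sdb
#   t  -> select each partition, set type "fd" (Linux raid autodetect)
#   w  -> write the changes
```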
Now add the old disk to the RAID 1 array (in our case it is /dev/sdb).
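A sketch, keeping the assumed partition numbering used earlier:

```bash
# Add the old disk's partitions as the second member of each mirror
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
mdadm --manage /dev/md2 --add /dev/sdb3
```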
Check the rebuild status with the command below and confirm it is running.
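For example:

```bash
cat /proc/mdstat            # shows recovery progress per array
# or keep it refreshing:
watch -n 5 cat /proc/mdstat
```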
Reinstall GRUB on the old disk (in our case /dev/sdb).
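For example:

```bash
grub2-install /dev/sdb
```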
That concludes the RAID 1 setup on an existing CentOS 7.4 install. Once the rebuild has completed, the mdstat output will look like the one below, confirming that the array is complete and running fine. Try rebooting again and check whether any issues come up. Put your suggestions and thoughts in the comment box.
Thank you very much, very useful!
If I am not mistaken, this does not apply to LVM2 installations of CentOS, correct?
Yes, you are correct, though not entirely. There are additional tasks and changes that need to be performed for LVM partitions.
Thank you very much for the excellent outline and understanding of the process. I particularly like the clever chroot step.
I’ve applied this process with only minor variations to migrate a Fedora 35 installation. The destination for the single drive was a pre-existing 2-drive RAID-1 array, which became a 3-drive array, with no loss during the transfer.
Total downtime was less than 10 minutes. Míle buíochas (a thousand thanks)!
Very useful guide.
Could you please explain how to do the same, but with an LVM2 installation?
In my setup /boot is on the sda1 partition and sda2 is LVM2 with 5 LVM members: /, /home, swap, /var/log and /opt.
Okay, I will try to create a blog article for that.
By the way, I’ve completed my task and successfully converted my setup from a single SSD with LVM2 partitions to a bootable RAID 1 pair of SSDs with LVM2 partitions.
Your article saved me a lot of time, so I’d like to thank you again and again!
And I can say that the trickiest part was exactly how to properly clone the data on the LVM2 partitions.
I think if you create an article about such a setup it will be extremely useful.
I learned a lot from your write-up even though I’m running Ubuntu 20.04 with UEFI, although I haven’t got a successful RAID 1 setup yet. My lack of understanding of booting in general isn’t helping here.
Would it be possible for you to show us how to build raid1 on a running Ubuntu system (with UEFI boot)?