How to Set Up Software RAID 1 on an Existing Linux Distribution

In this tutorial, we will talk about RAID and, specifically, walk through setting up software RAID 1 on a running Linux distribution.

What is RAID?

RAID stands for Redundant Array of Inexpensive Disks (also read as Redundant Array of Independent Disks). RAID combines multiple physical hard drives into a single logical drive. There are many RAID levels, such as RAID 0, RAID 1, RAID 5 and RAID 10.

Here we will discuss RAID 1, which is also known as disk mirroring. RAID 1 keeps identical copies of your data: with two hard drives in a RAID 1 array, every write goes to both drives, so the two drives always hold the same data.

The nice part about RAID 1 is that if one of your hard drives fails, your computer or server stays up and running, because you still have a complete, intact copy of the data on the other drive. You can pull the failed drive out while the computer is running, insert a new one, and the mirror will be rebuilt automatically.

The downside of RAID 1 is that you don’t get any extra disk space. If both of your hard drives are 1TB, the total usable capacity is 1TB instead of 2TB.

Hardware RAID vs Software RAID

To set up RAID, you can either use a hardware RAID controller or create the array in software. A hardware RAID controller is typically a PCIe card that you install in the computer and connect your hard drives to. When you boot the computer, the controller’s firmware offers an option to configure the RAID. Because the array exists below the operating system, you can install an operating system on top of hardware RAID, which can increase uptime.

Software RAID requires an operating system to be installed first, so it is best suited for storing data rather than for the boot drive.

Basic Steps to Create Software RAID 1 on Linux

  • First you need a Linux distribution installed on a hard drive, which we will call /dev/sda in this tutorial.
  • Then grab two hard drives, which will be named /dev/sdb and /dev/sdc in this post. The two drives can be of different sizes; the mirror will be limited by the smaller one. Remember to back up any existing data before formatting these drives.
  • Next, we will create Linux raid autodetect partitions on /dev/sdb and /dev/sdc.
  • And finally, we will create the RAID 1 array using the mdadm utility.

Step 1: Partition the Hard Drives

Insert the two hard drives into your Linux computer, then open a terminal window and run the following command to identify their device names.

sudo fdisk -l


You can see that mine are /dev/sdb and /dev/sdc.

Then run the following two commands to create a new MBR partition table on each of the two hard drives. (Note: this will wipe out all existing partitions and data on these two drives. Make sure your data is backed up.)

sudo parted /dev/sdb mklabel msdos

sudo parted /dev/sdc mklabel msdos

You can create a GPT partition table instead by replacing msdos with gpt. Note that drives of 2TB or larger require GPT; for smaller drives, this tutorial creates an MBR partition table for the sake of compatibility.

Next, use the fdisk command to create a new partition on each drive and set its type to Linux raid autodetect. First do this on /dev/sdb.

sudo fdisk /dev/sdb

Follow these instructions.

  1. Type n to create a new partition.
  2. Type p to select primary partition.
  3. Type 1 to create /dev/sdb1.
  4. Press Enter to choose the default first sector.
  5. Press Enter to choose the default last sector. This partition will span across the entire drive.
  6. Typing p will print information about the newly created partition. By default the partition type is Linux.
  7. We need to change the partition type, so type t.
  8. Enter fd to set partition type to Linux raid autodetect.
  9. Type p again to check the partition type.
  10. Type w to apply the above changes.


Follow the same instructions to create a Linux raid autodetect partition on /dev/sdc.
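If you prefer a non-interactive approach, the same dialogue can be fed to fdisk from a script. This is a sketch using this tutorial’s device names; it is just as destructive as the interactive version, so double-check the target devices first.

```shell
# Answers for fdisk, mirroring steps 1-10 above:
# n (new partition), p (primary), 1 (partition number),
# two empty lines (default first/last sector),
# t (change type), fd (Linux raid autodetect), w (write changes).
for dev in /dev/sdb /dev/sdc; do
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | sudo fdisk "$dev"
done
```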

Now we have two RAID member partitions: /dev/sdb1 and /dev/sdc1.

Step 2: Install mdadm

mdadm is used for managing MD (multiple devices) devices, also known as Linux software RAID.

Debian/Ubuntu:     sudo apt install mdadm

CentOS/Red Hat:    sudo yum install mdadm

SUSE:              sudo zypper install mdadm

Arch Linux:        sudo pacman -S mdadm

Let’s examine the two devices.

sudo mdadm --examine /dev/sdb /dev/sdc


You can see that both partitions have type fd (Linux raid autodetect). At this stage, there is no RAID superblock on /dev/sdb1 or /dev/sdc1 yet, which you can verify with this command.

sudo mdadm --examine /dev/sdb1 /dev/sdc1


Step 3: Create RAID 1 Logical Drive

Execute the following command to create RAID 1. The logical drive will be named /dev/md0.

sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1


Note: If you see this message: “Device or resource busy”, then you may need to reboot the OS.
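The “Device or resource busy” error often means the kernel auto-assembled a stale array from old RAID signatures on the partitions. As an alternative to rebooting, you can try stopping the stale array and wiping the old superblocks first. This is a sketch; the md127 name is an example of what auto-assembly typically produces, so check /proc/mdstat for the actual name. Zeroing superblocks is destructive.

```shell
# See which md arrays the kernel has assembled.
cat /proc/mdstat

# Stop the stale auto-assembled array (example name).
sudo mdadm --stop /dev/md127

# Wipe the old RAID superblocks so the partitions can be reused.
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1
```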

Now we can check it with:

cat /proc/mdstat


You can see that md0 is active and is a RAID 1 array. To get more detailed information about /dev/md0, use the following command:

sudo mdadm --detail /dev/md0


To obtain detailed information about each raid device, run this command:

sudo mdadm --examine /dev/sdb1 /dev/sdc1


Step 4: Create File System on the RAID 1 Logical Drive

Let’s format it to ext4 file system.

sudo mkfs.ext4 /dev/md0

Then create a mount point /mnt/raid1 and mount the RAID 1 drive.

sudo mkdir /mnt/raid1

sudo mount /dev/md0 /mnt/raid1

You can use this command to check how much disk space you have.

df -h /mnt/raid1


Step 5: Test

Now let’s go to /mnt/raid1 and create a text file.

cd /mnt/raid1

sudo nano raid1.txt

Write something like

This is raid 1 device.

Save and close the file. Next, pull one of the drives out of your computer and check the status of the RAID 1 device again.

sudo mdadm --examine /dev/sdb1 /dev/sdc1


You can see that /dev/sdc1 is not available. If we check /dev/md0, we can see that one RAID device is removed.

sudo mdadm --detail /dev/md0


However, the text file is still there.

cat /mnt/raid1/raid1.txt


To add the failed drive (in this case /dev/sdc1) back to the RAID, run the following command.

sudo mdadm --manage /dev/md0 --add /dev/sdc1

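Physically pulling a drive is not always practical. You can also simulate a failure in software with mdadm; the following is a sketch using this tutorial’s device names.

```shell
# Mark /dev/sdc1 as failed, then remove it from the array.
sudo mdadm --manage /dev/md0 --fail /dev/sdc1
sudo mdadm --manage /dev/md0 --remove /dev/sdc1

# Re-add it; the mirror will resync automatically.
sudo mdadm --manage /dev/md0 --add /dev/sdc1

# Watch the rebuild progress.
cat /proc/mdstat
```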

It’s very important to save our RAID 1 configuration with the command below.

sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf

Output:

ARRAY /dev/md/0 level=raid1 num-devices=2 metadata=1.2 spares=1 name=xenial:0 UUID=c7a2743d:f1e0d872:b2ad29cd:e2bee48c
      devices=/dev/sdb1,/dev/sdc1

On some Linux distributions, such as CentOS, the config file for mdadm is /etc/mdadm.conf instead of /etc/mdadm/mdadm.conf. After saving the configuration, you should regenerate the initramfs image so the array is assembled correctly at boot. On Debian/Ubuntu:

sudo update-initramfs -u

On CentOS/Red Hat, use dracut instead:

sudo dracut -f
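If the OS drive ever fails, the array is not lost: the RAID metadata lives in superblocks on the member partitions themselves. On a fresh system with mdadm installed, you can reassemble and remount the mirror; this is a sketch, and the device names may differ on the new machine.

```shell
# Scan all partitions for md superblocks and assemble any arrays found.
sudo mdadm --assemble --scan

# Or assemble explicitly from the known member partitions.
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

# Mount it as before.
sudo mkdir -p /mnt/raid1
sudo mount /dev/md0 /mnt/raid1
```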

To automatically mount the RAID 1 logical drive on boot time, add an entry in /etc/fstab file like below.

/dev/md0   /mnt/raid1   ext4   defaults   0   0

You may want to add the x-gvfs-show option, which lets you see your RAID 1 volume in the sidebar of your file manager.

/dev/md0  /mnt/raid1   ext4    defaults,x-gvfs-show   0   0
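Note that device names like /dev/md0 can change between boots (for example, to /dev/md127). A more robust fstab entry references the filesystem UUID instead; find it with blkid and substitute your own value, as the UUID below is a made-up example.

```shell
# Print the filesystem UUID of the array.
sudo blkid /dev/md0

# Then use it in /etc/fstab (example UUID, replace with yours):
# UUID=c2a3d4e5-1111-2222-3333-444455556666  /mnt/raid1  ext4  defaults,x-gvfs-show  0  0
```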

How to Remove the RAID

If you don’t want to use the RAID anymore, unmount it and stop the array, then wipe the RAID superblocks from the member partitions so they can be reused.

sudo umount /mnt/raid1

sudo mdadm --stop /dev/md0

sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1

Then edit the mdadm.conf file and comment out the RAID definition.

#ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 spares=1 name=bionic:0 UUID=76c80bd0:6b1fe526:90807435:99030af9
#  devices=/dev/sda1,/dev/sdb1

Also, edit /etc/fstab file and comment out the line that enables auto-mount of the RAID device.

Wrapping Up

I hope this tutorial helped you create software RAID 1 on Linux. As always, if you found this post useful, subscribe to our free newsletter or follow us on Google+, Twitter or like our Facebook page.


25 Responses to “How to Set Up Software RAID 1 on an Existing Linux Distribution”

  • Brandon Totel
    4 years ago

    I already have Debian installed to one 128GB SSD and I already have another 128GB SSD in the box as well. Can I create a Raid1 mirror using the current OS drive and the other blank drive or will this process wipe out the OS drive?

    • Xiao Guo-An (Admin)
      4 years ago

      This tutorial is not applicable to your situation, it will wipe out your OS drive.

  • Brandon Totel
    4 years ago

    Is there a way to enable RAID 1 if you already have the OS installed on one of the drives? I want to create a RAID 1 setup for my OS drive, but this tutorial will wipe that out, so how do you have the OS installed and set up RAID 1?

    • Menoo320
      10 months ago

      Have you found out how to do this? There is literally no guide on this use case.

      • Matthew Najmon
        6 months ago

        To boot off of a RAID, you need a RAID defined by a hardware RAID controller, not a software-defined one like this tutorial is for, because a RAID’s contents are not accessible without its RAID controller, a controller that takes the form of software running within the OS’s scope can’t start before the OS does, and you can’t boot an OS off of a resource that requires that OS to already be running before that resource becomes available.

        • khaleah mallory
          6 months ago

          so how was this raid built if it shows there is an OS in place?

        • Vinicius Fonseca
          6 months ago

          Not true. You can have OS booting on a soft RAID:
          https://help.ubuntu.com/community/Installation/SoftwareRAID

      • There’s a guide here on how to set up RAID 1 from scratch on a new install: https://feeding.cloud.geek.nz/posts/installing-ubuntu-bionic-on-encrypted-raid1/

        It’s fairly complicated but well explained.

        I would expect you could dd your system volume over to a disk image on a backup drive and then dd it back after you had rebuilt your system drive array using the above procedure. Though I guess your resulting filesystem will be slightly smaller than the original one so that might not entirely work.

  • David Tobar
    2 years ago

    Would like to know the steps to RAID 1 two external 4TB drives. Thanks

    • Ted Tibbetts
      3 months ago

      I just did this with 2 8TB USB3 drives. I followed the procedure as given except that I had to use `gpt` instead of `msdos` when issuing the boot record creation command with `parted`. This also meant that the filesystem type was `29` instead of `FD`.

  • Hi, great article, easy to read and follow. Thank you for this. Everything worked flawlessly, and I am quite proud of my self being able to follow Linux CLI instructions where everything works without any hiccups as I am new to Linux (ubuntu) and the learning curve is challenging for someone with zero programming knowledge.

    My question is now that my drives are working and they are mounted, how do I actually use them?
    When I try to open the raid 1 folder on /mnt, it says I have no permission.

    Also, instead of having to go to home/mnt/raid1, how to I create a shortcut so it shows up in file explorer on the left hand side (like my computer, downloads, pictures, or when you plug in a USB stick, it shows up on the left hand side )?

    And finally, how do I rename it from raid1 to something else like “1TB NAS RAID1”?

    • If you can’t access the raid1 folder, run the following command to grant read and write permission to your user account. Replace username with your real username.

      sudo setfacl -R -m u:username:rwx /mnt/raid1

      You can go to the raid 1 folder in your file explorer and press Ctrl+D to bookmark this folder.

      Once you bookmarked this folder, you can rename it by right-clicking the name on the left sidebar.

      • i tried
        sudo setfacl -R -m u:username:rwx /mnt/raid1

        got this in return
        setfacl: Option -m: Invalid argument near character 3

        checked man setfacl and -m is a valid argument, dont know why it didnt work
        also tried
        sudo setfacl -m u:username:rwx /mnt/raid1
        got
        setfacl: Option -m: Invalid argument near character 1

        also tried without -R, same result …. it doesnt like -m for some reason

    • Replace username with your real username.

  • I enjoyed your article but have one question. If I want to create a Raid 0 what would I change? I assume it would be at this stage:

    Step 3: Create RAID 1 Logical Drive
    Execute the following command to create RAID 1. The logical drive will be named /dev/md0.

    sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

  • Hi,

    Love the article.

    What if the OS crashes, how do you remount this RAID elsewhere?

    Thx

  • khaleah mallory
    6 months ago

    Hi,
    The article seems helpful and very detailed, but I notice that you state “Software RAID requires you already installed an operating system.” However, if I go forward with creating the raid it will wipe my OS drive? How were you able to create the RAIDs with an OS already installed. Was there a way you backed up the boot data or partition? Also in another comment you stated that creating the raid would wipe any existing OS. I see in the output from sudo fdisk -l that your sdb drive has a boot partition. So did it not wipe your whole drive after creating the raids? I have not found another article this detailed I feel I am closer to successfully creating the raids but this is a big hiccup. Please let me know any thoughts on this. Thank You!

    • Besides the OS drive, you need at least two other drives. The OS drive can’t be added to the RAID.

      My /dev/sdb in this article was a bootable USB stick with a live Linux system on it.

  • Hello,
    What should be used instead of update-initramfs in CentOS 6?

    Thanks!

  • Chuck H Sinclair
    5 months ago

    Hello,

    I used this to setup 3 4TB drives and it works great, and I have 3.6TB available. Thanks for this page and all you do.

    Chuck

  • Excellent tutorial – best I have seen.

    Having trouble though. Creation of MD0 does not survive reboot. I think I see reference to MD127. I can’t seem to be able to get the mirror assigned back to MD0 after a reboot, which hangs on trying to assign MD0. After I clear lines out of fstab and mdadm.conf, I get a boot, but cannot re-establish MD0. What am I missing?

  • Awesome, just perfect, thanks a million:)
    Curious if it is safe for me to delete the lost+found directory that is in the array as soon as it was created?
    Cheers

  • I tried to give this tutorial 5 stars but it crashed the browser. Three times. Weird.

    There is an easy way to restore a partition to pre-RAID condition: use Gparted, delete the partition, then re-make the partition in the normal way.

  • Great guide, very clear and detailed. One minor thing that might be an issue is that apparently partition type 0xFD is deprecated and it’s recommended to use 0xDA when creating RAID partitions under MBR (unless, as I understand it, you’re not using initrd booting, which I don’t think has happened in a very long time). I guess 0xFD can be confusing for live distros? There’s more information here: https://raid.wiki.kernel.org/index.php/Partition_Types

    It’s also worth mentioning that anyone using a 2TB or larger drive will have to use GPT. In `fdisk`, you’ll want to select partition type 29 as the codes are different for GPT.

    For anyone seeking further information, the Arch guide to RAID is very informative: https://wiki.archlinux.org/index.php/RAID#GUID_Partition_Table

    • JellyCock
      1 month ago

      Ted, this comment is gold. I was banging my head against the desk as I couldn’t figure out why the fd thing wouldn’t work. The author needs to update this.
