HTGWA: Create a RAID array in Linux with mdadm

This is a simple guide, part of a series I'll call 'How-To Guide Without Ads'. In it, I'm going to document how I create and mount a RAID array in Linux with mdadm.

In the guide, I'll create a RAID 0 array, but other types can be created by specifying the proper --level in the mdadm create command.
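
For example (a sketch only; the device names here are illustrative, and the actual RAID 0 command for this guide appears below), a two-drive RAID 1 mirror would be created with:

$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1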

Prepare the disks

You should have at least two drives set up and ready to go. Make sure you don't care about anything on them; they're gonna get erased. Also make sure you don't care about the integrity of the data you're going to store on the RAID 0 volume. RAID 0 is good for speed... and that's about it. If any drive fails, all your data's gone.

Note: Other guides, like this excellent one on the Unix StackExchange site, have a lot more detail. This is just a quick and dirty guide.

List all the devices on your system:

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  7.3T  0 disk 
└─sda1        8:1    0  7.3T  0 part /mnt/mydrive
sdb           8:16   0  7.3T  0 disk 
sdc           8:32   0  7.3T  0 disk 
sdd           8:48   0  7.3T  0 disk 
sde           8:64   0  7.3T  0 disk 
nvme0n1     259:0    0  7.3T  0 disk 
└─nvme0n1p1 259:1    0  7.3T  0 part /

I want to RAID together sda through sde (crazy, I know). I noticed that sda already has a partition and a mount. We should make sure all the drives that will be part of the array are partition-free:

$ sudo umount /dev/sda?; sudo wipefs --all --force /dev/sda?; sudo wipefs --all --force /dev/sda
$ sudo umount /dev/sdb?; sudo wipefs --all --force /dev/sdb?; sudo wipefs --all --force /dev/sdb
...

Do that for each of the drives. If you didn't realize it yet, this wipes everything. It doesn't zero the data, so technically it could still be recovered at this point!
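
If you actually want the old data to be unrecoverable (not required for anything in this guide), you could also overwrite each drive with zeros first, for example with dd. Note that this takes a very long time on large drives:

$ sudo dd if=/dev/zero of=/dev/sda bs=1M status=progress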

Check to make sure nothing's mounted (and make sure you have removed any of the drives you'll use in the array from /etc/fstab if you had persistent mounts for them in there!):

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  7.3T  0 disk 
sdb           8:16   0  7.3T  0 disk 
sdc           8:32   0  7.3T  0 disk 
sdd           8:48   0  7.3T  0 disk 
sde           8:64   0  7.3T  0 disk 
nvme0n1     259:0    0  7.3T  0 disk 
└─nvme0n1p1 259:1    0  7.3T  0 part /

Looking good, time to start building the array!

Partition the disks with sgdisk

You could interactively do this with gdisk, but I like more automation, so I use sgdisk. If it's not installed, and you're on a Debian-like distro, install it: sudo apt install -y gdisk.

sudo sgdisk -n 1:0:0 /dev/sda
sudo sgdisk -n 1:0:0 /dev/sdb
...

Do that for each of the drives.
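
If you'd rather not type the command once per drive, a small shell loop over the five drives in this example (adjust the device list for your own system) does the same thing:

for disk in /dev/sd[a-e]; do
  sudo sgdisk -n 1:0:0 "$disk"
done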

WARNING: Entering the wrong commands here will wipe data on your precious drives. You've been warned. Again.

Verify there's now a partition for each drive:

pi@taco:~ $ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  7.3T  0 disk 
└─sda1        8:1    0  7.3T  0 part 
sdb           8:16   0  7.3T  0 disk 
└─sdb1        8:17   0  7.3T  0 part 
sdc           8:32   0  7.3T  0 disk 
└─sdc1        8:33   0  7.3T  0 part 
sdd           8:48   0  7.3T  0 disk 
└─sdd1        8:49   0  7.3T  0 part 
sde           8:64   0  7.3T  0 disk 
└─sde1        8:65   0  7.3T  0 part 
...

Create a RAID 0 array with mdadm

If you don't have mdadm installed, and you're on a Debian-like system, run sudo apt install -y mdadm.

$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=5 /dev/sd[a-e]1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

You can specify different RAID levels with the --level option above. Certain levels require a minimum number of drives to work correctly (for example, RAID 5 needs at least three drives and RAID 6 at least four).
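
For example, a sketch of a RAID 5 array across the same five partitions (you lose one drive's worth of capacity to parity, and the initial sync takes a while):

$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sd[a-e]1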

Verify the array is working

For RAID 0, it should immediately show State : clean when running the command below. For other RAID levels, it may take a while to initially resync or do other operations.

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Nov 10 18:05:57 2021
        Raid Level : raid0
        Array Size : 39069465600 (37259.55 GiB 40007.13 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Wed Nov 10 18:05:57 2021
             State : clean 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : taco:0  (local to host taco)
              UUID : a5043664:c01dac00:73e5a8fc:2caf5144
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1

You can observe the progress of a rebuild (if you chose a level besides RAID 0, this will take some time) with watch cat /proc/mdstat. Press Ctrl-C to exit.
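
That is:

$ watch cat /proc/mdstat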

Persist the array configuration to mdadm.conf

$ sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf

If you don't do this, the RAID array won't come up after a reboot. That would be sad.
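
The appended line should look something like this (exact fields vary by mdadm version; the name and UUID here are taken from the --detail output above, and yours will differ):

ARRAY /dev/md0 metadata=1.2 name=taco:0 UUID=a5043664:c01dac00:73e5a8fc:2caf5144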

Format the array

$ sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done                            
Creating filesystem with 9767366400 4k blocks and 610461696 inodes
Filesystem UUID: 5d3b012c-e5f6-49d1-9014-1c61e982594f
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
    102400000, 214990848, 512000000, 550731776, 644972544, 1934917632, 
    2560000000, 3855122432, 5804752896

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done 

A note on the -E options: lazy_itable_init=1 and lazy_journal_init=1 (the defaults on current versions of mke2fs) tell ext4 to skip the (very) long upfront initialization of all the inode tables and the journal, and to finish that work in the background after the filesystem is first mounted. Setting them to 0, as above, forces the full initialization during mkfs instead. For large arrays, especially with brand new drives that you know aren't full of old files, there's no practical reason not to stick with the lazy defaults (at least, AFAICT), so you can set the options to 1 or simply omit -E.
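
For example, a minimal sketch that keeps the lazy defaults (same -m 0 reserved-blocks setting as above, just without the -E options):

$ sudo mkfs.ext4 -m 0 /dev/md0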

Mount the array

Checking on our array with lsblk now, we can see all the members of md0:

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  7.3T  0 disk  
└─sda1        8:1    0  7.3T  0 part  
  └─md0       9:0    0 36.4T  0 raid0 
sdb           8:16   0  7.3T  0 disk  
└─sdb1        8:17   0  7.3T  0 part  
  └─md0       9:0    0 36.4T  0 raid0 
sdc           8:32   0  7.3T  0 disk  
└─sdc1        8:33   0  7.3T  0 part  
  └─md0       9:0    0 36.4T  0 raid0 
sdd           8:48   0  7.3T  0 disk  
└─sdd1        8:49   0  7.3T  0 part  
  └─md0       9:0    0 36.4T  0 raid0 
sde           8:64   0  7.3T  0 disk  
└─sde1        8:65   0  7.3T  0 part  
  └─md0       9:0    0 36.4T  0 raid0 

Now make a mount point and mount the volume:

$ sudo mkdir /mnt/raid0
$ sudo mount /dev/md0 /mnt/raid0

Verify the mount shows up with df

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/md0         37T   24K   37T   1% /mnt/raid0

Make the mount persist

If you don't add the mount to /etc/fstab, it won't be mounted after you reboot!

First, get the UUID of the array (the value inside the quotation marks in the output below):

$ sudo blkid
...
/dev/md0: UUID="5d3b012c-e5f6-49d1-9014-1c61e982594f" TYPE="ext4"

Then, edit /etc/fstab (e.g. sudo nano /etc/fstab) and add a line like the following to the end:

UUID=5d3b012c-e5f6-49d1-9014-1c61e982594f /mnt/raid0 ext4 defaults 0 0

Save that file and reboot.

Note: If genfstab is available on your system, use it instead. Much less likely to asplode things: genfstab -U /mnt/mydrive >> /mnt/etc/fstab.

Verify the mount persisted

After reboot:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/md0         37T   24K   37T   1% /mnt/raid0

Drop the array

If you'd like to drop or remove the RAID array and reset all the disk partitions so you could use them in another array, or separately, you need to do the following:

  1. Edit /etc/fstab and delete the line for the /mnt/raid0 mount point.
  2. Edit /etc/mdadm/mdadm.conf and delete the lines you added earlier via mdadm | tee.
  3. Unmount the volume: sudo umount /mnt/raid0
  4. Wipe the ext4 filesystem: sudo wipefs --all --force /dev/md0
  5. Stop the RAID volume: sudo mdadm --stop /dev/md0
  6. Zero the superblock on all the drives: sudo mdadm --zero-superblock /dev/sda1 /dev/sdb1 ... (a loop version is sketched below)
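
Here's a sketch of step 6 as a loop, assuming the same five member partitions used throughout this guide:

for part in /dev/sd[a-e]1; do
  sudo mdadm --zero-superblock "$part"
done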

At this point, all the drives that were part of the array are available again, and you can do other things with them.

Comments

Wouldn't it be better (where available) to use a systemd mount unit, instead of hardcoding /etc/fstab?

Potentially. systemd automatically converts the fstab entries to mounts, and if you really need the ordering/dependency definitions, it is definitely better to go that route.

But for legacy reasons and simplicity, I still usually use fstab (old habits die hard, especially if they work fine for the majority of use cases).

Where things do fall apart with fstab is trying to define a dependency chain for something like bind mounting from a ZFS volume! For that, I would definitely go with systemd mounts.

In my view, RAID 0 these days is good for a fast read cache and stuff like that. For example, a proxy can benefit from RAID 0 on its cache: it makes for a faster cache, and a proxy cache can be lost without catastrophic results.

I followed this tutorial with the disks I had available at the time (1TB, 500GB, 500GB). I now want to add a 4th disk (1TB) to the array but it appears mdadm cannot add disks to an existing RAID0 array. Does anyone know a way around this?

mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri May 24 15:26:18 2024
        Raid Level : raid0
        Array Size : 1953139200 (1862.66 GiB 2000.01 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Fri May 24 15:26:18 2024
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : original
        Chunk Size : 512K

Consistency Policy : none

              Name : raspberrypi5:0  (local to host raspberrypi5)
              UUID : ab4f9a40:932a0d83:48769582:3e122656
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd

The disk I want to add is /dev/sda

You type too much ;)
Instead of: sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Try: sudo mdadm -Cv /dev/md0 -l0 -n5 /dev/sd[abcde]1

That's definitely a good thing to add in a note on the page. I like to spell things out more verbosely in these kinds of guides, so it's obvious what each flag is doing.

In the 'Format the array' section you mention the lazy option. However, the lazy options are activated by setting them to 1, not 0. So if you want to keep it "lazy", change from:

$ sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0

to:

$ sudo mkfs.ext4 -m 0 -E lazy_itable_init=1,lazy_journal_init=1 /dev/md0

This change avoids the (very) long process of initializing all the inodes.

Thank you for a useful tutorial.

When I followed it to set up my Pi, the resulting RAID array is owned by root with restricted permissions.

How would your tutorial change to give 'pi' ownership and general permissions?

Thank you, Alan

Thank you so much for this! I appreciate your 'long winded' method of expanding each flag to help me learn this process better. I was booting off a stripe, but with this help I now also have two mirror arrays, for a total of six drives. The best way for me to learn is practice, so I will have to 'experience a drive failure' to get recovery experience in my lab box. Thank you so much for taking the time to assemble this treasure trove for me.

Best RAID tutorial ever, even for a beginner with the mdadm command. Thanks Jeff!

Nice guide, but I'm wondering: what is the benefit of partitioning the RAID members beforehand instead of just using the whole devices (/dev/sda, /dev/sdb, etc.) as array members? I've always just used the "whole disk" device.

Flexibility, especially when rebuilding an array in a RAID 5 or RAID 1 configuration, where you can't get hold of the exact model/size of the original disk being replaced. The replacement disk has to be at least as large as the failed drive it replaces. Sometimes the same manufacturer will reuse a model name but, on closer inspection, there is a slight difference in size, which causes problems.

The partitioning method lets you bypass this problem and gives you the option to use drives from another manufacturer. All you have to do is sacrifice a little bit of disk space at the end of each disk (like 100 MB each) to ensure the partitions can be made exactly the same size across different hard disk models and manufacturers.
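
For example, something like this with sgdisk should leave about 100 MiB unused at the end of a disk (a sketch only; the relative end value and the device name are illustrative):

sudo sgdisk -n 1:0:-100M /dev/sda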

For a simple RAID0-configuration, there's no real reason to use partitions instead of physical disks.

I found it necessary to run update-initramfs afterwards, otherwise /dev/md0 was renamed to /dev/md127 after reboot.

If you are following this guide for a second time to add an additional array, make sure that you delete the existing config in mdadm.conf before running sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf again.

The command just appends the new config to the old one, so it will try to load the array using the old config and might get confused.
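
For example, one way to handle this (a sketch; it removes every existing ARRAY line, so only do this if you're regenerating the config for all of your arrays):

sudo sed -i '/^ARRAY /d' /etc/mdadm/mdadm.conf
sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf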

Hello there :) I have a Raspberry Pi 5 combined with a Radxa Penta SATA HAT. I have tried everything I could find on how to create a RAID, but every time I start with
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sde1
and then run cat /proc/mdstat, the output is
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sde1[4] sdd1[2] sdb1[1] sda1[0]
      11720655360 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery = 0.0% (19676/3906885120) finish=66173.1min speed=983K/sec
      bitmap: 0/8 pages [0KB], 65536KB chunk

the last drive shows as faulty, in this case sde1. If I remove it, then again the last drive becomes faulty, so if I start the creation of the array with 3 disks it will be /dev/sdd1. I have tried with and without partitions, and everything else that is available on the internet, and every time the last drive becomes faulty... The only thing I notice is that in every tutorial the drives are available in alphabetical order (sda, sdb, sdc, sdd), but in my case I already have one drive that is mounted and doing its job. Could this be the reason? I doubt it, but if this is not the reason, what else could it be?

Left out target...

Wrong:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=5 /dev/sd[a-e]1

Right:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=5 /dev/md0 /dev/sd[a-e]1