Preamble

We all know how easy it is to set up software RAID1 during installation… but what if you have to do it afterward? Say, for instance, you only had one HDD at the time and later decided to add a second drive and turn the setup into a mirror. This tutorial will show you how to add a second drive of the same size (it is very important that they be the same size!) and create a RAID1 on Ubuntu 16 after installation.

For this example, I used Ubuntu 16.04.1 without LVM and without custom partitioning, and I have already added the second drive to the system. Here is what my drives look like:
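The listing that followed was just fdisk output; you can reproduce it yourself (the device names are from my system — yours may differ):

```shell
# List the partition tables of both drives (run as root)
fdisk -l /dev/sda /dev/sdb
```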

As you can see, my /dev/sdb drive is blank, however, it does not have to be. I will explain more on this as we go along.


Note: During this process, you may need access to the server via keyboard and monitor. After creating the raid, the boot process will ask whether you wish to boot from a degraded raid, and you need to be able to answer yes. So if you do not have a remote KVM-type system connected to your server, you will need to be at the screen to type yes.

Getting Ready

Before we dive into the guts of this tutorial, you will need to install a few things.
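The package names below are my best guess at what the original command installed — mdadm to create and manage arrays, and initramfs-tools to rebuild the boot image:

```shell
sudo apt-get update
sudo apt-get install -y mdadm initramfs-tools
```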

The above command installs the tools we need to create and manage our RAID1, as well as to build a bootable initramfs image.

To prevent having to reboot, you can run the following commands to load the modules we will need as well:
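A sketch of the module loads — strictly speaking only the raid1 personality is needed here, but loading a couple of extras does no harm:

```shell
# Load the md personalities without rebooting
sudo modprobe linear
sudo modprobe raid0
sudo modprobe raid1
```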

Now you should see that the system is RAID capable but does not contain any RAID arrays:
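You can check via /proc/mdstat; the exact personality list depends on which modules you loaded:

```shell
cat /proc/mdstat
# Expect something along the lines of:
#   Personalities : [linear] [raid0] [raid1]
#   unused devices: <none>
```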

Now our system is ready. Let’s start by prepping the second hard drive (/dev/sdb).

Preparing the Hard Drive

We are going to leave /dev/sda alone for a while; after all, it has our operating system on it and we do not want to jeopardize that. So we will start by prepping /dev/sdb to be joined to the RAID.

To do this, it needs an exact copy of the partition table that /dev/sda has, which we can create with the following command:
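A common way to clone an MBR partition table with sfdisk (this overwrites /dev/sdb’s table — double-check the device names before running it):

```shell
# Dump sda's partition table and write it to sdb
sfdisk -d /dev/sda | sudo sfdisk /dev/sdb
```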

Now if you compare the two drives using fdisk -l you will see that /dev/sda and /dev/sdb have the same table. I have truncated the results so you can see just the table and the sizes:

Now we need to change the partition type for each of the partitions (sdb1 and sdb5). There are different ways to go about this, the quickest and easiest way is via the following command:
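A sketch using sfdisk — on Ubuntu 16.04’s util-linux the option is `--part-type` (older releases called it `--change-id`):

```shell
# Mark sdb1 and sdb5 as type fd (Linux raid autodetect)
sudo sfdisk --part-type /dev/sdb 1 fd
sudo sfdisk --part-type /dev/sdb 5 fd
```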

If you do an fdisk -l again you will see the partition types have now changed to Linux Raid Autodetect:

Device     Boot Start    End      Sectors  Size  Id Type
/dev/sdb1  *    2048     39845887 39843840 19G   fd Linux raid autodetect
/dev/sdb2       39847934 41940991 2093058  1022M 5  Extended
/dev/sdb5       39847936 41940991 2093056  1022M fd Linux raid autodetect

To make sure there are no previous raid configurations on the disk (say you used this disk previously), you can run the following commands:
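These are presumably the commands in question; on a clean disk each one errors out with something like “mdadm: Unrecognised md component device”:

```shell
# Wipe any old RAID superblocks from the partitions
sudo mdadm --zero-superblock /dev/sdb1
sudo mdadm --zero-superblock /dev/sdb5
```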

If there were no previous settings, you will get an error (shown below); if there was a previous configuration, the commands will produce no output.

Creating RAID arrays

Now that we have our /dev/sdb prepped, it’s time to create our RAID1 using the mdadm command. Note: When it asks you if you want to continue, type Y:
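A sketch of the create commands; the keyword `missing` leaves one slot in each mirror open for /dev/sda’s partitions, which we will add later:

```shell
# md0 = future root, md1 = future swap; second member is 'missing' for now
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb5
```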

You should see something similar to the below output:

You can verify the raid was started via the /proc/mdstat file:
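For example:

```shell
cat /proc/mdstat
# Each array should show [_U] - one missing member, one active
```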

You will see that the raid devices have been created; however, they are not complete raids, as each is missing a drive, as indicated by the [_U] at the end of the lines.

The arrays have now been created, but they have not been formatted. We can do this via the following commands:
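These commands mirror the partition roles on /dev/sda — ext4 for the root array, swap for the other:

```shell
# Format the root array as ext4 and the second array as swap
sudo mkfs.ext4 /dev/md0
sudo mkswap /dev/md1
```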

We now need to adjust the mdadm configuration, which currently does not contain any raid information.
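A sketch, keeping a backup of the pristine file first (the `_orig` name is my choice):

```shell
# Back up the original, then append the scanned array definitions
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
sudo sh -c 'mdadm --examine --scan >> /etc/mdadm/mdadm.conf'
```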

Now you can cat the file to see that it has the raid information added:

The two last lines show that the two raids have been successfully added to the configuration.

Now that we have the RAID created, we need to make sure the system sees it.

Adjusting The System To RAID1

Let’s go ahead and mount the root raid partition, /dev/md0:
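A sketch, using /mnt/md0 as the mount point (my choice of path):

```shell
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0
```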

And verify it’s mounted:
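For example:

```shell
mount
```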

At the bottom of the list you should see the entry we are looking for:

Now we need to add the UUIDs of the raid partitions to the fstab file, replacing the UUIDs of the /dev/sda entries:
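blkid will report the UUIDs of the new arrays:

```shell
# Prints a UUID="..." TYPE="ext4"/"swap" line for each array
sudo blkid /dev/md0 /dev/md1
```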

Now that we have the UUIDs, replace them in the fstab file:

Make sure you match the type, EXT4 to EXT4 and Swap to Swap.
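A hedged sketch of what the finished /etc/fstab entries look like — the UUID placeholders must be replaced with the values blkid reported for md0 and md1:

```shell
# /etc/fstab entries for the raid arrays (placeholders, not real UUIDs)
UUID=<uuid-of-md0>  /     ext4  errors=remount-ro  0  1
UUID=<uuid-of-md1>  none  swap  sw                 0  0
```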

Next we need to replace /dev/sda1 with /dev/md0 in /etc/mtab:
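One way to do the substitution; note that on some systems /etc/mtab is a symlink to /proc/self/mounts and cannot be edited, in which case skip this step:

```shell
sudo sed -i 's|/dev/sda1|/dev/md0|g' /etc/mtab
```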

and to verify
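For example:

```shell
grep -E 'sda1|md0' /etc/mtab
```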

Look to make sure there are no entries for sda1; the following output has been truncated to show only the modified entries:

Now we have to set up the boot loader to boot to the raid drive.

Setup the GRUB2 boot loader

In order to boot properly during the raid setup, we will need to create a temporary grub config file. For this you will need to know what your kernel version is.
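For example:

```shell
uname -r
# e.g. 4.4.0-31-generic (yours will likely differ)
```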

Now we will copy the custom file and edit it to add our temporary configuration:
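A sketch — the file name 09_swraid1_setup is my choice; any name that sorts before 10_linux will make the entry appear first in the menu:

```shell
sudo cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
sudo nano /etc/grub.d/09_swraid1_setup
```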

Add the following at the bottom of the file, making sure to replace all instances of the kernel version with the version you found in the previous command:
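A hedged sketch of the temporary menu entry, assuming kernel 4.4.0-31-generic and an MBR/ext4 layout like mine — substitute your own kernel version from uname -r:

```shell
menuentry 'Ubuntu 16.04 (RAID)' {
  insmod mdraid1x
  insmod part_msdos
  insmod ext2
  set root='(md/0)'
  linux  /boot/vmlinuz-4.4.0-31-generic root=/dev/md0 ro
  initrd /boot/initrd.img-4.4.0-31-generic
}
```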

Now you need to update grub and modify the ramdisk for the new configuration:
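These are the standard commands for that:

```shell
sudo update-grub
sudo update-initramfs -u
```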

Your output should look similar to this:

Copy files to the new disk

Now we need to copy all the files from the file system root to the raid partition.
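One way to copy a live root file system — rsync with --one-file-system so /proc, /sys, and the /mnt/md0 mount itself are not recursed into (this is my choice of tool; plain `cp -dpRx` works too):

```shell
sudo rsync -aAXH --one-file-system / /mnt/md0/
```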

If you are doing this remotely via SSH, you can add the verbose option to the command to see that the process is still running. I would not recommend doing this at the server as the refresh rate on the monitor will slow down the process.

Preparing GRUB2

It’s now time to reboot to the raid partition, but before we do that we need to make sure both drives have grub installed:
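Installing the boot loader to both disks means either one can boot the system on its own:

```shell
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
```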

Now you will need to reboot. For this part, you should be at the server as it may ask you if you wish to boot from a degraded RAID, you will need to type yes for it to complete the boot process so that it can be accessible remotely again.

IMPORTANT NOTE
It is important to note that in my situation, testing on a VM (virtual machine), although I named my raids md0 and md1, Ubuntu took the liberty of renaming them to md126 and md127. If you get any errors and it drops you to an initramfs prompt, check what your raid partitions are by typing “ls /dev/md*”. If, like me, they have changed… boot back into Ubuntu (using the second menu option) and adjust your temporary config file. I had to adjust the “linux” line

needed to be changed to
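In my case the change amounted to pointing root at the renamed array — a sketch, assuming the temporary entry used root=/dev/md0:

```shell
# before
linux /boot/vmlinuz-4.4.0-31-generic root=/dev/md0 ro
# after - Ubuntu renamed md0 to md126
linux /boot/vmlinuz-4.4.0-31-generic root=/dev/md126 ro
```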

Then you need to run “update-grub” and “update-initramfs -u”, and install grub on /dev/sda and /dev/sdb just to be sure. When you boot, you can see the changes Ubuntu made when you cat /proc/mdstat.
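Spelled out:

```shell
sudo update-grub
sudo update-initramfs -u
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
cat /proc/mdstat   # shows md126/md127 if the arrays were renamed
```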

If your raid partitions changed, for the rest of the tutorial, swap md0 and md1 with the appropriate changes.

Preparing sda for the raid

You can verify that you are now on the raid by typing “df -h”

You will see (in my case) that the root is not on /dev/sda1 but on /dev/md126

Now we are going to change the partition types of sda1 and sda5, just like we did for sdb1 and sdb5:
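The same sfdisk invocation as before, now aimed at /dev/sda:

```shell
sudo sfdisk --part-type /dev/sda 1 fd
sudo sfdisk --part-type /dev/sda 5 fd
```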

If you do an “fdisk -l” you will see the partition types have changed

Now we will add the partitions to the raid arrays (in my case md126 and md127):
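Assuming the renamed device names from my VM — use md0/md1 if your arrays kept their names:

```shell
sudo mdadm --add /dev/md126 /dev/sda1
sudo mdadm --add /dev/md127 /dev/sda5
```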

If you “cat /proc/mdstat” you will see that the drive is now syncing:

Now we are going to make sure that mdadm.conf has all the right changes:
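One self-contained way to rebuild the file: drop the stale ARRAY lines, then re-scan the arrays as they exist now:

```shell
sudo sed -i '/^ARRAY/d' /etc/mdadm/mdadm.conf
sudo sh -c 'mdadm --examine --scan >> /etc/mdadm/mdadm.conf'
```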

 ! Before continuing, make sure your raid is fully rebuilt – this may take a while ! 

Cleaning up GRUB

Now it’s time to do some housecleaning. We need to remove the temporary config, update grub and the initramfs files, and re-write grub to /dev/sda and /dev/sdb.
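A sketch, assuming your temporary grub config was named 09_swraid1_setup (use whatever name you gave it):

```shell
sudo rm /etc/grub.d/09_swraid1_setup
sudo update-grub
sudo update-initramfs -u
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
```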

Next, reboot, and you should be on a fully mirrored system in a RAID1 configuration.