Creating a software RAID5 array

This is Part 2 of a series of posts where I describe building a DIY external RAID5 array.  In Part 1 I talked about the hardware components that I used.  In this post I will show you how to configure the software RAID5 array.

To implement RAID5 you need a minimum of three drives.  I begin with three 1TB drives.  RAID5 stores one drive’s worth of parity, so when we are finished we will have a single device that is 2TB in size and fault tolerant.  If one drive fails, we can replace it, rebuild the array without losing data, and continue on.  My three drives are:

  • /dev/sdb
  • /dev/sdc
  • /dev/sdd
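
Before partitioning anything, it is worth confirming which device names belong to the new drives, since the names can differ from system to system.  One quick way to check (the sizes and models reported will of course depend on your hardware):

$ lsblk -d -o NAME,SIZE,MODEL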

The first step is to partition your drives.  Linux software RAID uses a special partition type called “Linux raid autodetect” (type 0xfd).  More about the “autodetect” partition later, but for now let’s begin partitioning the drives.  I’ll show you how to partition the first drive, /dev/sdb.  Repeat the same steps for drives /dev/sdc and /dev/sdd.

$ sudo fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-133674, default 1):
Using default value 1
Last cylinder, +cylinder or +size{K,M,G} (1-133674, default 133674):
Using default value 133674

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
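
Rather than walking through the interactive fdisk session two more times, you can copy the finished partition table from /dev/sdb to the other two drives with sfdisk (a shortcut that assumes all three drives are the same size):

$ sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdc
$ sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdd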

Either way, when you are done you should have three drives that look like the following.

$ sudo fdisk -l
Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbe84c178

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      133674  1073736373+  fd  Linux raid autodetect

Disk /dev/sdc: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf70d7ba7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      133674  1073736373+  fd  Linux raid autodetect

Disk /dev/sdd: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x681afb7b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      133674  1073736373+  fd  Linux raid autodetect

I can now combine these three partitions to create the RAID5 device /dev/md0.

$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: size set to 1073736256K
mdadm: array /dev/md0 started.

The previous command merely started the process of combining the three drives into a single RAID5 device.  This process will take a while to complete.  You can check its progress by displaying the special file /proc/mdstat.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      2147472512 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [========>.............]  recovery = 35.0% (376037544/1073736256) finish=128.6min speed=90402K/sec

unused devices: <none>

From the information above, you can see that the process can take a long time depending on the size and speed of your drives: here it is 35% complete, with an ETA of 128.6 minutes.  You can continue to “cat” the /proc/mdstat file to check the progress, or use the tip below.
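
Rather than running cat by hand, you can use watch (part of the standard procps tools on most distributions) to refresh the view every few seconds:

$ watch -n 5 cat /proc/mdstat

When the rebuild is finished, you should see something like the following.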

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      2147472512 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
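
For a more detailed report on the array (state, layout, and per-device status), mdadm can also be queried directly; its output is omitted here for brevity:

$ sudo mdadm --detail /dev/md0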

The RAID device /dev/md0 is now ready for use.  I can now format it with a filesystem; I chose ext4.  For this example I will mount it at /srv/d1, but you can mount it wherever you wish (e.g. /usr/local or /opt).

$ sudo mkfs.ext4 /dev/md0
$ sudo mkdir /srv/d1
$ sudo mount /dev/md0 /srv/d1
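
As an optional tuning step, ext4 can be aligned to the RAID stripe at format time.  With the default 4K filesystem block size, our 64K chunk gives a stride of 64K / 4K = 16 blocks, and with two data disks the stripe width is 16 x 2 = 32 blocks (recompute these if your chunk or block size differs):

$ sudo mkfs.ext4 -E stride=16,stripe-width=32 /dev/md0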

I’ll edit my /etc/fstab file and add the following line.

/dev/md0     /srv/d1   ext4      defaults      0 0
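
Alternatively, since md device names are not always guaranteed to be stable across reboots, you can mount by filesystem UUID instead.  Look up the UUID with blkid, then use a line of the following form in /etc/fstab (the placeholder stands in for whatever blkid reports on your system):

$ sudo blkid /dev/md0

UUID=<uuid-from-blkid>     /srv/d1   ext4      defaults      0 0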

I am almost done.  However, we must now save the configuration of our array so that it can be discovered at boot time.  The following command scans the running RAID array and writes its configuration to a file that is read at boot time.

$ sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf
ARRAY /dev/md0 metadata=0.90 UUID=622146c2:61b0872d:6bbacb7a:b6d31587
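
Note that the expected location of this file varies by distribution: Red Hat style systems read /etc/mdadm.conf, while Debian and Ubuntu read /etc/mdadm/mdadm.conf and typically want the initramfs refreshed afterwards.  On a Debian or Ubuntu system, the equivalent steps would be:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u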

We are now done.  The RAID device has been created and mounted, and we saved its configuration so that it can be discovered and remounted after each boot.

$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/vg0-lv_root  9.7G  2.2G  7.5G  23% /
tmpfs                    501M  348K  501M   1% /dev/shm
/dev/sda1                194M   23M  162M  13% /boot
/dev/md0                 2.0T  199M  1.9T   1% /srv/d1

In Part 3, I will deal with growing the array by adding a disk to the array to make it a four disk array.
