Erase an Ubuntu server with a software RAID disk managed by mdadm

mdadm is the software used on various Linux distributions to manage a software RAID. It does not matter whether the RAID was set up during the operating system installation or later; an mdadm RAID can be seen as follows. UPDATE: check out the PDF version at the bottom of this post ;)

/dev/md0
which can contain the swap or the root (/), built on /dev/sda1 and /dev/sdb1

/dev/md1
which is usually reserved for the /home partition, built on /dev/sda2 and /dev/sdb2

If /dev/md0 is set up to contain the root (/), the best choice is to leave /dev/md1 for /home and possibly /dev/md2 for the swap area.

In the case where /home lives inside the root (/) and both share a RAID volume such as /dev/md0, with /dev/md1 being the swap, there can be serious problems when one wants to split the RAID holding the users' files, because splitting a RAID that contains the OS might not be as easy as one would think.

In a good environment, let's suppose our RAID is set up with:
/dev/md0 being the root (/), built on /dev/sda1 and /dev/sdb1
/dev/md1 being /home, on /dev/sda2 and /dev/sdb2
and the swap area on its own RAID, /dev/md3

There is a consideration to make here:
The swap area is the portion of disk that the system uses as an extension of RAM, storing memory pages of processes that are not actively running; when the system needs them again, the pages stored in swap are moved back to RAM for faster execution.

Having RAID1 on the swap might be good practice, and it might not.
It might slow down the system, and it might not.
The concern is that a RAID1 writes slower than a single disk, but even that is not always true.
If both members of the RAID1 sit on the same IDE controller, RAID1 might not be a good choice for the swap; but if the disks can be accessed at the same time and one does not slow down the other, then a RAID1 may have little or no impact at all.

Usually there is no significant impact on performance and the difference is negligible.
Another option is to have two swap partitions set to the same priority, as sketched below.
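
A minimal sketch of that alternative in /etc/fstab, assuming the two swap partitions are /dev/sda3 and /dev/sdb3 (the device names are just an example):

/dev/sda3  none  swap  sw,pri=1  0  0
/dev/sdb3  none  swap  sw,pri=1  0  0

With equal priorities the kernel spreads swap pages across both partitions, which behaves much like striping. The current layout can be checked with swapon --show.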

So, how do you create a RAID on a system that is already installed?
Obviously, you need at least two disks, ideally with the same capacity and RPM, and even better the same model, although this does not really matter.
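
One common way to replicate the main disk's partition layout onto the new disk is sfdisk; this is only a sketch, assuming /dev/sda is the existing disk, /dev/sdb the new one, and MBR partition tables:

sfdisk -d /dev/sda | sfdisk /dev/sdb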

Once the second disk is partitioned with the same sizes as the main one, use this command to create a RAID1 for the root (/):

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

There is also a compact notation for the same command:

mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1

The file /etc/mdadm/mdadm.conf is the configuration file for our software RAID. After creating the new RAID, run the command below to append the configuration to this file.

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
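
On Ubuntu it is usually worth refreshing the initramfs afterwards as well, so that the array definition is picked up at boot (a common habit rather than a strict requirement):

update-initramfs -u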

At this point you might have forgotten how you created the RAID, if the last time you set it up was a couple of years ago.

We can check either the whole system's RAID status or the status of a single RAID volume.

mdadm --detail --scan
mdadm --detail /dev/md0

It might happen that one of the disks breaks.
We cannot remove a disk from the array directly; we have to fail it first and then remove it (of course, if the drive is broken, it might already be marked as failed).

Suppose our main disk breaks; we mark it as failed:

mdadm --fail /dev/md0 /dev/sda1

and then we remove it:

mdadm --remove /dev/md0 /dev/sda1

But it is also possible to run both commands at once:

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

Now we need to replace this disk with a new one.
We'll plug the disk into the system and partition it with fdisk to match the sizes on the surviving disk (an sfdisk shortcut for this is shown just below). Then:

mdadm --add /dev/md0 /dev/sdd1

(or whatever disk name the system sees).
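
If the replacement disk arrives blank, its partition table can again be copied from the surviving member before adding it; a sketch, assuming /dev/sdb is the healthy member and /dev/sdd the new disk:

sfdisk -d /dev/sdb | sfdisk /dev/sdd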

We can now verify it.
We can either read /proc/mdstat:

cat /proc/mdstat

or run the command:

mdadm --detail /dev/md0

For /proc/mdstat, the output can look similar to:
root@localhost# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sdd1[0]
19542976 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sdd2[0]
2097152 blocks [2/2] [UU]

Here /dev/md0 is the root (/) and /dev/md1 our swap.
This is WRONG because, as said before, the best practice is to have a RAID volume dedicated to the /home partition too.

The UU shows that both drives are running fine.
A letter F marks a failed drive.
A degraded array will show the second disk as missing.

We can monitor the status of the raid while rebuilding using:

watch cat /proc/mdstat

Now, how to stop a RAID array.
Let's suppose that our RAID matches this configuration:
root (/):
/dev/md0
/dev/sda1 & /dev/sdb1

/home:
/dev/md1
/dev/sda2 & /dev/sdb2

swap:
/dev/md3
/dev/sda3 & /dev/sdb3

We want to stop the /home array.

mdadm --stop /dev/md1

We might want to remove it too, if so:

mdadm --remove /dev/md1

Otherwise, we may want to stop the RAID and then bring it back up with a single member, so we can mount it and explore it as a single disk.
After stopping it, proceed with:

mdadm --assemble --run /dev/md1 /dev/sda2
mount /dev/md1 /mnt/home

where /mnt/home is a folder we created for this mount.
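
For completeness, the mount point can be created beforehand with something like:

mkdir -p /mnt/home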

We probably stopped and mounted the disk because we wanted to erase the files inside.

We can use three tools to perform a complete erase of the disk: shred, wipe, and dd.

Shred:
shred works on files only, so we need to feed it a loop that finds and shreds the files one by one:

find /mnt/home -depth -type f -exec shred -fuzv -n X {} \;

where X is a number: the number of random-overwrite passes to run on each file. If 0 is chosen, the file is only overwritten with zeroes (because of -z) and then removed (because of -u). Otherwise, it is overwritten with random data X times, then zeroed and removed.

shred will take time, a lot of it. It is best run with its output redirected to /dev/null, or scheduled in crontab, or run inside screen or with nohup:

nohup find /mnt/home -depth -type f -exec shred -fuzv -n X {} \; &

Wipe:
When shred completes, we can run wipe.
It basically does the same job, but wipe erases directories too. This command will execute quickly:

wipe -rcf /mnt/home

Dd:
dd works directly on the device; we can feed it either the whole disk or a single partition. In our case:

dd if=/dev/zero of=/dev/sda2

We can choose between zeroing the disk and overwriting it with random data (an example of the latter follows this list).
About random data, there are various sources:
frandom: a kernel module that generates a stream of random data, more or less as fast as /dev/zero.
erandom: frandom tends to consume a lot of resources; the "economic" random, erandom, solves this issue.
urandom: fast and does not block waiting for more entropy, but theoretically vulnerable to cryptographic attacks, which makes it not ultra-safe. At least it is fast.
random: the widely used one; it is slow (it blocks waiting for entropy) but it does its job.
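
As a sketch of the random option, an overwrite with pseudo-random data from /dev/urandom could look like this (bs and status are just conveniences for speed and progress reporting):

dd if=/dev/urandom of=/dev/sda2 bs=1M status=progress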

After the data has been destroyed on both disks (repeating the same assemble, mount, and erase procedure on /dev/sdb2), we can go on to stop and remove the RAID.

mdadm --stop /dev/md1
mdadm --remove /dev/md1

Deleting the superblock reverts the drives to being seen as two single disks, no longer RAID members:

mdadm --zero-superblock /dev/sda2
mdadm --zero-superblock /dev/sdb2

We might now format both devices as ext4, mount them, and check that they are really empty.
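
A quick way to double-check, as a sketch (the device name and the mount point are just examples):

mkfs.ext4 /dev/sda2
mkdir -p /mnt/check
mount /dev/sda2 /mnt/check
ls -la /mnt/check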

About the OS RAID:
the commands remain the same, and in theory we can stop it and at least work on the second disk to delete the OS files.
Since there may be services running in the background that cannot be stopped, the most practical approach is probably to shred the important files first: databases, shares, or other applications installed as part of our business needs.

The cleanest way instead, if one has the possibility, is to boot the system from a live Linux distro (ideally the same one used as our OS), install mdadm, and split the RAID cold.
From there, one can mount the single OS disks and run the same shred, wipe, and dd procedure.

Do you prefer the PDF version? Trust me, you do ;)
Get it HERE.