Migrating To RAID1 Mirror on Sarge (Updated)
Posted by philcore on Fri 25 Nov 2005 at 15:27
Until sarge was released with the new installer, it wasn't really easy - or maybe even possible - to install Debian onto a RAID1 mirror. If you installed woody and don't want to reinstall the OS or wipe your existing partitions, you can migrate to RAID by adding another disk, without losing your existing data. This is a guide aimed at getting RAID1 working on an existing sarge system.

I suggest reading the following documents:
http://www.doorbot.com/guides/linux/x86/grubraid/
http://www.linuxsa.org.au/mailing-list/2003-07/1270.html

First, of course, you need to get a second hard drive installed and recognized on your system. Installing a second drive is beyond the scope of this document.

Ideally the second drive should be identical to the original drive, although it doesn't need to be; it just has to be at least as large as the original.

For my setup, I have two identical Seagate 73GB Ultra320 SCSI drives. They appear on my system as:

/dev/sda == original drive with data
/dev/sdb == new 2nd drive.

If you have IDE drives, your devices will be /dev/hda and /dev/hd[bc], depending on how you install the second drive. You should definitely put the second drive on a second controller if possible.

Before you start, back up your system! I assume no responsibility for any data loss! Also, if you are modifying a system with multiple users, you will probably want to reboot into single user mode.
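If you're not sure how to do that, this will drop a running sysvinit system into single-user mode (one way of doing it; appending "single" to the kernel line at the boot prompt works too):
telinit 1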

OK, so let's get started. First install the raid tools.

apt-get install mdadm
Now change the partition type of every partition you want to mirror on the old drive to fd (Linux raid autodetect), using your favorite disk tool. Make a note of which partition is your swap partition; we'll need that later.
fdisk /dev/sda

Command (m for help): t
Partition number (1-8): 1
Hex code (type L to list codes): fd
Command (m for help): t
Partition number (1-8): 2
Hex code (type L to list codes): fd
Repeat for all of your partitions, including swap. You can view the partition table with the "p" command from within fdisk. When you are done, your disk should look something like this:
Command (m for help): p

Disk /dev/sda: 73.5 GB, 73557090304 bytes
255 heads, 63 sectors/track, 8942 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sda1   *           1         243     1951866   fd  Linux raid autodetect
   /dev/sda2             244         486     1951897+  fd  Linux raid autodetect
   /dev/sda3             487         608      979965   fd  Linux raid autodetect
   /dev/sda4             609        8924    66798270    5  Extended
   /dev/sda5             609        1824     9767488+  fd  Linux raid autodetect
   /dev/sda6            1825        4256    19535008+  fd  Linux raid autodetect
   /dev/sda7            4257        4378      979933+  fd  Linux raid autodetect
   /dev/sda8            4379        8924    36515713+  fd  Linux raid autodetect

When everything looks good, write the changes with the "w" command.
Command (m for help): w
fdisk will complain that the disk is in use and that the kernel will keep using the old partition table until the next reboot; that's OK. Now we need to copy the partition layout to the new disk, and sfdisk makes that really easy. Remember to substitute your own drive names in the command below if you are not using /dev/sda and /dev/sdb!
sfdisk -d /dev/sda | sfdisk /dev/sdb
Now view the partition layouts to see if they match.
sfdisk -l /dev/sda

Disk /dev/sda: 8942 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
   /dev/sda1   *      0+    242     243-   1951866   fd  Linux raid autodetect
   /dev/sda2        243     485     243    1951897+  fd  Linux raid autodetect
   /dev/sda3        486     607     122     979965   fd  Linux raid autodetect
   /dev/sda4        608    8923    8316   66798270    5  Extended
   /dev/sda5        608+   1823    1216-   9767488+  fd  Linux raid autodetect
   /dev/sda6       1824+   4255    2432-  19535008+  fd  Linux raid autodetect
   /dev/sda7       4256+   4377     122-    979933+  fd  Linux raid autodetect
   /dev/sda8       4378+   8923    4546-  36515713+  fd  Linux raid autodetect

sfdisk -l /dev/sdb

Disk /dev/sdb: 8924 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
   /dev/sdb1   *      0+    242     243-   1951866   fd  Linux raid autodetect
   /dev/sdb2        243     485     243    1951897+  fd  Linux raid autodetect
   /dev/sdb3        486     607     122     979965   fd  Linux raid autodetect
   /dev/sdb4        608    8923    8316   66798270    5  Extended
   /dev/sdb5        608+   1823    1216-   9767488+  fd  Linux raid autodetect
   /dev/sdb6       1824+   4255    2432-  19535008+  fd  Linux raid autodetect
   /dev/sdb7       4256+   4377     122-    979933+  fd  Linux raid autodetect
   /dev/sdb8       4378+   8923    4546-  36515713+  fd  Linux raid autodetect

OK. If things look good, we're ready to create the RAID arrays. Things to watch out for here: be sure to match up your physical drives correctly, and change the drive letters to match your setup. What I want to do here is create the RAID1 device /dev/md0 from the two partitions that will make up that array: /dev/sda1 and /dev/sdb1. To avoid destroying the good data on /dev/sda1, I tell mdadm to create the array initially using only the new, empty drive, with the other half marked as missing.
mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1
Repeat for the remaining RAID volumes (md1, md2, and so on):
mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level 1 --raid-devices=2 missing /dev/sdb5
mdadm --create /dev/md3 --level 1 --raid-devices=2 missing /dev/sdb6
mdadm --create /dev/md4 --level 1 --raid-devices=2 missing /dev/sdb7
mdadm --create /dev/md5 --level 1 --raid-devices=2 missing /dev/sdb8
mdadm --create /dev/md6 --level 1 --raid-devices=2 missing /dev/sdb3
Now create filesystems on the RAID devices. My example uses ext3, but pick the filesystem of your choice, assuming you have kernel support for it. Also set up a swap area. On my system, swap lives on /dev/md6, which is currently made up of only /dev/sdb3.

You did make a note above about where your swap partition was, right?
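If you didn't, the kernel will happily tell you which device is currently being used as swap (a quick sanity check; swapon -s shows the same information):
cat /proc/swaps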

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
mkfs.ext3 /dev/md2
mkfs.ext3 /dev/md3
mkfs.ext3 /dev/md4
mkfs.ext3 /dev/md5
mkswap    /dev/md6
Now we're ready to mount the raid devices and copy data over to them. Before you do this, you need to make sure you understand which filesystems are currently mounted on which partitions. Use the df command to get a picture of your layout.

df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1              1921036    341452   1482000  19% /
/dev/sda2              1921100    293104   1530408  17% /var
/dev/sda5              9614052   3900804   5224880  43% /usr
/dev/sda6             19524672    729020  18795652   4% /home
/dev/sda7               964408     16444    898972   2% /tmp
/dev/sda8             36497820  12779348  23718472  36% /data

So for example, I have my / filesystem mounted on /dev/sda1. I have sda1 as part of md0. (Well, only sdb1 is currently in the mirror, but sda1 is the other half and will be added after we get our data off of it). So we mount /dev/md0 somewhere and copy everything from the / filesystem over to it.
mount /dev/md0 /mnt
cp -dpRx / /mnt
Now copy the remaining partitions. Be careful to match your md devices with your filesystem layout! This example is for my particular setup.
mount /dev/md1 /mnt/var
cp -dpRx /var /mnt
mount /dev/md2 /mnt/usr
cp -dpRx /usr /mnt/
mount /dev/md3 /mnt/home
cp -dpRx /home /mnt
mount /dev/md4 /mnt/tmp
cp -dpRx /tmp /mnt
mount /dev/md5 /mnt/data
cp -dpRx /data /mnt
Now edit /mnt/etc/fstab to use the md devices instead of the raw drive partitions.
proc         /proc           proc    defaults                   0  0
/dev/md0     /               ext3    defaults,errors=remount-ro 0  1
/dev/md1     /var            ext3    defaults                   0  2
/dev/md2     /usr            ext3    defaults                   0  2
/dev/md3     /home           xfs     defaults                   0  2
/dev/md4     /tmp            ext3    defaults,noexec            0  2
/dev/md5     /data           xfs     defaults                   0  2
/dev/md6     none            swap    defaults                   0  0
/dev/hda     /media/cdrom0   iso9660 ro,user,noauto             0  0
/dev/fd0     /media/floppy0  auto    rw,user,noauto             0  0
Now edit /mnt/boot/grub/menu.lst and add an entry to boot using RAID, plus a recovery entry in case the first drive fails.
title       Custom Kernel 2.6.11.7
root        (hd0,0)
kernel      /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro
boot

title       Custom Kernel 2.6.11.7 (RAID Recovery)
root        (hd1,0)
kernel      /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sdb1 ro
boot
Install GRUB so that the system can still boot if the first drive fails. Re-running grub-install on /dev/sda doesn't hurt; the grub shell session below then installs GRUB onto the second drive (/dev/sdb) by temporarily mapping it as hd0.
grub-install /dev/sda

grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Copy the GRUB configuration and fstab files to the old drive:
cp -dp /mnt/etc/fstab /etc/fstab
cp -dp /mnt/boot/grub/menu.lst /boot/grub
At this point, cross your fingers and reboot. Once the system comes up, you should see the mounted md devices.
df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0               1921036    304552   1518900  17% /
tmpfs                   193064         4    193060   1% /dev/shm
/dev/md1               1921100    206768   1616744  12% /var
/dev/md2               9614052   2948620   6177064  33% /usr
/dev/md3              19524672    741140  18783532   4% /home
/dev/md4                964408     16448    898968   2% /tmp
/dev/md5              36497820   6683308  29814512  19% /data
So now your system is running RAID1 off the new drive. Verify that you have data in all of your partitions. If so, you can safely add the original drive's partitions to the arrays. Again - pay attention to what you are doing: you need to add the correct partition to the correct array!
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
...and repeat for the remaining partitions.
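For the example layout used throughout this article, the remaining additions would look like the following; this simply mirrors the partition-to-array mapping from the mdadm --create step above, so adjust it to your own layout:
mdadm --add /dev/md2 /dev/sda5
mdadm --add /dev/md3 /dev/sda6
mdadm --add /dev/md4 /dev/sda7
mdadm --add /dev/md5 /dev/sda8
mdadm --add /dev/md6 /dev/sda3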
Check /proc/mdstat to see what has finished rebuilding and what hasn't. When everything is done, all of the devices should show [UU]. Don't reboot until the drives have finished syncing.
cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1951744 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      1951808 blocks [2/2] [UU]

md2 : active raid1 sdb5[1] sda5[0]
      9767424 blocks [2/2] [UU]

md3 : active raid1 sdb6[1] sda6[0]
      19534912 blocks [2/2] [UU]

md4 : active raid1 sdb7[1] sda7[0]
      979840 blocks [2/2] [UU]

md5 : active raid1 sdb8[1] sda8[0]
      36515648 blocks [2/2] [UU]
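The rebuild can take quite a while on large partitions. If you'd rather watch the progress update in place than re-run cat by hand, the watch utility (from the procps package) does the job:
watch -n 5 cat /proc/mdstat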

And that's it.
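One optional follow-up that isn't part of the steps above: if you want the mdadm tools to be able to assemble the arrays by name later on (mdadm -A /dev/md0 and friends), the arrays need to be listed in /etc/mdadm/mdadm.conf. mdadm can generate those ARRAY lines itself; a minimal sketch, assuming the standard Debian location for the file (check the result by hand rather than trusting it blindly):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf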

If you are running a stock Debian kernel with an initrd, I've heard that you may have problems after rebooting once the first drive has been added to the array; it seems to be an issue with mkinitrd. If that is the case, read this article:

http://piirakka.com/misc_help/Linux/raid_starts_degraded.txt


Comments on this Entry

Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (194.126.xx.xx) on Fri 9 Dec 2005 at 13:04
Now edit /etc/fstab to use the md devices instead of the raw drive partitions.

Must be:

Now edit /mnt/etc/fstab to use the md devices instead of the raw drive partitions.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by philcore (216.54.xx.xx) on Fri 9 Dec 2005 at 13:20
Thanks. Fixed it.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (84.236.xx.xx) on Mon 12 Dec 2005 at 22:15
You say install grub on the second drive and then run grub-install /dev/sda.
Shouldn't it be /dev/sdb instead, since sda already has grub installed?


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by philcore (70.161.xx.xx) on Tue 13 Dec 2005 at 01:08
yeah, I install it on sda again. Doesn't hurt. But then inside the grub command, you'll see I set sdb to be hd0 and run setup.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by jeffrey1681 (141.157.xx.xx) on Tue 20 Dec 2005 at 15:54
Hi. After following your instructions I'm having a problem with my system. md0 seems to mount fine, but when the system tries to mount any of the other partitions I get an error:

wrong fs type, bad option, bad superblock on /dev/md*
missing codepage or other error

I got my system to boot by mounting my original partition (/hda5) on /var, but I'm at a loss as to what to do next to fix this. Another website suggested that I try running "mdadm -A /dev/md1", but when I did that I got the error "mdadm: /dev/md1 not identified in config file". If you need any more information, just let me know; I'm very new at this, so I'm not sure what to give you. Thanks for your help, and thanks for the guide - despite this error it has been very useful.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (216.54.xx.xx) on Tue 20 Dec 2005 at 16:05
Hmm. Did the mdadm --create command output any errors when creating the RAID devices? What does /etc/fstab look like? And /proc/mdstat?

Sounds like you still have your data on the original drives. That's a good thing(tm).


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by jeffrey1681 (141.157.xx.xx) on Tue 20 Dec 2005 at 16:25
mdadm --create output no errors, and when I looked at /proc/mdstat (on another site's advice) immediately after running it, everything looked fine.
/etc/fstab:

/dev/md0        /         ext3   defaults, errors=remount-ro  0     1
/dev/md5        /home     ext3   defaults         0       2
/dev/md2        /tmp      ext3   defaults         0       2
/dev/md4        /usr      ext3   defaults         0       2
/dev/md1        /var      ext3   defaults         0       2
/dev/md3        none      swap   sw               0       0
/dev/hdb        /media/dvd auto  user,users,noauto,ro,exec    0      0
/dev/fd0        /media/floppy0 auto  rw, user,noauto          0      0
and I have the original /hda stuff commented out at the bottom.
/proc/mdstat:

Personalities : [raid1]
md0 : active raid1 hdc1[1]
      289024 blocks [2/1] [_U]
Yes, I'm very happy that my data is still there on the original drive (though I did burn the critical stuff to a DVD before I started this). Thanks for your help.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by jeffrey1681 (141.157.xx.xx) on Tue 20 Dec 2005 at 18:04
Forgot to add, in /proc/mdstat after the md0 info there is another line:

unused devices: <none>


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (85.148.xx.xx) on Sat 10 Nov 2007 at 16:24
Same problem here,

md0 is coming up on boot, but md1 fails,

running debian etch 2.6.8-2-386


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (195.98.xx.xx) on Sat 28 Jan 2006 at 13:53
During boot I get this:

md: couldn't update array info -22.
md: could not bd_claim sda1.
md: md_import_device returned -16.
md: could not bd_claim sdb1.
md: md_import_device returned -16.
md: starting md0 failed.

Then I altered grub's menu.lst, from:
kernel /boot/vmlinuz-2.6.15.1 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro

to:
kernel /boot/vmlinuz-2.6.15.1 root=/dev/md0 ro

and now I don't get the boot errors any more:

ns1:/home/admin# cat /proc/mdstat
Personalities : [linear] [raid1]
md1 : active raid1 sdb5[1] sda5[0]
2650624 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
153637504 blocks [2/2] [UU]

unused devices: <none>


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by jaro80 (212.2.xx.xx) on Tue 11 Jul 2006 at 23:46
Hi All

I've read this article several times, but whenever my Linux starts I get something like this:

...
md: md0 stopped.
md: bind<hdb1>
raid1: raid set md0 active with 1 out of 2 mirrors
mdadm: /dev/md0 has been started with 1 drive (out of 2).
VFS: Can't find ext3 filesystem on dev md0.
VFS: Can't find ext2 filesystem on dev md0.
ReiserFS: md0: found reiserfs format "3.6" with standard journal
ReiserFS: md0: using ordered data mode
ReiserFS: md0: journal params: device md0, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: md0: checking transaction log (md0)
ReiserFS: md0: Using r5 hash to sort names
INIT: version 2.86 booting
assuming iso-8859-2 eogonek
.....
Checking all file systems...
fsck 1.37 (21-Mar-2005)
bread: Cannot read the block (2): (Invalid argument).
reiserfs_open: bread failed reading block 2
bread: Cannot read the block (16): (Invalid argument).
reiserfs_open: bread failed reading block 16

reiserfs_open: the reiserfs superblock cannot be found on /dev/md5.
Failed to open the filesystem.

If the partition table has not been changed, and the partition is
valid  and  it really  contains  a reiserfs  partition,  then the
superblock  is corrupted and you need to run this utility with
--rebuild-sb.

and so on md2, md1
...

fsck failed.  Please repair manually.

CONTROL-D will exit from this shell and continue system startup.

Give root password for maintenance
(or type Control-D to continue):



I don't know where the problem is, but I found two useful articles about software RAID1. I suggest reading them as well:

"Installing Debian with SATA based RAID"
http://xtronics.com/reference/SATA-RAID-debian-for-2.6.html

"Convert Root System to Bootable Software RAID1 (Debian)"
http://wildanm.fisika.ui.edu/resource/linuxstuff/etc/RAID/rootraiddoc.97.htm

On the first (well, maybe the second) attempt my Debian started with software RAID1 :)

Enjoy

--
Best regards



Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by skaufman (64.167.xx.xx) on Wed 3 Jan 2007 at 02:19
Very helpful thread, as was the original!

I ran into the various kernel panic and other weird problems described above while building RAID1 on a new Dell SC440 (with SATA drives and the Intel IHC7 controllers; Sarge with a 2.6.19.1 kernel). The problem proved to be not creating an initrd image -- which isn't mentioned in the primary write-up here. Such boot images apparently weren't previously necessary with older kernels but must be now. (Though strangely, this box booted to a custom 2.6.19.1 kernel without an initrd image when using the original disk but barfed when trying to use that same kernel to boot to the RAID.)

With new kernels, you have to use mkinitramfs (or yaird) instead of mkinitrd:

# mkinitramfs -o /boot/initrd.img-whatever

Also, it's important to remake the initrd image again after synching the two drives.

So if you're trying to use a new kernel, manage to create the RAID arrays but can't boot to them, this is probably what you need to do.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (62.190.xx.xx) on Sat 17 Feb 2007 at 17:19
Excellent article, exactly what I was looking for. I now have an IDE RAID1 working with Ubuntu 6.06. Thanks.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (194.72.xx.xx) on Thu 24 Jan 2008 at 15:09
On Debian Etch mkinitrd appears to not be available. To regenerate the initrd use

dpkg-reconfigure mdadm

instead.


Re: Migrating To RAID1 Mirror on Sarge (Updated)
Posted by Anonymous (202.125.xx.xx) on Wed 23 Apr 2008 at 13:05
Hi, how are you doing?
I'm doing the same thing - mirroring on an IBM P 561 server.
I can change the partition types to "Linux raid autodetect" with sfdisk, but when I try to copy them to the other disk (sdb) using "sfdisk -d /dev/sda | sfdisk --force /dev/sdb" I get this message:

sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sda: unrecognized partition
No partitions found

The partitions were created, but no sectors or blocks were defined in them.


