RAID on Linux

See http://www.sdmachelp.com/linuxraid.html
    http://www.sdmachelp.com/linuxlvm.html

GETTING STATUS!!:
    mdadm --detail /dev/mdX
    cat /proc/mdstat        # if you don't know device names. See below for details.

[ For Linux RAID + LVM, do a straight RAID1 ext2 partition for /boot,
  and a straight LVM partition for swap ]

http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Software-RAID-HOWTO.html

Can safely RAID /boot and /.
With Solaris and Linux software RAID, you can only "Mirror" /, /usr, /opt.
Don't RAID swap.
RAID partition types should be 0xFD (fdisk example at end).

LVM OVER RAID
    Swap: don't ever RAID it. Generally, allocate a straight PV so that you
    can re-allocate this space in the future.
    / and /usr: examples use these as straight RAID, but why??? Defeats the
    main purpose of LVM. Try LVMing over md devices (sketch at end).

mdadm: best tool for administering kernel md devices.

N.b.! Never repartition drives while RAID is running.

According to the HOWTO, occurrences of raidstop/raidstart in init scripts
should be disabled.

AUTODETECTION
    Must create devices with "persistent-superblock". Partition types must
    be 0xFD ("fd" in fdisk). (STOP the RAID before changing partition types.)

ROOT FS
    Choose any of the underlying partitions of the / mirror:
        device (hd0) /dev/hdc
        root (hd0,0)
        setup (hd0)
    (Full GRUB session at end.) Generally easiest to just let your Linux
    installer install to the RAID device rather than migrating.

cat /proc/mdstat    # Members show [role]; the array shows [total/working].
                    # role < n: working member; role >= n: spare.
                    # Failed devices show (F). Sample output at end.

********************************
I believe that after "mdadm --create"-ing, you must create a config file.
(Otherwise you'll have to probe all the devices as shown in the mdadm man
page.) Once you've created /etc/mdadm.conf, start up all the array
definitions in it with "mdadm --assemble --scan". (Sketch at end.)
********************************

Linear mode
    mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5

Mirror
    mdadm --create --verbose /dev/md20 --level=mirror --raid-devices=2 /dev/sda1 /dev/sda2
    Unlike Sun Volume Manager, this is destructive.

RAID 5
    mdadm -S /dev/md0   # Stop it
    mdadm -R /dev/md0   # Restart it
    (Optional args to mke2fs are important when making ext2 filesystems
    for use under RAID 5; example at end.)

Failure emulation
    mdadm --manage --set-faulty /dev/md1 /dev/sdc2

RECOVER (only possible with RAID levels above 0)
    mdadm /dev/md1 -r /dev/sdc2   # Can only remove Failed devices
    mdadm /dev/md1 -a /dev/sdc2   # Adds back as active member or spare

MONITORING
    Do not run this interactively! It's a daemon.
    mdadm --monitor --mail=root@localhost --delay=1800 /dev/md2
    For SuSE, set the email address and MDADM_RAIDDEVICES in
    /etc/sysconfig/mdadm (the latter a space-delimited string like
    "/dev/md0 /dev/md1"; can get the list from /proc/mdstat), then enable
    the init script "mdadmd" (snippet at end). For some reason, "yast2
    runlevel" always hangs when I start the service from there.

Simulate a failure
    mdadm --manage -f /dev/md1 /dev/sdc2
    mdadm --detail /dev/mdX       # and wait for the disk to show failure
    mdadm /dev/md1 -r /dev/sdc2   # Remove the failed disk
    # ... wait for reconstruction, if any.
    mdadm /dev/md1 -a /dev/sdc2   # If hot-spared, it will become a hot spare.
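
********************************
WORKED EXAMPLES. Sketches from memory, not verified commands. Device
names (/dev/sdb1 etc.), sizes, and array numbers are placeholders.
********************************

Setting partition type 0xFD
    Interactive fdisk session (partition 1 of /dev/sdb assumed; stop the
    RAID first, per above):
    fdisk /dev/sdb
        t       # change a partition's type code
        1       # partition number
        fd      # "Linux raid autodetect"
        w       # write the table and exit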
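
LVM over md devices
    Sketch of the LVM-over-RAID layout above, assuming /dev/md0 is a
    mirror for data and /dev/sda3 is the straight (non-RAID) partition
    reserved for swap:
    pvcreate /dev/md0 /dev/sda3
    vgcreate vg0 /dev/md0 /dev/sda3
    # Name a PV at the end of lvcreate to pin the LV to it: filesystems
    # on the mirrored PV, swap on the straight PV so the space can be
    # re-allocated later.
    lvcreate -L 8G -n usr vg0 /dev/md0
    lvcreate -L 1G -n swap vg0 /dev/sda3
    mkswap /dev/vg0/swap
    swapon /dev/vg0/swap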
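
GRUB on both halves of the / mirror
    The ROOT FS commands above as a full session. Map (hd0) to each
    physical disk in turn so both drives get a boot sector and either one
    can boot alone (/dev/hda and /dev/hdc assumed):
    grub
    grub> device (hd0) /dev/hda
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> device (hd0) /dev/hdc
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit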
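
Sample /proc/mdstat output
    Illustrative only (numbers made up). A healthy 2-way mirror, then the
    same array rebuilding after a failed disk was removed and re-added:
    md1 : active raid1 sdc2[1] sdb2[0]
          1048512 blocks [2/2] [UU]

    md1 : active raid1 sdc2[2] sdb2[0]
          1048512 blocks [2/1] [U_]
          [==>..................]  recovery = 12.6% (132224/1048512) finish=3.1min speed=4612K/sec
    The bracketed suffixes are the role numbers described above: sdc2[2]
    has role >= n, so it counts as a spare syncing in until the rebuild
    completes.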
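
Generating /etc/mdadm.conf
    One way to capture the running arrays in a config file rather than
    writing ARRAY lines by hand (the DEVICE line is assumed suitable for
    your setup; see the mdadm.conf man page):
    echo 'DEVICE partitions' > /etc/mdadm.conf
    mdadm --detail --scan >> /etc/mdadm.conf
    # Thereafter, start everything listed in it:
    mdadm --assemble --scan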
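
RAID 5 creation and mke2fs stride
    The RAID 5 section above never shows the create; a sketch with 3
    active devices plus 1 hot spare (device names are placeholders):
    mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 \
        --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # The "optional args to mke2fs": tell ext2 how many blocks fit in one
    # RAID chunk so it spreads its metadata across member disks. With a
    # 64k chunk and 4k blocks, stride = 64/4 = 16. Older e2fsprogs spell
    # this -R stride=16 instead of -E.
    mke2fs -b 4096 -E stride=16 /dev/md0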
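
/etc/sysconfig/mdadm on SuSE
    Roughly what the MONITORING note sets. Variable names from memory;
    the file documents its own variables in comments, so trust those:
    MDADM_MAIL="root@localhost"
    MDADM_RAIDDEVICES="/dev/md0 /dev/md1"
    Then enable and start the daemon from the shell (since yast2 hangs):
    chkconfig mdadmd on
    rcmdadmd start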