Talk:Linux RAID


One of the disks in a CentOS 6 RAID1 array (sdb) was issuing SMART warnings, so I decided to replace the failing disk. Using Webmin, I added drive sdd1 alongside the existing sdc1 and removed the offending sdb2 partition from the RAID1 array. Everything seems to work fine, but the Linux RAID module lists the md0 device as having status clean, degraded...
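
(For reference, I believe the Webmin actions correspond roughly to the following mdadm commands; /dev/sdd1 as the new partition and /dev/sdb2 as the removed member are taken from the description above.)

# mdadm --manage /dev/md0 --add /dev/sdd1
# mdadm --manage /dev/md0 --fail /dev/sdb2
# mdadm --manage /dev/md0 --remove /dev/sdb2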

It looks as if the Linux RAID module has messed up the config file /etc/mdadm.conf.

/etc/mdadm.conf contains:

DEVICE /dev/sdc1 /dev/sdd1 /dev/sdc1 
ARRAY /dev/md0 level=raid1 devices=,/dev/sdc1,/dev/sdd1,/dev/sdc1

where I presume it should read:

DEVICE /dev/sdc1 /dev/sdd1 
ARRAY /dev/md0 level=raid1 devices=/dev/sdc1,/dev/sdd1
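
If editing the file by hand, the ARRAY line can also be regenerated from the running array rather than typed in; something like the following should print a line suitable for mdadm.conf (the exact fields vary by mdadm version):

# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=boromir.lan:0 UUID=0c50fa31:574d6147:26ed9210:457f1f9b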

Some more details:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Sep 25 00:32:25 2013
     Raid Level : raid1
     Array Size : 204664704 (195.18 GiB 209.58 GB)
  Used Dev Size : 204664704 (195.18 GiB 209.58 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Feb 16 12:46:51 2016
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : boromir.lan:0
           UUID : 0c50fa31:574d6147:26ed9210:457f1f9b
         Events : 729

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       2       8       49        1      active sync   /dev/sdd1
       3       8       33        2      active sync   /dev/sdc1
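
Note that the output reports Raid Devices : 3 but Total Devices : 2, with slot 0 marked removed. If adding sdd1 grew the array to three members instead of replacing sdb2, that would explain the clean, degraded state. In that case (to be verified before running anything), shrinking the expected member count back to two should clear it:

# mdadm --grow /dev/md0 --raid-devices=2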

Would it be safe to change the /etc/mdadm.conf file manually? Arent (talk) 07:51, 16 February 2016 (EST)