[NTLUG:Discuss] RAID recovery help!

Thomas Cameron thomas.cameron at camerontech.com
Tue Nov 4 23:10:01 CST 2003


I have seen several cases where the box needs to be rebooted into single-user
mode before raidhotadd will succeed.
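
For what it's worth, here is a rough sketch of the raidtools sequence I
normally follow (this assumes sda5, the member flagged (F) in your mdstat,
is the one that needs re-adding; sdb5 is still an active member of md1,
which is likely why hot-adding it reports "disk busy"):

    # remove the failed member from md1, then add it back
    raidhotremove /dev/md1 /dev/sda5
    raidhotadd /dev/md1 /dev/sda5
    # watch the rebuild progress
    cat /proc/mdstat

If the raidhotremove step also complains about a busy device, that is where
dropping to single-user mode has been needed in the cases I mentioned.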
-- 
Regards,
Thomas Cameron
----- Original Message ----- 
From: "Richard Geoffrion" <ntlug at rain4us.net>
To: <discuss at ntlug.org>
Sent: Tuesday, November 04, 2003 10:19 PM
Subject: [NTLUG:Discuss] RAID recovery help!


> WHAT!?!? WHY!?!?   Why is this not working? What am I missing?
>
> [info]
> root at gwifs:/var/log# cat /proc/mdstat
> Personalities : [raid0] [raid1]
> read_ahead 1024 sectors
> md0 : active raid1 sdb1[1] sda1[0]
>       32000 blocks [2/2] [UU]
>
> md1 : active raid1 sdb5[1] sda5[0](F)
>       4000064 blocks [2/1] [_U]
>
> md2 : active raid1 sdb6[1] sda6[0]
>       1999936 blocks [2/2] [UU]
>
> md3 : active raid1 sdb7[1] sda7[0]
>       4000064 blocks [2/2] [UU]
>
> md4 : active raid1 sdb8[1] sda8[0]
>       4000064 blocks [2/2] [UU]
>
> md5 : active raid1 sdb9[1] sda9[0]
>       57400128 blocks [2/2] [UU]
>
> unused devices: <none>
> root at gwifs:/var/log# raidhotadd /dev/md1 /dev/sdb5
> /dev/md1: can not hot-add disk: disk busy!
>
> [/info]
>
> The server had an unscheduled reboot this past weekend...but
> /var/log/messages shows no /dev/md1 raid errors.
>
> [/var/log/messages snippet]
> Nov  2 11:30:57 gwifs kernel: md: considering sdb5 ...
> Nov  2 11:30:58 gwifs kernel: md:  adding sdb5 ...
> Nov  2 11:30:58 gwifs kernel: md:  adding sda5 ...
> Nov  2 11:30:58 gwifs kernel: md: created md1
> Nov  2 11:30:58 gwifs kernel: md: bind<sda5,1>
> Nov  2 11:30:58 gwifs kernel: md: bind<sdb5,2>
> Nov  2 11:30:58 gwifs kernel: md: running: <sdb5><sda5>
> Nov  2 11:30:58 gwifs kernel: md: sdb5's event counter: 0000006a
> Nov  2 11:30:58 gwifs kernel: md: sda5's event counter: 0000006a
> Nov  2 11:30:58 gwifs kernel: md: RAID level 1 does not need chunksize!
> Continuing anyway.
> Nov  2 11:30:58 gwifs kernel: md1: max total readahead window set to 124k
> Nov  2 11:30:58 gwifs kernel: md1: 1 data-disks, max readahead per
> data-disk: 124k
> Nov  2 11:30:58 gwifs kernel: raid1: device sdb5 operational as mirror 1
> Nov  2 11:30:58 gwifs kernel: raid1: device sda5 operational as mirror 0
> Nov  2 11:30:58 gwifs kernel: raid1: raid set md1 active with 2 out of 2
> mirrors
> Nov  2 11:30:58 gwifs kernel: md: updating md1 RAID superblock on device
> Nov  2 11:30:58 gwifs kernel: md: sdb5 [events: 0000006b]<6>(write) sdb5's
> sb offset: 4000064
> Nov  2 11:30:58 gwifs kernel: md: delaying resync of md1 until md5 has
> finished resync (they share one or$
> Nov  2 11:30:58 gwifs kernel: md: sda5 [events: 0000006b]<6>(write) sda5's
> sb offset: 4000064
>
> [/snippet]
>
> well....further looking DID turn up this...
>
> [another snippet]
> Nov  2 12:00:01 gwifs kernel: md: syncing RAID array md1
> Nov  2 12:00:01 gwifs kernel: md: minimum _guaranteed_ reconstruction
> speed: 100 KB/sec/disc.
> Nov  2 12:00:01 gwifs kernel: md: using maximum available idle IO bandwith
> (but not more than 100000 KB/s$
> Nov  2 12:00:01 gwifs kernel: md: using 124k window, over a total of
> 4000064 blocks.
> Nov  2 12:00:01 gwifs kernel: md: delaying resync of md2 until md1 has
> finished resync (they share one or$
> Nov  2 12:00:01 gwifs kernel: md: delaying resync of md3 until md1 has
> finished resync (they share one or$
> Nov  2 12:00:58 gwifs kernel: raid1: mirror resync was not fully finished,
> restarting next time.
> Nov  2 12:00:58 gwifs kernel: md: recovery thread got woken up ...
> Nov  2 12:00:58 gwifs kernel: md: updating md1 RAID superblock on device
> Nov  2 12:00:58 gwifs kernel: md: sdb5 [events: 0000006c]<6>(write) sdb5's
> sb offset: 4000064
> Nov  2 12:00:58 gwifs kernel: md: (skipping faulty sda5 )
> Nov  2 12:00:58 gwifs kernel: md: recovery thread finished ...
> Nov  2 12:00:58 gwifs kernel: md: md_do_sync() got signal ... exiting
> Nov  2 12:00:58 gwifs kernel: raid1: mirror resync was not fully finished,
> restarting next time.
> Nov  2 12:00:58 gwifs last message repeated 25 times
> [/another snippet]
>
> So...how can /dev/md1 (sda5) be faulting, but all the other mdXs be
> fine? They are all on the same physical disk!
>
>
> --
> Richard
>
>
> _______________________________________________
> https://ntlug.org/mailman/listinfo/discuss
>



