Under Linux, the 3Ware cards can be managed through the "tw_cli" command. (The CLI tools can be downloaded for free from 3Ware's support website.)
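Note that tw_cli does not have to be run interactively: at least on the version we run, it will also take the command directly as arguments, which makes it easy to script. For example, the same status query used throughout this post can be fired off as a one-shot:

dev306:~# /opt/3Ware/bin/tw_cli info c0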
A healthy RAID set looks like this:
dev306:~# /opt/3Ware/bin/tw_cli
//dev306> info c0
Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    OK             -      256K    1117.56   ON     OFF      OFF

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     372.61 GB   781422768     3PM0Q56Z
p1     OK               u0     372.61 GB   781422768     3PM0Q3YY
p2     OK               u0     372.61 GB   781422768     3PM0PFT7
p3     OK               u0     372.61 GB   781422768     3PM0Q3B7
A failed RAID set looks like this:
dev306:~# /opt/3Ware/bin/tw_cli
//dev306> info c0
Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    DEGRADED       -      256K    1117.56   ON     OFF      OFF

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     372.61 GB   781422768     3PM0Q56Z
p1     OK               u0     372.61 GB   781422768     3PM0Q3YY
p2     OK               u0     372.61 GB   781422768     3PM0PFT7
p3     DEGRADED         u0     372.61 GB   781422768     3PM0Q3B7
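Since the only hint you get is that "DEGRADED" status, it is worth checking for it automatically rather than waiting to notice by hand. Here is a rough sketch of a check that could go into cron, assuming the one-shot tw_cli invocation mentioned earlier and a working local mail command; the exact status strings may vary by firmware, so adjust the pattern to taste:

#!/bin/sh
# Mail the full controller status to root if anything on c0 looks unhealthy.
STATUS=$(/opt/3Ware/bin/tw_cli info c0)
if echo "$STATUS" | grep -qE 'DEGRADED|REBUILDING|INOPERABLE'; then
    echo "$STATUS" | mail -s "RAID status warning on $(hostname)" root
fi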
Now I will remove this bad disk from the RAID set:
//dev306> maint remove c0 p3
Exporting port /c0/p3 ... Done.
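Incidentally, the Serial column in the info output above is the drive's real serial number, so if your drive bays are labeled you can match it against the sticker on the disk before pulling anything. Grabbing it ahead of the remove is a one-liner (just a sketch, assuming the column layout shown above with the serial in the last field):

dev306:~# /opt/3Ware/bin/tw_cli info c0 | awk '$1 == "p3" { print $NF }'
3PM0Q3B7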
I now need to physically replace the bad drive. Unfortunately, since our vendor wired some of our cables cockeyed, the port numbers don't reliably line up with the physical drive bays, so I will usually cause some I/O on the disks at this point to see which of the four disks is "actually" bad. (Hint: the one with no lights on is the bad one.)
dev306:~# find /opt -type f -exec cat '{}' > /dev/null \;
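(If /opt does not hold enough data to keep the activity lights busy for long, any sustained read against the array will do. A dd against the unit's block device works just as well; I am assuming here that the 3Ware unit appears as /dev/sda, so check dmesg for the real device name first.)

dev306:~# dd if=/dev/sda of=/dev/null bs=1M count=10000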
With the bad disk identified and replaced, I need to go back into the 3Ware CLI, find the new disk, and tell the array to start rebuilding.
dev306:~# /opt/3Ware/bin/tw_cli
//dev306> maint rescan
Rescanning controller /c0 for units and drives ...Done.
Found the following unit(s): [none].
Found the following drive(s): [/c0/p3].
//dev306> maint rebuild c0 u0 p3
Sending rebuild start request to /c0/u0 on 1 disk(s) [3] ... Done.
//dev306> info c0
Unit  UnitType  Status         %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
------------------------------------------------------------------------------
u0    RAID-5    REBUILDING     0      256K    1117.56   ON     OFF      OFF

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     372.61 GB   781422768     3PM0Q56Z
p1     OK               u0     372.61 GB   781422768     3PM0Q3YY
p2     OK               u0     372.61 GB   781422768     3PM0PFT7
p3     DEGRADED         u0     372.61 GB   781422768     3PM0Q3B7
Note that p3 still shows a status of "DEGRADED", but the array itself is now "REBUILDING". Under minimal I/O load, a RAID-5 built from 400 GB disks like this one takes about 2.5 hours to rebuild.
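If you would rather not sit in the CLI watching the percentage tick up, a small loop that polls the unit's status line does the job. Another rough sketch, again assuming the one-shot tw_cli invocation works on your version:

#!/bin/sh
# Print a timestamped copy of u0's status line every ten minutes
# until the unit is no longer rebuilding.
while true; do
    LINE=$(/opt/3Ware/bin/tw_cli info c0 | grep '^u0')
    echo "$(date): $LINE"
    echo "$LINE" | grep -q REBUILDING || break
    sleep 600
done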