db[0](S)
1047552 blocks super 1.2
unused devices: <none>
root@omv30:~# fdisk -l
Disk /dev/sda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8c9b0fb9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 12582911 12580864 6G 83 Linux
/dev/sda2 12584958 16775167 4190210 2G 5 Extended
/dev/sda5 12584960 16775167 4190208 2G 82 Linux swap / Solaris
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The raid1 array shows up as inactive, but the RAID metadata itself is stored on the disks, so it is not lost.
4. Conclusions
- mdadm does not track RAID membership by device name (/dev/sdx); no matter how the device names are reshuffled, the RAID metadata stays consistent.
- mdadm identifies member disks by their Device UUID. This is different from hardware RAID enclosures, which record which slot each disk sits in.
- Therefore, with mdadm, each disk can go into any enclosure or slot; there is no need to keep track of physical positions.
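The Device UUID referred to above lives in each member disk's superblock and can be read with `mdadm -E /dev/sdX`. A minimal sketch of extracting that field, using a captured sample of `mdadm -E` output so the snippet is self-contained (the UUID value shown is illustrative, not from this test):

```shell
#!/bin/sh
# Extract the "Device UUID" field from `mdadm -E` output.
# In practice you would pipe `mdadm -E /dev/sdb` into the awk call;
# here a sample of that output stands in so the script needs no root.
sample='/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
     Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
    Device UUID : 5a1b2c3d:4e5f6a7b:8c9d0e1f:2a3b4c5d'

# Split on " : " and print the value half of the Device UUID line.
echo "$sample" | awk -F' : ' '/Device UUID/ {print $2}'
```

Running `mdadm -E` on every member and comparing the Array UUID lines is also a quick way to confirm which disks belong to the same array after a shuffle.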
3. RAID degraded-array recovery test
Scenario: a RAID1 array is running normally when one disk suddenly fails; rebuild and recover the array.
Method: either simulate the failure with mdadm's --fail option, or use VirtualBox's disk hot-plug feature.
1. Simulating a failure with --fail
The initial state is as follows:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 20:29:44 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 31
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 48 1 active sync /dev/sdd
(1) Manually fail sdd:
root@omv30:~# mdadm /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 20:29:59 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 33
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
- 0 0 1 removed
1 8 48 - faulty /dev/sdd
If the faulty disk has not been removed from the array yet, remove it first:
root@omv30:~# mdadm /dev/md0 -r /dev/sdd
mdadm: hot remove failed for /dev/sdd: No such device or address
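Before adding a replacement, a script may want to confirm the array really is degraded rather than healthy. A sketch that checks the State line of `mdadm -D` output; a captured fragment of the output above stands in for the live command, so this runs without root:

```shell
#!/bin/sh
# Decide whether an array needs a replacement disk by inspecting the
# "State" line of `mdadm -D` output. In practice: detail=$(mdadm -D /dev/md0)
detail='          State : clean, degraded
 Active Devices : 1
 Failed Devices : 1'

# Split on " : " and take the value half of the State line.
state=$(echo "$detail" | awk -F' : ' '/State :/ {print $2}')
case "$state" in
  *degraded*) echo "array degraded: add a replacement disk" ;;
  *)          echo "array healthy" ;;
esac
```

The same pattern works for any other field of `mdadm -D`, since the output is consistently formatted as `Name : value`.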
(2) Add a new disk:
root@omv30:~# mdadm /dev/md0 --add /dev/sdc    (the new disk is 2 GB; testing confirmed this does not affect the RAID1 rebuild)
mdadm: added /dev/sdc
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 20:36:22 2018
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 76% complete
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 48
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
2 8 32 1 spare rebuilding /dev/sdc
You can see the rebuild is in progress.
Run the command again after a short while:
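Instead of eyeballing the output repeatedly, rebuild progress can be polled by parsing the `Rebuild Status` line of `mdadm -D` (or by watching /proc/mdstat). A sketch using a captured sample of that line:

```shell
#!/bin/sh
# Extract the rebuild percentage from `mdadm -D` output so a script
# can wait until recovery finishes.
# In practice: detail=$(mdadm -D /dev/md0)
detail='   Rebuild Status : 76% complete'

# Capture the digits between "Rebuild Status : " and the "%" sign.
pct=$(echo "$detail" | sed -n 's/.*Rebuild Status : \([0-9]*\)%.*/\1/p')
echo "rebuild at ${pct}%"
```

When the rebuild completes, the `Rebuild Status` line disappears and State returns to `clean`, so an empty `$pct` can serve as the loop's exit condition.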
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB