Failure Description
The customer requested an expansion of the Linux file system. During implementation, after the new disk had been mapped to the host, a PV had been created on it, and the PV had been added to the VG, the system reported "unknown device".
[root@KMS-Svr cache]# pvs
Incorrect metadata area header checksum on /dev/sdb1 at offset 4096
Couldn't find device with uuid ZPy1sa-fXhe-qrcQ-HFhi-eazU-4Mg3-toY3Xv.
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_testkmssvr lvm2 a-- 149.51g 0
/dev/sdb1 lvm2 a-- 499.99g 499.99g
/dev/sdc1 vg_testkmssvr lvm2 a-- 100.00g 0
/dev/sdd vm2t lvm2 a-- 2.00t 0
/dev/sde vm2t lvm2 a-- 2.20t 0
/dev/sdf1 vm2t lvm2 a-- 2.00t 0
/dev/sdg vm2t lvm2 a-- 2.00t 0
/dev/sdh vm2t lvm2 a-- 2.00t 0
/dev/sdi vm2t lvm2 a-- 2.00t 0
/dev/sdj vm2t lvm2 a-- 2.00t 0
/dev/sdk1 vm2t lvm2 a-- 1.86t 74.36g
/dev/sdk2 vg_testkmssvr lvm2 a-- 140.64g 19.63g
unknown device vm2t lvm2 a-m 2.00t 2.00t
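Before making any changes, it can help to confirm which PV the reported UUID belongs to. A minimal check, assuming a reasonably recent LVM2 release (the exact report columns available may vary):
pvs -o pv_name,vg_name,pv_uuid    # list PVs with their UUIDs to match against the "missing" UUID in the error
vgdisplay -v vm2t                 # show the VG's own view of its PVs, including the entry listed as "unknown device"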
Troubleshooting
2.1 Log Analysis
The /var/log/messages log contains no relevant error messages. The dmesg output is as follows:
sd 2:0:12:0: [sdl] Very big device. Trying to use READ CAPACITY(16).
sd 2:0:12:0: [sdl] Cache data unavailable
sd 2:0:12:0: [sdl] Assuming drive cache: write through
sdl: sdl1
sd 2:0:12:0: [sdl] Very big device. Trying to use READ CAPACITY(16).
sd 2:0:12:0: [sdl] Cache data unavailable
sd 2:0:12:0: [sdl] Assuming drive cache: write through
sdl: sdl1
EXT4-fs (dm-2): warning: checktime reached, running e2fsck is recommended
EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts:
The dmesg output shows that the newly added disk was successfully recognized at the OS level. After checking with the customer, we learned that the disk had previously been used on another host, which has since been decommissioned. The engineer therefore suspected that stale VG/PV metadata from the old host was still stored on the disk and needed to be dealt with.
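One way to confirm such leftover metadata is to inspect the disk directly. A sketch, assuming the reused disk is the /dev/sdl1 partition seen later in this case (this check is illustrative and was not part of the original procedure):
blkid /dev/sdl1        # reports TYPE="LVM2_member" if LVM metadata is present on the partition
pvck /dev/sdl1         # verifies the LVM metadata area on the device
pvdisplay /dev/sdl1    # shows any old VG/PV information still recorded on the disk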
2.2 Removing the Problem Disk from the VG
Run the command: vgreduce vm2t --removemissing
[root@KMS-Svr cache]# vgreduce vm2t --removemissing
Incorrect metadata area header checksum on /dev/sdb1 at offset 4096
Couldn't find device with uuid ZPy1sa-fXhe-qrcQ-HFhi-eazU-4Mg3-toY3Xv.
Wrote out consistent volume group vm2t
[root@KMS-Svr cache]# vgs
Incorrect metadata area header checksum on /dev/sdb1 at offset 4096
VG #PV #LV #SN Attr VSize VFree
vg_testkmssvr 3 3 0 wz--n- 390.14g 19.63g
vm2t 8 1 0 wz--n- 16.06t 74.36g
[root@KMS-Svr cache]# pvs
Incorrect metadata area header checksum on /dev/sdb1 at offset 4096
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_testkmssvr lvm2 a-- 149.51g 0
/dev/sdb1 lvm2 a-- 499.99g 499.99g
/dev/sdc1 vg_testkmssvr lvm2 a-- 100.00g 0
/dev/sdd vm2t lvm2 a-- 2.00t 0
/dev/sde vm2t lvm2 a-- 2.20t 0
/dev/sdf1 vm2t lvm2 a-- 2.00t 0
/dev/sdg vm2t lvm2 a-- 2.00t 0
/dev/sdh vm2t lvm2 a-- 2.00t 0
/dev/sdi vm2t lvm2 a-- 2.00t 0
/dev/sdj vm2t lvm2 a-- 2.00t 0
/dev/sdk1 vm2t lvm2 a-- 1.86t 74.36g
/dev/sdk2 vg_testkmssvr lvm2 a-- 140.64g 19.63g
At this point the volume group status is back to normal, but the newly added disk no longer appears in the pvs output. Query it with the fdisk -l command:
Disk /dev/sdl: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9706bba2
Device Boot Start End Blocks Id System
/dev/sdl1 1 267349 2147480811 8e Linux LVM
Combined with the lsblk output (a sample query is sketched below), this confirms that the sdl disk is present at the OS level and already carries the partition sdl1. The PV therefore needs to be re-created on sdl1 and added back to the target VG.
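A minimal lsblk check of this kind (output omitted; the command is standard util-linux):
lsblk /dev/sdl       # show the sdl disk and its sdl1 partition
lsblk -f /dev/sdl    # additionally show file system / LVM signatures on the partitions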
2.3 Re-creating the PV
Run the command: pvcreate /dev/sdl1 --force
[root@KMS-Svr cache]# pvcreate /dev/sdl1 --force
Physical volume "/dev/sdl1" successfully created
[root@KMS-Svr cache]# pvs
Incorrect metadata area header checksum on /dev/sdb1 at offset 4096
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_testkmssvr lvm2 a-- 149.51g 0
/dev/sdb1 lvm2 a-- 499.99g 499.99g
/dev/sdc1 vg_testkmssvr lvm2 a-- 100.00g 0
/dev/sdd vm2t lvm2 a-- 2.00t 0
/dev/sde vm2t lvm2 a-- 2.20t 0
/dev/sdf1 vm2t lvm2 a-- 2.00t 0
/dev/sdg vm2t lvm2 a-- 2.00t 0
/dev/sdh vm2t lvm2 a-- 2.00t 0
/dev/sdi vm2t lvm2 a-- 2.00t 0
/dev/sdj vm2t lvm2 a-- 2.00t 0
/dev/sdk1 vm2t lvm2 a-- 1.86t 74.36g
/dev/sdk2 vg_testkmssvr lvm2 a-- 140.64g 19.63g
/dev/sdl1 lvm2 a-- 2.00t 2.00t
Run the command: vgextend vm2t /dev/sdl1
[root@KMS-Svr cache]# vgs
Incorrect metadata area header checksum on /dev/sdb1 at offset 4096
VG #PV #LV #SN Attr VSize VFree
vg_testkmssvr 3 3 0 wz--n- 390.14g 19.63g
vm2t 9 1 0 wz--n- 18.06t 2.07t
[root@KMS-Svr cache]# pvs
Incorrect metadata area header checksum on /dev/sdb1 at offset 4096
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_testkmssvr lvm2 a-- 149.51g 0
/dev/sdb1 lvm2 a-- 499.99g 499.99g
/dev/sdc1 vg_testkmssvr lvm2 a-- 100.00g 0
/dev/sdd vm2t lvm2 a-- 2.00t 0
/dev/sde vm2t lvm2 a-- 2.20t 0
/dev/sdf1 vm2t lvm2 a-- 2.00t 0
/dev/sdg vm2t lvm2 a-- 2.00t 0
/dev/sdh vm2t lvm2 a-- 2.00t 0
/dev/sdi vm2t lvm2 a-- 2.00t 0
/dev/sdj vm2t lvm2 a-- 2.00t 0
/dev/sdk1 vm2t lvm2 a-- 1.86t 74.36g
/dev/sdk2 vg_testkmssvr lvm2 a-- 140.64g 19.63g
/dev/sdl1 vm2t lvm2 a-- 2.00t 2.00t
The PV /dev/sdl1 has now been added to the vm2t VG, and the pvs output confirms that the addition succeeded.
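With the new PV in the VG, the remaining expansion steps would normally be to grow the logical volume and then the file system. A sketch, assuming an ext4 logical volume whose name (lv_data) is hypothetical:
lvextend -l +100%FREE /dev/vm2t/lv_data    # grow the LV into the newly added free space
resize2fs /dev/vm2t/lv_data                # grow the ext4 file system to fill the enlarged LV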
Lesson Learned
When an LVM block device is in an abnormal state or shows up as "unknown device", you can run vgscan to rescan all LVM-capable block devices on the system and then remove the problematic PVs from the affected VGs (for example with vgreduce --removemissing), restoring the VGs to a normal state.
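A compact recovery sequence along these lines (the VG name is taken from this case; treat it as a sketch rather than a fixed procedure):
vgscan                           # rescan all block devices for LVM metadata
vgreduce vm2t --removemissing    # drop PVs that can no longer be found
vgs                              # confirm the VG state is back to normal
pvs                              # confirm the remaining PVs look correct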
Knowledge Expansion
4.1 vgscan Command Usage
vgscan [ option_args ]
[ --cache ] Scan all devices and send the metadata to the lvmetad daemon.
[ --ignorelockingfailure ] Allow the command to continue as a read-only metadata operation after a locking failure.
[ --mknodes ] Also check the special LVM files in /dev needed for active LVs, creating any missing files and removing unused ones.
[ --notifydbus ] Send a notification to D-Bus. The command exits with an error if LVM is not built with D-Bus notification support or if the notify_dbus configuration setting is disabled.
[ --reportformat basic|json ] Override the current report output format, which is defined globally by the report/output_format setting in lvm.conf. basic is the original format with columns and rows.
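A typical invocation when device nodes for active LVs may be missing (a sketch; which options are available depends on the installed LVM2 version):
vgscan --mknodes    # rescan devices and recreate any missing /dev entries for active LVs
vgscan --cache      # on systems running lvmetad, repopulate the daemon's metadata cache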
4.2 LVM File System Size Limits
When the file system is larger than 16 TB, the resize2fs command will fail. The reason is that the kernel version of the operating system is too old; if a larger file system is needed, the operating system kernel must be upgraded.
[Figure: resize2fs command failing on a file system larger than 16 TB]
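A quick environment check before attempting a resize of this size (the device path /dev/vm2t/lv_data is illustrative):
uname -r                                                   # kernel version
tune2fs -l /dev/vm2t/lv_data | grep -Ei 'block (count|size)'    # current size = block count x block size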
4.3 LVM and System Kernel Relationships
The maximum size of a single LVM file system is limited by the CPU architecture and the Linux kernel version:
a. Linux kernel 2.4.x limits the maximum file system size to 2 TB;
b. Some earlier kernels before 2.4.x limited the file system to a maximum of 1 TB;
c. A 32-bit CPU combined with a 2.6.x kernel limits the file system to a maximum of 16 TB;
d. A 2.6.x kernel running on a 64-bit CPU supports file systems up to 8 EB.