Dear All,

I am in desperate need of LVM data rescue for my server.
I have a VG called vg_hosting consisting of 4 PVs, each on a separate
hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
A single LV, lv_home, was created to use all the space of the 4 PVs.

Right now, the third hard drive is damaged, and therefore the third PV
(/dev/sdc1) cannot be accessed any more. I would like to recover whatever
is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).

I have tried the following:

1. Removing the broken PV:

# vgreduce --force vg_hosting /dev/sdc1
  Physical volume "/dev/sdc1" still in use

# pvmove /dev/sdc1
  No extents available for allocation

2. Replacing the broken PV:

I was able to create a new PV and restore the VG config/metadata:

# pvcreate --restorefile ... --uuid ... /dev/sdc1
# vgcfgrestore --file ... vg_hosting

However, vgchange gives this error:

# vgchange -a y
  device-mapper: resume ioctl on failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  0 logical volume(s) in volume group "vg_hosting" now active

Could someone help me, please? I'm in dire need of help to save the data,
or at least some of it if possible.

Regards,
Khem
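For anyone following along, a first non-destructive step is to capture the
current metadata and see which segments of lv_home actually sit on the
missing PV. This is only a sketch using the VG/LV names from the post above;
the backup path is just an example, and on most lvm2 versions the dead PV
should show up as "unknown device" in the devices column:

# vgcfgbackup -f /root/vg_hosting.metadata vg_hosting
# pvs
# lvs -a -o +devices vg_hosting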
On 2/27/2015 4:25 PM, Khemara Lyn wrote:
> Right now, the third hard drive is damaged, and therefore the third PV
> (/dev/sdc1) cannot be accessed any more. I would like to recover whatever
> is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).

Your data is spread across all 4 drives, and you lost 25% of it, so only 3
out of 4 blocks of data still exist. Good luck with recovery.

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
----- Original Message -----
| Dear All,
|
| I am in desperate need of LVM data rescue for my server.
| I have a VG called vg_hosting consisting of 4 PVs, each on a separate
| hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
| A single LV, lv_home, was created to use all the space of the 4 PVs.
|
| Right now, the third hard drive is damaged, and therefore the third PV
| (/dev/sdc1) cannot be accessed any more. I would like to recover whatever
| is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
|
| I have tried the following:
|
| 1. Removing the broken PV:
|
| # vgreduce --force vg_hosting /dev/sdc1
|   Physical volume "/dev/sdc1" still in use
|
| # pvmove /dev/sdc1
|   No extents available for allocation

This would indicate that you don't have sufficient free extents to move the
data off of this disk. If you have another disk, you could try adding it to
the VG and then moving the extents.

| 2. Replacing the broken PV:
|
| I was able to create a new PV and restore the VG config/metadata:
|
| # pvcreate --restorefile ... --uuid ... /dev/sdc1
| # vgcfgrestore --file ... vg_hosting
|
| However, vgchange gives this error:
|
| # vgchange -a y
|   device-mapper: resume ioctl on failed: Invalid argument
|   Unable to resume vg_hosting-lv_home (253:4)
|   0 logical volume(s) in volume group "vg_hosting" now active

There should be no need to create a PV and then restore the VG unless the
entire VG is damaged. The configuration should still be available on the
other disks, and adding the new PV and moving the extents should be enough.

| Could someone help me, please?
| I'm in dire need of help to save the data, or at least some of it if
| possible.

Can you not see the PV/VG/LV at all?

--
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax     : 778-782-3045
E-Mail  : jpeltier at sfu.ca
Website : http://www.sfu.ca/itservices
Twitter : @sfu_rcg
Powering Engagement Through Technology
"Build upon strengths and weaknesses will generally take care of
themselves" - Joyce C. Lock
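If a spare disk is available, the add-a-disk-and-move-extents sequence James
describes would look roughly like this. It is only a sketch: /dev/sde1 is a
hypothetical spare device, and pvmove has to be able to read the source PV,
so with a physically dead /dev/sdc1 this path will not get the data back:

# pvcreate /dev/sde1
# vgextend vg_hosting /dev/sde1
# pvmove /dev/sdc1 /dev/sde1
# vgreduce vg_hosting /dev/sdc1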
Thank you, John, for your quick reply.

That is what I hope. But how do I do it? I cannot even activate the LV with
the remaining PVs.

Thanks,
Khem

On Sat, February 28, 2015 7:34 am, John R Pierce wrote:
> On 2/27/2015 4:25 PM, Khemara Lyn wrote:
>
>> Right now, the third hard drive is damaged, and therefore the third PV
>> (/dev/sdc1) cannot be accessed any more. I would like to recover whatever
>> is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
>
> Your data is spread across all 4 drives, and you lost 25% of it, so only 3
> out of 4 blocks of data still exist. Good luck with recovery.
>
> --
> john r pierce                                      37N 122W
> somewhere on the middle of the left coast
Dear James,

Thank you for being quick to help. Yes, I can see all of them with:

# vgs
# lvs
# pvs

Regards,
Khem

On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
[...]
> Can you not see the PV/VG/LV at all?
On 2/27/2015 4:37 PM, James A. Peltier wrote:
> | I was able to create a new PV and restore the VG config/metadata:
> |
> | # pvcreate --restorefile ... --uuid ... /dev/sdc1

Oh, that step means you won't be able to recover ANY of the data that was
formerly on that PV.

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
On Sat, 2015-02-28 at 07:25 +0700, Khemara Lyn wrote:
> I have tried the following:
>
> 1. Removing the broken PV:
>
> # vgreduce --force vg_hosting /dev/sdc1
>   Physical volume "/dev/sdc1" still in use

Next time, try "vgreduce --removemissing <VG>" first. In my experience, any
lvm command using --force often has undesirable side effects.

Regarding getting the LVM functioning again, there is also a --partial
option that is sometimes useful with the various vg* commands when a PV is
missing (see man lvm). And "vgdisplay -v" often regenerates missing metadata
(as in getting a functioning LVM back).

Steve
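A minimal sketch of the non-destructive variant of what Steve describes,
again assuming the VG name from the original post. Take a metadata backup
first, and be aware that "vgreduce --removemissing --force" can discard LVs
that still reference the missing PV, which is why it is not shown here:

# vgcfgbackup vg_hosting
# vgdisplay -v vg_hosting
# vgchange -ay --partial vg_hosting
# lvs -a -o +devices vg_hosting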
OK, it's extremely rude to cross-post the same question across multiple
lists like this at exactly the same time, and without at least indicating
the cross-posting. I just replied to the one on Fedora users before I saw
this post. This sort of thing wastes people's time. Pick one list based on
the best chance of a response and give it 24 hours.

Chris Murphy
OK, sorry about that.

On Sat, February 28, 2015 9:13 am, Chris Murphy wrote:
> OK, it's extremely rude to cross-post the same question across multiple
> lists like this at exactly the same time, and without at least indicating
> the cross-posting. I just replied to the one on Fedora users before I saw
> this post. This sort of thing wastes people's time. Pick one list based on
> the best chance of a response and give it 24 hours.
>
> Chris Murphy
https://lists.fedoraproject.org/pipermail/users/2015-February/458923.html

I don't see how the VG metadata is restored with any of the commands
suggested thus far; I think that's vgcfgrestore. Otherwise I'd think that
LVM has no idea how to do the LE-to-PE mapping.

In any case, this sounds like a data-scraping operation to me. XFS might be
a bit more tolerant because the AGs are distributed across all 4 PVs in this
case, and each AG keeps its own metadata. But I still don't think the
filesystem will be mountable, even read-only. Maybe testdisk can deal with
it, and if not then debugfs -c rdump might be able to get some of the
directories. But for sure the LV has to be active. And I expect that
modifications (resizing anything, fsck'ing) astronomically increase the
chance of total data loss. If it's XFS, xfs_db itself is going to take
longer to read about and understand than just restoring from backup (XFS has
dense capabilities).

On the other hand, Btrfs can handle this situation somewhat well as long as
the filesystem metadata is raid1, which is the mkfs default for multiple
devices. It will permit degraded mounting in such a case, so recovery is
straightforward; missing files are reported in dmesg.

Chris Murphy
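If the LV can be activated in partial mode, a read-only scraping pass along
the lines Chris describes might look like the following. This is only a
sketch: /mnt/rescue and /rescue are example paths, the noload/norecovery
options skip journal/log replay so nothing is written back to the damaged
LV, and debugfs only applies if the filesystem is ext3/ext4:

# vgchange -ay --partial vg_hosting
# mount -o ro,noload /dev/vg_hosting/lv_home /mnt/rescue        (ext3/ext4)
# mount -o ro,norecovery /dev/vg_hosting/lv_home /mnt/rescue    (XFS)
# debugfs -c -R 'rdump /home /rescue' /dev/vg_hosting/lv_home   (ext only, if mounting fails)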