similar to: Looking for a life-save LVM Guru

Displaying 20 results from an estimated 3000 matches similar to: "Looking for a life-save LVM Guru"

2015 Feb 28
1
Looking for a life-save LVM Guru
Dear James, Thank you for being quick to help. Yes, I could see all of them: # vgs # lvs # pvs Regards, Khem On Sat, February 28, 2015 7:37 am, James A. Peltier wrote: > > > ----- Original Message ----- > | Dear All, > | > | I am in desperate need of LVM data rescue for my server. > | I have a VG called vg_hosting consisting of 4 PVs, each contained in a > | separate
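For context, a minimal sketch of the checks being confirmed here, using the VG name vg_hosting from the thread; the extra report columns are an optional addition, not part of the original exchange:

# report PVs, the VG and the LVs; a lost PV shows up as missing / "unknown device"
pvs -o +pv_uuid
vgs vg_hosting
lvs -a -o +devices vg_hosting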
2015 Feb 28
0
Looking for a life-save LVM Guru
----- Original Message ----- | Dear All, | | I am in desperate need of LVM data rescue for my server. | I have a VG called vg_hosting consisting of 4 PVs, each contained in a | separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1). | And one LV, lv_home, was created to use all the space of the 4 PVs. | | Right now, the third hard drive is damaged; and therefore the third PV |
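For reference, a minimal sketch of the first thing usually attempted in this situation, assuming the VG/LV names from the thread (vg_hosting, lv_home); the rescue mount point is illustrative, and data that lived on the failed PV remains unreadable either way:

# activate the VG while tolerating the missing PV (older LVM2 releases use --partial instead)
vgchange -ay --activationmode partial vg_hosting
# mount read-only and copy off whatever is still intact
mount -o ro /dev/vg_hosting/lv_home /mnt/rescue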
2015 Feb 28
2
Looking for a life-save LVM Guru
On 2/27/2015 4:37 PM, James A. Peltier wrote: > | I was able to create a new PV and restore the VG Config/meta data: > | > | # pvcreate --restorefile ... --uuid ... /dev/sdc1 > | oh, that step means you won't be able to recover ANY of the data that was formerly on that PV. -- john r pierce 37N 122W somewhere on the middle of the left coast
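For reference, the sequence that pvcreate --restorefile/--uuid normally belongs to looks roughly like this (VG name from the thread; the archive file name is illustrative). It only recreates LVM metadata on a blank device; as noted above, it cannot bring back the data that was stored on the failed drive:

# list the metadata copies LVM keeps automatically
vgcfgrestore --list vg_hosting
# re-label the replacement device with the old PV UUID from an archived metadata file
pvcreate --uuid <old-pv-uuid> --restorefile /etc/lvm/archive/vg_hosting_00042-1234567890.vg /dev/sdc1
# restore the VG metadata from the same file
vgcfgrestore -f /etc/lvm/archive/vg_hosting_00042-1234567890.vg vg_hosting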
2015 Feb 28
7
Looking for a life-save LVM Guru
On 2/27/2015 4:52 PM, Khemara Lyn wrote: > I understand; I tried it in the hope that, I could activate the LV again > with a new PV replacing the damaged one. But still I could not activate > it. > > What is the right way to recover the remaining PVs left? take a filing cabinet packed full of 10s of 1000s of files of 100s of pages each, with the index cards interleaved in the
2012 Jan 19
1
Wrong PV UUID
Dear All, I have one VG with one LV inside, consisting of four disks (and four PVs); somehow the UUIDs changed. I tried to restore the last known good configuration with pvcreate --uuid xxx --restorefile xxx, but I think the first time I did it I used the wrong UUID for two of the devices. So far I have only used the command above and have not used vgcfgrestore. After carefully reading the configuration,
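A sketch of how the per-device UUIDs can be checked against an LVM metadata backup before re-running pvcreate; the backup path and placeholder names are illustrative, not taken from the original report:

# each pvN section in the backup records the uuid and the device it belonged to
grep -A3 'pv0 {' /etc/lvm/backup/<vgname>
#   id = "xxxxxx-xxxx-..."
#   device = "/dev/sdb1"
# redo the label with the uuid that matches each device, then restore the VG metadata
pvcreate --uuid <correct-uuid> --restorefile /etc/lvm/backup/<vgname> /dev/sdb1
vgcfgrestore -f /etc/lvm/backup/<vgname> <vgname>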
2014 Oct 27
3
"No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
I'm trying to extend a logical volume and I'm doing it as follows: 1- Run `fdisk -l` command and this is the output:
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier:
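A condensed sketch of the usual way forward when the existing disk has no free sectors left: add (or grow) a disk, then extend the VG and LV. The device, VG and LV names below are placeholders, not taken from the thread:

# after attaching a new virtual disk, e.g. /dev/sdb, turn it into a PV
pvcreate /dev/sdb
# add it to the volume group, then grow the LV and its filesystem in one step
vgextend vg_data /dev/sdb
lvextend -r -l +100%FREE /dev/vg_data/lv_data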
2015 Feb 28
0
Looking for a life-save LVM Guru
On 2/27/2015 4:25 PM, Khemara Lyn wrote: > Right now, the third hard drive is damaged; and therefore the third PV > (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever > left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1). your data is spread across all 4 drives, and you lost 25% of it. so only 3 out of 4 blocks of data still exist. good luck with
2018 Dec 05
6
LVM failure after CentOS 7.6 upgrade -- possible corruption
I've started updating systems to CentOS 7.6, and so far I have one failure. This system has two peculiarities which might have triggered the problem. The first is that one of the software RAID arrays on this system is degraded. While troubleshooting the problem, I saw similar error messages mentioned in bug reports indicating that systems would not boot with degraded software
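For orientation, a sketch of the checks typically run from a rescue shell when a degraded md array keeps the system from booting; the array name is illustrative:

# show array state; a degraded RAID1 member list looks like [U_]
cat /proc/mdstat
mdadm --detail /dev/md0
# force a degraded array to run so any LVM PV on top of it becomes visible again
mdadm --run /dev/md0
pvscan
vgchange -ay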
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added 2 36 GB 10K SCSI drives in it, they are detected as sdc and sdf. What should I do if I
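One common answer, sketched under the assumption that sdc and sdf should become another RAID1 pair feeding VolGroup00; the md device number and LV name are illustrative:

# partition the new disks as Linux raid autodetect, then build a new mirror
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# turn the mirror into a PV and add it to the existing VG
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3
# grow a logical volume and its filesystem into the new space
lvextend -r -L +10G /dev/VolGroup00/LogVol00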
2015 Feb 19
3
iostat a partition
Hey guys, I need to use iostat to diagnose a disk latency problem we think we may be having. So if I have this disk partition:
[root at uszmpdblp010la mysql]# df -h /mysql
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/MysqlVG-MysqlVol  9.9G  1.1G  8.4G  11% /mysql
And I want to correlate that to the output of fdisk -l, so that I can feed the disk
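A small sketch of one way to tie the mounted LVM volume back to a kernel device that iostat can report on, using the names from the thread; the dm-3 number is illustrative and the -N flag depends on the sysstat version:

# the /dev/mapper name is a symlink to the underlying dm-N device (e.g. ../dm-3)
ls -l /dev/mapper/MysqlVG-MysqlVol
# lsblk shows which physical disk/partition that dm device sits on
lsblk
# extended per-device statistics every 5 seconds for just that device
iostat -dxN dm-3 5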
2012 Jan 16
4
VirtIO disk 'leakage' across guests?
We are in the process of migrating several stand-alone server hosts onto a CentOS-6 kvm virtual host. We also use Webmin to administer our hosts. All of the guests, without exception, have been cloned from a prototype guest using virt-manager. All of the additional VirtIO disks assigned to some of the guests have been added through virt-manager as well. Recently I have encountered a situation
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
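One likely place the stale information survives is the copy of mdadm.conf baked into the initramfs; a sketch of regenerating both, assuming a dracut-based CentOS 6/7 setup:

# regenerate ARRAY lines from the arrays actually assembled right now
mdadm --detail --scan > /etc/mdadm.conf.new
# re-add the DEVICE/MAILADDR lines, review, then replace the old file
mv /etc/mdadm.conf.new /etc/mdadm.conf
# rebuild the initramfs so the copy used at boot time matches
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)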
2013 Dec 16
2
LVM recovery after pvcreate
Hi all, I had CentOS 5.9 installed with one of its volumes (non-root) on LVM:
... /dev/vgapps/lvapps /opt/apps ext3 defaults 1 2 ...
Then I installed CentOS 6.4 on this server, but without exporting this volume (I wanted to reuse it). After that, instead of importing it, I did:
# pvcreate /dev/sddlmac
# vgcreate vgapps /dev/sddlmac
And then realised that I should have
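For reference, a rough sketch of the recovery path usually attempted after an accidental pvcreate/vgcreate, assuming an archived copy of the original VG's metadata is still available (for example from the old root filesystem's /etc/lvm/archive); the archive file name is illustrative and success depends on how much the new labels overwrote:

# remove the freshly created VG and PV label sitting on top of the old one
vgremove vgapps
pvremove -ff /dev/sddlmac
# recreate the original PV with its old UUID, then restore the old VG metadata
pvcreate --uuid <old-uuid> --restorefile /path/to/old/archive/vgapps_00001-111111.vg /dev/sddlmac
vgcfgrestore -f /path/to/old/archive/vgapps_00001-111111.vg vgapps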
2014 Feb 20
2
Growing HW RAID arrays, Online
We add disks to an LSI raid array periodically to increase the amount of available space for business needs. It is understood that this process starts with metal, and has many layers that must each adjust to make use of the additional space. Each of these layers also says that it can do that 'online' without interruption or rebooting. But making it happen is not that easy. When the HW
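A condensed sketch of that layer-by-layer sequence once the controller finishes its expansion; device, VG and LV names are placeholders, and the partition step only applies when the PV lives inside a partition rather than on the whole disk:

# make the kernel re-read the larger size the controller now reports
echo 1 > /sys/block/sda/device/rescan
# grow the partition holding the PV if there is one (e.g. with parted's resizepart),
# then grow the PV, the LV and its filesystem
pvresize /dev/sda2
lvextend -r -l +100%FREE /dev/vg_data/lv_data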
2016 May 18
4
enlarging partition and its filesystem
Hi all! I've got a VM at work running C6 on HyperV (no, it's not my fault, that's what the company uses. I'd rather gag myself than own one of those things.) I ran out of disk space in the VM, so the admin enlarged the virtual disk, but now I realize I don't know how to enlarge the partition and its filesystem. I'll be googling, but in case I miss it, it'd be great if
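A sketch of the usual C6 sequence once the virtual disk has been enlarged, assuming the filesystem sits directly on a partition; with LVM in between, pvresize and lvextend -r replace the last step. growpart comes from the cloud-utils-growpart package, and the device/partition numbers are illustrative:

# make the kernel notice the larger virtual disk
echo 1 > /sys/block/sda/device/rescan
# grow the last partition in place (or delete/recreate it in fdisk with the same start sector)
growpart /dev/sda 1
partprobe /dev/sda
# grow the ext3/ext4 filesystem to fill the partition
resize2fs /dev/sda1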
2015 Feb 28
0
Looking for a life-save LVM Guru
Dear John, I understand; I tried it in the hope that I could activate the LV again with a new PV replacing the damaged one. But I still could not activate it. What is the right way to recover the remaining PVs? Regards, Khem On Sat, February 28, 2015 7:42 am, John R Pierce wrote: > On 2/27/2015 4:37 PM, James A. Peltier wrote: > >> | I was able to create a new PV and restore
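For reference, the other direction usually suggested at this point, sketched with the VG name from the thread; it is destructive to any LV that had extents on the failed drive and, like the partial activation shown earlier, cannot reconstruct the lost data:

# drop the missing PV from the VG metadata so the VG can be activated normally
vgreduce --removemissing --force vg_hosting
vgchange -ay vg_hosting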
2015 Jun 10
2
[PATCH] New API: btrfs_replace_start
Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
---
 daemon/btrfs.c                    | 40 +++++++++++++++++++++++++++++++++++++++
 generator/actions.ml              | 19 +++++++++++++++++++
 tests/btrfs/test-btrfs-devices.sh |  8 ++++++++
 3 files changed, 67 insertions(+)

diff --git a/daemon/btrfs.c b/daemon/btrfs.c
index 39392f7..acc300d 100644
--- a/daemon/btrfs.c
+++
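For orientation, the new libguestfs API in this patch corresponds to the device-replace operation that the btrfs command line exposes roughly like this (mount point and devices are illustrative):

# start replacing one device of a mounted btrfs filesystem and watch its progress
btrfs replace start /dev/sdb /dev/sdc /mnt/btrfs
btrfs replace status /mnt/btrfs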
2015 Mar 02
3
Looking for a life-save LVM Guru
Dear Chris, James, Valeri and all, Sorry for not having responded; I'm still struggling with the recovery, with no success. I've been trying to set up a new system with the exact same scenario (4 2TB hard drives, removing the 3rd one afterwards). I still cannot recover. We did have a backup system, but it went bad a while ago and we did not have a replacement in time until this
2015 Jun 12
2
Re: [PATCH] New API: btrfs_replace_start
On 2015-06-12 17:12, Pino Toscano wrote: > On Friday 12 June 2015 10:58:34 Pino Tsao wrote: >> Hi, >> >> On 2015-06-11 17:43, Pino Toscano wrote: >>> Hi, >>> >>> On Wednesday 10 June 2015 17:54:18 Pino Tsao wrote: >>>> Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com> >>>> --- >>>> daemon/btrfs.c