Displaying 20 results from an estimated 21 matches for "vgreduc".
2014 Oct 27
3
"No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
...onsole is this
/dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
/dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
Cannot change VG vg_devserver while PVs are missing. Consider vgreduce --removemissing.
Following that suggestion I ran vgreduce --removemissing vg_devserver, but got
this error: WARNING: Partial LV lv_root needs to be repaired or removed. There
are still partial LVs in VG vg_devserver. To remove them unc...
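A minimal recovery sequence for this state, sketched on the assumption that the data on the partial LVs is expendable or restorable from backup (--force discards them):
# lvs -o lv_name,lv_attr vg_devserver
# vgreduce --removemissing --force vg_devserver
# vgchange -ay vg_devserver
The lvs call shows which LVs are partial before anything is discarded; the final vgchange reactivates whatever survives in the VG.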
2015 Feb 28
9
Looking for a life-saving LVM Guru
...ll the space of the 4 PVs.
Right now, the third hard drive is damaged, and therefore the third PV
(/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
I have tried with the following:
1. Removing the broken PV:
# vgreduce --force vg_hosting /dev/sdc1
Physical volume "/dev/sdc1" still in use
# pvmove /dev/sdc1
No extents available for allocation
2. Replacing the broken PV:
I was able to create a new PV and restore the VG Config/meta data:
# pvcreate --restorefile ... --uuid ... /dev/sdc1
# vgcfgres...
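The restore step spelled out, with placeholder values (the UUID comes from the "Couldn't find device with uuid ..." error or from /etc/lvm/backup, and the backup file path is an assumption):
# pvcreate --uuid <uuid-of-missing-pv> --restorefile /etc/lvm/backup/vg_hosting /dev/sdc1
# vgcfgrestore -f /etc/lvm/backup/vg_hosting vg_hosting
# vgchange -ay vg_hosting
This only restores metadata; the data that lived on the dead disk is not recovered, so any LV with extents on the old sdc1 will contain garbage in those ranges.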
2012 Jan 16
4
VirtIO disk 'leakage' across guests?
We are in the process of migrating several stand-alone
server hosts onto a CentOS-6 kvm virtual host. We also
use Webmin to administer our hosts. All of the guests,
without exception, have been cloned from a prototype guest
using virt-manager. All of the additional VirtIO disks
assigned to some of the guests have been added through
virt-manager as well.
Recently I have encountered a situation
2014 Jun 24
3
How to remove LVM Physical Volume from Volume Group?
Hi. I have a volume group (let's say) vg_data.
It consists of /dev/sdd5, sdd6, and sdd7.
I added sdc5.
Now I want to remove (free) sdd7 and use it for a RAID partition.
What are the commands (in order) I need to perform? I failed to find a
clear howto.
vg_data has only one partition, total size is over 1TB, free space is
about 500GB so
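The standard sequence for this, assuming vg_data's roughly 500GB of free extents can absorb whatever is currently allocated on sdd7:
# pvmove /dev/sdd7
# vgreduce vg_data /dev/sdd7
# pvremove /dev/sdd7
pvmove relocates the allocated extents onto the remaining PVs while the LVs stay online; vgreduce then drops the empty PV from the VG, and pvremove wipes its LVM label so the partition can be reused for RAID.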
2016 Jan 22
4
LVM mirror database to ramdisk
...pvcreate /dev/ramdisk
vgextend vg /dev/ramdisk
lvconvert -m 1 --corelog vg/lv_database /dev/ramdisk
Even with lv_database being 35G, it doesn't take long to activate the
mirror.
I haven't decided where to put the commands to turn off the lvm
mirror.
lvconvert -m 0 vg/lv_database
vgreduce vg /dev/ramdisk
pvremove /dev/ramdisk
I haven't put this in real world use, yet.
On its face, this might speed up database access. Would we expect it
to speed up database access in real world use?
Should I document the process so others could know how to do this? I
realize new...
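A natural home for the teardown commands is a small script run from a shutdown hook; a sketch (the script path and how it is wired into shutdown are assumptions):
#!/bin/sh
# /usr/local/sbin/ramdisk-mirror-down
# Drop the ramdisk leg before shutdown so the VG isn't left
# referencing a PV that vanishes at power-off.
lvconvert -m 0 vg/lv_database
vgreduce vg /dev/ramdisk
pvremove /dev/ramdisk
One way to run it automatically is as the ExecStop= of a oneshot systemd service with RemainAfterExit=yes.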
2015 Feb 28
1
Looking for a life-saving LVM Guru
...drive is damaged; and therefore the third PV
> | (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
> | is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
> |
> | I have tried with the following:
> |
> | 1. Removing the broken PV:
> |
> | # vgreduce --force vg_hosting /dev/sdc1
> | Physical volume "/dev/sdc1" still in use
> |
> | # pvmove /dev/sdc1
> | No extents available for allocation
>
>
> This would indicate that you don't have sufficient extents to move the
> data off of this disk. If you have...
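The suggested fix, sketched with an assumed spare device name (/dev/sde1):
# pvcreate /dev/sde1
# vgextend vg_hosting /dev/sde1
# pvmove /dev/sdc1
Note that pvmove can only relocate extents it can still read; extents on a truly dead sdc1 are unrecoverable this way, and for those the vgreduce --removemissing route is the only way out.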
2015 Jun 24
4
LVM hatred, was Re: /boot on a separate partition?
...ing have to do with VMs.
> Do you have some for straight-on-the-server, non-VM cases?
I've used LVM on servers with hot-swap drives to migrate to new storage
without downtime a number of times. Add new drives to the system,
configure RAID (software or hardware), pvcreate, vgextend, pvmove,
vgreduce, and pvremove (and maybe an lvextend and resize2fs/xfs_growfs).
Never unmounted a filesystem, just some extra disk I/O.
Even in cases where I had to shut down or reboot a server to get drives
added, moving the data could otherwise mean a long downtime, but with
LVM I can live-migrate from place to place.
LVM sn...
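That sequence end to end, sketched with assumed device and VG names (/dev/md9 as the new RAID array, /dev/sdb1 as the old PV, vg0/data as the LV to grow):
# pvcreate /dev/md9
# vgextend vg0 /dev/md9
# pvmove /dev/sdb1
# vgreduce vg0 /dev/sdb1
# pvremove /dev/sdb1
# lvextend -l +100%FREE vg0/data
# xfs_growfs /data
The last two steps are the optional grow-into-the-new-space part; for ext4 the final command would be resize2fs instead of xfs_growfs.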
2017 Apr 23
0
Proper way to remove a qemu-nbd-mounted volume using lvm
...point>
lvchange -an <all qemu-nbd-related 'LV Path's found from lvdisplay above>
vgchange -an <qemu-nbd-related volume>
Now what? How do I get the volume out of the list so I can use
'qemu-nbd -d /dev/nbd0' to disassociate the image from /dev/nbd0?
vgreduce seems to be for volumes which have multiple underlying
devices. I started to use vgremove but, when it started prompting for
confirmation about removing logical volumes, I wasn't sure exactly what
it was going to do and responded 'no'.
If there is a web reference explaining this s...
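Once the LVs and VG are deactivated (as above), the remaining step is just detaching the nbd device; a sketch:
# qemu-nbd -d /dev/nbd0
# pvscan --cache
vgreduce and vgremove are the wrong tools here: the first edits VG membership and the second would destroy the VG (and the LVs inside the image). The pvscan --cache refreshes LVM's device cache on hosts running lvmetad, which clears the stale PV from pvs/vgs output.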
2016 Jan 22
0
LVM mirror database to ramdisk
...vconvert -m 1 --corelog vg/lv_database /dev/ramdisk
>
> Even with lv_database being 35G, it doesn't take long to activate the
> mirror.
>
> I haven't decided where to put the commands to turn off the lvm
> mirror.
> lvconvert -m 0 vg/lv_database
> vgreduce vg /dev/ramdisk
> pvremove /dev/ramdisk
>
> I haven't put this in real world use, yet.
>
> On its face, this might speed up database access. Would we expect it
> to speed up database access in real world use?
>
> Should I document the process so othe...
2015 Feb 28
0
Looking for a life-saving LVM Guru
...4 PVs.
|
| Right now, the third hard drive is damaged; and therefore the third PV
| (/dev/sdc1) cannot be accessed anymore. I would like to recover whatever
| is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
|
| I have tried with the following:
|
| 1. Removing the broken PV:
|
| # vgreduce --force vg_hosting /dev/sdc1
| Physical volume "/dev/sdc1" still in use
|
| # pvmove /dev/sdc1
| No extents available for allocation
This would indicate that you don't have sufficient extents to move the data off of this disk. If you have another disk then you could try adding...
2013 Aug 14
1
Can't remove a physical_volume from a volume_group
Hello,
I have this bit of code that tries to remove a physical volume from the VG
(which consists of /dev/sda5 and /dev/sdb1):
volume_group { "system":
  ensure           => absent,
  physical_volumes => "/dev/sdb1",
}
But I get this error:
Error: Execution of
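For reference, shrinking the VG by one PV comes down to the following raw LVM commands, assuming /dev/sda5 has enough free extents to take over what is allocated on /dev/sdb1 (the pvmove is a prerequisite the manifest cannot express):
# pvmove /dev/sdb1
# vgreduce system /dev/sdb1
# pvremove /dev/sdb1
Note also that ensure => absent on a volume_group resource describes removing the whole VG, not just one of its PVs.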
2007 Nov 29
1
RAID, LVM, extra disks...
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10k SCSI drives; they are detected as sdc and
sdf.
What should I do if I
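One continuation consistent with the existing layout, sketched with assumed partition numbers: mirror the new pair and add it to VolGroup00 as a third PV:
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# pvcreate /dev/md3
# vgextend VolGroup00 /dev/md3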
2011 Jun 14
0
OT: clear error from premature disk removal from LVM
...hat
trying to manipulate sdd may cause problems in the future, so I am wary of
doing anything with it before I clear up this issue.
What should my next steps be? I've seen recommendations (in other
somewhat similar situations) to try an lvscan or vgscan to refresh the
list of volumes; or to try vgreduce --removemissing to remove the
now-gone volumes from what LVM thinks is there. I do also still have
the original disks, which I could put back and try to get LVM to
re-find, but that would be a pain. I am hoping to avoid a reboot, since the
current LV seems unaffected and is in use, but I can do it if...
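Both suggestions combined, in an order that never touches sdd (the VG name is an assumption):
# pvscan
# vgscan
# vgreduce --removemissing vg_old
--removemissing only rewrites the metadata on the surviving PVs to drop the records of the vanished ones, so it should leave the LV that is still in use alone.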
2015 Jan 10
3
LVM - pvmove and multiple servers
Hi All.
Looking for some guidance/experience with LVM and pvmove.
I have a LUN/PV being presented from an iSCSI SAN. The LUN/PV is presented
to 5 servers as a shared VG; they all have LVs they use for data, and they
are all connected via iSCSI.
As the SAN I am using is being replaced I need to move onto a new unit.
My migration strategy at this time is to
1. Present a new LUN from the new SAN
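The single-host version of the rest of that strategy, sketched with assumed device names (on a VG shared by 5 hosts this additionally requires cluster-aware locking such as clvmd, or deactivating the VG everywhere but the host running the move):
# pvcreate /dev/mapper/new-lun
# vgextend vg_shared /dev/mapper/new-lun
# pvmove /dev/mapper/old-lun
# vgreduce vg_shared /dev/mapper/old-lun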
2016 Jan 22
2
LVM mirror database to ramdisk
...dev/ramdisk
> >
> > Even with lv_database being 35G, it doesn't take long to activate the
> > mirror.
> >
> > I haven't decided where to put the commands to turn off the lvm
> > mirror.
> > lvconvert -m 0 vg/lv_database
> > vgreduce vg /dev/ramdisk
> > pvremove /dev/ramdisk
> >
> > I haven't put this in real world use, yet.
> >
> > On its face, this might speed up database access. Would we expect it
> > to speed up database access in real world use?
> >
> >...
2020 May 13
4
CentOS 7 - xfs shrink & expand
I'm having some difficulty finding a method to shrink my /home to expand
my /. They both correspond to LVM logical volumes. It is my understanding
that one cannot shrink an xfs filesystem. One must back it up (xfsdump),
remove it (lvremove), redefine it, and then restore it back (xfsrestore).
Okay, I'm running into a problem where /home needs to be "unused". I
tried going in to
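That dump/recreate/restore cycle spelled out, with the VG name, new size, and dump location all being assumptions:
# xfsdump -f /root/home.dump /home
# umount /home
# lvremove /dev/vg_c7/home
# lvcreate -L 100G -n home vg_c7
# mkfs.xfs /dev/vg_c7/home
# mount /dev/vg_c7/home /home
# xfsrestore -f /root/home.dump /home
# lvextend -l +100%FREE /dev/vg_c7/root
# xfs_growfs /
The dump is taken while /home is still mounted; the unmount is what satisfies the "unused" requirement for lvremove, and the dump file must live somewhere outside /home.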
2009 Nov 09
6
Move domU lvm based to another dom0
Hi guys, I need to move an LVM-based domU from one dom0 to another dom0.
How do you guys do this?
xm save/restore doesn't have an option to specify an LVM target as the storage.
Thanks
Chris
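A common approach, sketched with assumed VG/LV names: create an identically sized LV on the target dom0 and copy the device block for block while the guest is shut down:
# lvcreate -L 20G -n domu1 vg0    (on the destination dom0, same size as the source LV)
# dd if=/dev/vg0/domu1 bs=4M | ssh newdom0 "dd of=/dev/vg0/domu1 bs=4M"
Then copy the domU config file across and xm create it on the new host.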
2015 Jun 24
6
LVM hatred, was Re: /boot on a separate partition?
On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
> Ok, you made me curious. Just how dramatic can it be? From where I'm
> sitting, a read/write to a disk takes the amount of time it takes, the
> hardware has a certain physical speed, regardless of the presence of
> LVM. What am I missing?
Well, there's best and worst case scenarios. Best case for file-backed
VMs is
2010 Feb 27
17
XEN and clustering?
Hi.
I'm using Xen on a RHEL cluster, and I have strange problems. I gave raw
volumes from storage to Xen virtual machines. With Windows, I have a
problem that the nodes don't see the volume as the same one... for example:
clusternode1# clusvcadm -d vm:winxp
clusternode1# dd if=/dev/mapper/winxp of=/node1winxp
clusternode2# dd if=/dev/mapper/winxp of=/node2winxp
clusternode3# dd