Displaying 20 results from an estimated 10000 matches similar to: "lvm monitoring"
2016 Jul 26
0
[PATCH 4/5] daemon: lvm: list PVs/VGs/LVs with --foreign
The appliance has no LVM system ID set, which means that lvm commands
will ignore VGs with a system ID set to anything. Since we want to work
with them, pass --foreign at least when listing them to see them.
See also lvmsystemid(7).
---
daemon/lvm.c | 10 ++++++----
generator/daemon.ml | 1 +
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/daemon/lvm.c b/daemon/lvm.c
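A minimal sketch of the behaviour this patch relies on (output omitted; this assumes a guest VG whose LVM system ID belongs to another host):

# without --foreign, VGs owned by a foreign system ID are skipped
vgs
lvs
# with --foreign they are listed as well, which is what the daemon needs
pvs --foreign
vgs --foreign
lvs --foreign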
2015 Feb 28
1
Looking for a life-saving LVM Guru
Dear James,
Thank you for being quick to help.
Yes, I could see all of them:
# vgs
# lvs
# pvs
Regards,
Khem
On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
>
>
> ----- Original Message -----
> | Dear All,
> |
> | I am in desperate need for LVM data rescue for my server.
> | I have a VG called vg_hosting consisting of 4 PVs, each contained in a
> | separate
2018 Jul 19
1
Re: [PATCH 2/3] New API: lvm_scan, deprecate vgscan (RHBZ#1602353).
On Wednesday, 18 July 2018 15:37:24 CEST Richard W.M. Jones wrote:
> The old vgscan API literally ran vgscan. When we switched to using
> lvmetad (in commit dd162d2cd56a2ecf4bcd40a7f463940eaac875b8) this
> stopped working because lvmetad now ignores plain vgscan commands
> without the --cache option.
>
> We documented that vgscan would rescan PVs, VGs and LVs, but without
>
2010 Jul 20
2
LVM issue
Hi. We use AoE disks for some of our systems. Currently, a 15.65Tb filesystem we have is full. I extended the LV by a further 4Tb, but resize4fs could not handle a filesystem over 16Tb (CentOS 5.5). I then reduced the LV by the same amount and attempted to create a new LV, but I get this error message in the process:
lvcreate -v -ndata2 -L2T -t aoe
Test mode: Metadata will NOT be updated.
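For context, a hedged sketch of checking whether the VG actually has room before retrying the command above (this assumes "aoe" is the VG name, as the lvcreate invocation suggests):

# show total and free space in the VG
vgs -o vg_name,vg_size,vg_free aoe
# dry run first (as above), then create the LV for real if space allows
lvcreate -n data2 -L 2T -t aoe
lvcreate -n data2 -L 2T aoe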
2016 Jul 26
1
Re: [PATCH 4/5] daemon: lvm: list PVs/VGs/LVs with --foreign
On Tue, Jul 26, 2016 at 05:41:28PM +0200, Pino Toscano wrote:
> The appliance has no LVM system ID set, which means that lvm commands
> will ignore VGs with a system ID set to anything. Since we want to work
> with them, pass --foreign at least when listing them to see them.
>
> See also lvmsystemid(7).
This is sort of a hack, if I'm understanding correctly. Can we not
2018 Jul 18
0
[PATCH 2/3] New API: lvm_scan, deprecate vgscan (RHBZ#1602353).
The old vgscan API literally ran vgscan. When we switched to using
lvmetad (in commit dd162d2cd56a2ecf4bcd40a7f463940eaac875b8) this
stopped working because lvmetad now ignores plain vgscan commands
without the --cache option.
We documented that vgscan would rescan PVs, VGs and LVs, but without
activating them.
I have introduced a new API (lvm_scan) which scans or rescans PVs, VGs
and LVs. It
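At the LVM command level, a scan plus separate activation corresponds roughly to the following (a sketch only; the patch implements this inside the daemon, and the exact commands it runs are not shown in this excerpt):

# rescan PVs and repopulate lvmetad
pvscan --cache
# rescan VGs through lvmetad (plain vgscan is ignored when lvmetad is in use)
vgscan --cache
# activation remains a separate, optional step
vgchange -ay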
2016 Jul 26
8
[PATCH 0/5] Improve LVM handling in the appliance
Hi,
this series improves the way LVM is used in the appliance: in
particular, lvmetad can now actually run, and with the correct
configuration.
Also improve the listing strategies.
Thanks,
Pino Toscano (5):
daemon: lvm-filter: set also global_filter
daemon: lvm-filter: start lvmetad better
daemon: lvm: improve filter for LVs with activationskip flag set
daemon: lvm: list
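For reference, a hedged example of what setting both filter and global_filter looks like in lvm.conf (the device patterns here are hypothetical, not the appliance's actual filter):

devices {
    # filter is consulted by LVM commands; global_filter is also honoured by lvmetad
    filter = [ "a|^/dev/sda|", "r|.*|" ]
    global_filter = [ "a|^/dev/sda|", "r|.*|" ]
}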
2018 Dec 05
0
LVM failure after CentOS 7.6 upgrade -- possible corruption
> I've started updating systems to CentOS 7.6, and so far I have one
> failure.
>
> This system has two peculiarities which might have triggered the
> problem. The first is that one of the software RAID arrays on this
> system is degraded. While troubleshooting the problem, I saw similar
> error messages mentioned in bug reports indicating that GNU/Linux
> systems
2018 Dec 05
0
LVM failure after CentOS 7.6 upgrade -- possible corruption
My gut feeling is that this is related to a RAID1 issue I'm seeing with 7.6.
See email thread "CentOS 7.6: Software RAID1 fails the only meaningful test"
I suggest trying to boot from an earlier kernel. Good luck!
Ben S
On Wednesday, December 5, 2018 9:27:22 AM PST Gordon Messmer wrote:
> I've started updating systems to CentOS 7.6, and so far I have one failure.
>
2018 Jan 14
0
[PATCH v2 1/3] appliance: init: Avoid running degraded md devices
The issue:
- raid1 will be in a degraded state if one of its components is a logical volume (LV)
- raid0 will be completely inoperable (inaccessible from within the appliance) if one of its components is an LV
- raidN: you can expect the same issue for any RAID level, depending on how many components are inaccessible at the time mdadm is running and on the RAID redundancy.
It happens because mdadm is launched prior to lvm
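One way to picture the ordering problem at the command level (a hypothetical sketch, not the actual appliance init or the patch):

# if LVs backing md components are activated first...
vgchange -ay
# ...then assembly sees all components; --no-degraded additionally
# refuses to start arrays that are still missing members
mdadm --assemble --scan --no-degraded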
2012 Jun 01
2
installation and configuration documentation for XCP
I've installed XCP 1.5-beta. I'm a little confused as to what has
happened. Everything so far seems to work. However, I need more
information on what was done to my hard disk during the installation
and how the file system was set up.
In particular, I was investigating how to create a new logical volume
to place my ISO files on, to use as my ISO storage (SR). I notice (see
below with
2011 Apr 02
3
Best way to extend pv partition for LVM
I've replaced disks in a hardware RAID 1 with larger disks and enlarged
the array. Now I have to find a way to tell LVM about the extra space.
It seems there are two ways:
1. delete partition with fdisk and recreate a larger one. This is
obviously a bit tricky if you do not want to lose data; I haven't
investigated it further yet.
2. create another partition on the disk, pvcreate another
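Both approaches end the same way; a rough sketch (device and VG names are hypothetical), plus pvresize once a partition has been enlarged in place:

# option 2: add a new partition as an extra PV and grow the VG with it
pvcreate /dev/sda3
vgextend myvg /dev/sda3
# after enlarging an existing PV partition in place (option 1), tell LVM about it
pvresize /dev/sda2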
2018 Dec 05
1
LVM failure after CentOS 7.6 upgrade -- possible corruption
On Wed, 5 Dec 2018 at 14:27, Benjamin Smith <lists at benjamindsmith.com> wrote:
>
> My gut feeling is that this is related to a RAID1 issue I'm seeing with 7.6.
> See email thread "CentOS 7.6: Software RAID1 fails the only meaningful test"
>
You might want to point out which list you posted it on since it
doesn't seem to be this one.
> I suggest trying to
2010 Mar 18
2
[PATCH 0/2] Add API for querying the relationship between LVM objects
Currently I find it hard to determine the relationship between
LVM objects, by which I mean "what PVs contain a VG?", or "what LVs
are contained in a VG?"
This simple API exposes that to callers.
{lv,vg,pv}uuid:
Return the UUID of an LVM object. You can already get this
using (eg.) lvs_full, but this is a lot less faffing around.
vg{lv,pv}uuids:
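A rough guestfish session using the proposed calls might look like this (VG and device names are hypothetical, and this assumes the calls land with these names):

><fs> pvuuid /dev/sda2
><fs> vguuid VG
><fs> lvuuid /dev/VG/LV
><fs> vglvuuids VG
><fs> vgpvuuids VG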
2016 Jan 28
0
[PATCH v2] lvm: support lvm2 older than 2.02.107
lvm2 2.02.107 adds the -S/--select option used in lvs to filter out only
public LVs (see RHBZ#1278878). To make this work again with versions
of lvm2 older than that, only on old versions do we filter out thin
layouts and compose the resulting device strings ourselves.
The filtering done is much simpler than what "-S lv_role=public" will
do, but it should be good enough for our needs.
---
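Roughly, the two code paths correspond to the following lvs invocations (a simplified sketch; the daemon's actual field list is not shown in this excerpt):

# lvm2 >= 2.02.107: let lvs do the filtering
lvs -o vg_name,lv_name -S lv_role=public
# older lvm2: list everything with its attributes and filter in the daemon,
# skipping LVs whose lv_attr marks them as thin pools or metadata volumes
lvs -o vg_name,lv_name,lv_attr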
2018 May 24
2
[PATCH v2] daemon: Move lvmetad to early in the appliance boot process.
When the daemon starts up it creates a fresh (empty) LVM configuration
and starts up lvmetad (which depends on the LVM configuration).
However, this appears to cause problems: some types of PV seem to
require lvmetad and don't work without it
(https://bugzilla.redhat.com/show_bug.cgi?id=1581810). If we don't
start lvmetad earlier, the device nodes are not created.
Therefore move the
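The intended ordering can be sketched roughly as follows (an illustration only, not the actual appliance init code):

# start the metadata daemon early in the boot sequence
lvmetad
# populate its cache so PVs are known and LVs can be activated later
pvscan --cache
vgchange -ay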
2018 Dec 05
6
LVM failure after CentOS 7.6 upgrade -- possible corruption
I've started updating systems to CentOS 7.6, and so far I have one failure.
This system has two peculiarities which might have triggered the
problem. The first is that one of the software RAID arrays on this
system is degraded. While troubleshooting the problem, I saw similar
error messages mentioned in bug reports indicating that GNU/Linux
systems would not boot with degraded software
2016 Jan 27
4
[PATCH] lvm: support lvm2 older than 2.02.107
lvm2 2.02.107 adds the -S/--select option used in lvs to filter out only
public LVs (see RHBZ#1278878). To make this work again with versions
of lvm2 older than that, only on old versions do we filter out thin
layouts and compose the resulting device strings ourselves.
The filtering done is much simpler than what "-S lv_role=public" will
do, but it should be good enough for our needs.
---
2017 Nov 07
0
Re: using LVM thin pool LVs as a storage for libvirt guest
Please don't use LVM thin for VMs. In our hosting in Russia we have
100-150 VPSes on each node with an LVM thin pool on SSD, and we get locks,
slowdowns and other bad things because of COW. After we switched to
qcow2 files on a plain ext4 filesystem on SSD, we are happy =).
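The alternative they describe is just a plain file-backed image; for example (path and size are hypothetical):

qemu-img create -f qcow2 /var/lib/libvirt/images/guest01.qcow2 50G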
2017-11-04 23:21 GMT+03:00 Jan Hutař <jhutar@redhat.com>:
> Hello,
> as usual, I'm a few years behind trends, so I have learned
2010 Oct 04
1
Mounting an lvm
I converted a system disk from a VirtualBox
VM and added it to the config of a qemu VM.
All seems well until I try to mount it. The
virtual machine shows data for the disk
image using commands like:
pvs
lvs
lvdisplay xena-1
but there is no /dev/xena-1/root to be
mounted. I also cannot seem to figure out
whether the LVM-related modules are available
for the virtual machine kernel.
Has anyone
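A common next step in this situation is to activate the VG so the device nodes appear (assuming the VG really is named xena-1, as the lvdisplay call suggests):

vgscan
vgchange -ay xena-1
ls /dev/xena-1/
mount /dev/xena-1/root /mnt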