Displaying 20 results from an estimated 3000 matches similar to: "[PATCH <= 1.32] appliance: Disable lvmetad."
2018 May 24
2
[PATCH v2] daemon: Move lvmetad to early in the appliance boot process.
When the daemon starts up it creates a fresh (empty) LVM configuration
and starts up lvmetad (which depends on the LVM configuration).
However this appears to cause problems: Some types of PV seem to
require lvmetad and don't work without it
(https://bugzilla.redhat.com/show_bug.cgi?id=1581810). If we don't
start lvmetad earlier, the device nodes are not created.
Therefore move the
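[The patch body is truncated in this result. A minimal sketch of the kind of early startup it describes, assuming the lvmetad binary is present in the appliance and uses the usual /run/lvm runtime directory; this is an illustration, not the actual patch:]

    # Hypothetical early-boot fragment: create lvmetad's runtime directory and
    # start the daemon before any PVs are scanned, so device nodes get created.
    mkdir -p /run/lvm
    lvmetad
    # Later LVM commands (pvs, vgchange -ay, ...) then talk to lvmetad
    # through /run/lvm/lvmetad.socket.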
2016 Jul 26
0
[PATCH 2/5] daemon: lvm-filter: start lvmetad better
Currently lvmetad is started in init, and thus uses the system
(= appliance) configuration of lvm. Later on, in the daemon, a local
copy of the lvm configuration is set up and selected via the
LVM_SYSTEM_DIR environment variable: this means only the programmes
executed by the daemon will use the local lvm configuration, and not
lvmetad.
Thus manually start lvmetad from the daemon, right
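[A hedged sketch of the idea, in shell form for illustration only (the real daemon code is C and the patch is truncated in this result); the /tmp/lvm path is an assumed placeholder:]

    # Assumed layout: the daemon has already written its local lvm.conf
    # under /tmp/lvm (hypothetical path).
    export LVM_SYSTEM_DIR=/tmp/lvm
    mkdir -p /run/lvm
    lvmetad    # started here, it inherits LVM_SYSTEM_DIR, unlike an init-started copy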
2016 May 12
0
[PATCH 09/11] appliance: fix errors in init for SLE / openSUSE
Running the init on openSUSE and SLE machines turned up minor errors
(a sketch of the fixes follows the diff below):
* skip the /etc/mtab symlink creation if the file already exists.
* make sure /run/lvm is created, or lvmetad will complain.
---
appliance/init | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/appliance/init b/appliance/init
index 413a95f..b22032e 100755
--- a/appliance/init
+++ b/appliance/init
@@
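[The diff itself is truncated in this result. A minimal sketch of the two fixes listed above, assuming the usual /proc/mounts target for the /etc/mtab symlink:]

    # Only create the /etc/mtab symlink if the file does not already exist
    # (openSUSE/SLE appliances may ship one).
    if [ ! -e /etc/mtab ]; then
        ln -s /proc/mounts /etc/mtab
    fi
    # lvmetad needs its runtime directory, otherwise it complains at startup.
    mkdir -p /run/lvm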
2018 May 24
0
Re: [PATCH v2] daemon: Move lvmetad to early in the appliance boot process.
On Thursday, 24 May 2018 16:01:22 CEST Richard W.M. Jones wrote:
> When the daemon starts up it creates a fresh (empty) LVM configuration
> and starts up lvmetad (which depends on the LVM configuration).
>
> However this appears to cause problems: Some types of PV seem to
> require lvmetad and don't work without it
> (https://bugzilla.redhat.com/show_bug.cgi?id=1581810). If
2016 Jul 26
5
[PATCH v2 0/4] Improve LVM handling in the appliance
Hi,
this series improves the way LVM is used in the appliance: in
particular, lvmetad can now actually run, and with the correct
configuration.
It also improves the listing strategies.
Changes in v2:
- dropped patch #5, will be sent separately
- moved lvmetad startup into its own function (patch #2)
Thanks,
Pino Toscano (4):
daemon: lvm-filter: set also global_filter
daemon: lvm-filter:
2016 Jul 26
8
[PATCH 0/5] Improve LVM handling in the appliance
Hi,
this series improves the way LVM is used in the appliance: in
particular, lvmetad can now actually run, and with the correct
configuration.
It also improves the listing strategies.
Thanks,
Pino Toscano (5):
daemon: lvm-filter: set also global_filter
daemon: lvm-filter: start lvmetad better
daemon: lvm: improve filter for LVs with activationskip flag set
daemon: lvm: list
2018 Jan 14
0
[PATCH v2 1/3] appliance: init: Avoid running degraded md devices
The issue:
- raid1 will be in a degraded state if one of its components is a logical volume (LV)
- raid0 will be inoperable altogether (inaccessible from within the appliance) if one of its components is an LV
- raidN: you can expect the same issue for any RAID level, depending on how many components are inaccessible at the time mdadm is run and on the RAID redundancy.
It happens because mdadm is launched prior to lvm
2017 Mar 10
1
[PATCH] appliance: run systemd-tmpfiles also for /var/run
Commit a6330e9d3af0f5286f1d53d909fd868387b67f69 enabled /run for
systemd-tmpfiles: while this works fine in most cases, there are a
few tmpfiles configurations that still reference /var/run instead of
/run. As a result, also include /var/run in the systemd-tmpfiles
execution.
---
appliance/init | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/appliance/init
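[The patch body is truncated in this result. A hedged sketch of the resulting invocation in appliance/init, with the flag set assumed rather than copied from the actual commit:]

    # Process tmpfiles.d entries for both /run and /var/run, since some
    # configurations still reference the old /var/run path.
    systemd-tmpfiles --create --boot --prefix=/run --prefix=/var/run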
2015 Jan 12
0
Re: Resizing lvm fails with fedora20
On 11.01.2015 02:57, Alex Regan wrote:
> Hi,
> I'm trying to resize a 15GB LVM root partition on a fedora20 server with
> a fedora20 guest and I'm having a problem. Is this supported on fedora20?
>
> I recall having a similar problem (maybe even exact same problem) all
> the way back in fedora16 or fedora17, but hoped/thought it would be
> fixed by now?
>
> #
2018 May 24
1
[PATCH] daemon: Move creating of LVM_SYSTEM_DIR into the appliance/init script.
This patch reworks how we start up LVM and lvmetad.
It fixes the problem we had converting a guest which had a peculiar
LVM configuration:
https://bugzilla.redhat.com/show_bug.cgi?id=1581810#c14
However please note I have NOT yet tested it fully.
Rich.
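[The patch itself is not shown in this result. A rough sketch of the idea under stated assumptions (hypothetical /tmp/lvm path, simplified copy of the system configuration), not the author's actual implementation:]

    # In appliance/init, before lvmetad starts: build a private LVM
    # configuration directory and point every later LVM user at it.
    mkdir -p /tmp/lvm
    if [ -d /etc/lvm ]; then
        cp -r /etc/lvm/* /tmp/lvm/
    fi
    export LVM_SYSTEM_DIR=/tmp/lvm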
2015 Jan 11
2
Resizing lvm fails with fedora20
Hi,
I'm trying to resize a 15GB LVM root partition on a fedora20 server with
a fedora20 guest and I'm having a problem. Is this supported on fedora20?
I recall having a similar problem (maybe even exact same problem) all
the way back in fedora16 or fedora17, but hoped/thought it would be
fixed by now?
# virt-df -h test1-011015.img
Filesystem Size Used
2016 Jul 26
0
[PATCH 5/5] appliance: run systemd-tmpfiles also for /run
Set up the volatile /run in the appliance with the available tmpfiles
configurations as well. In particular, correctly setting up the lvm
bits allows lvmetad to run.
---
appliance/init | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/appliance/init b/appliance/init
index d440007..e678e42 100755
--- a/appliance/init
+++ b/appliance/init
@@ -88,7 +88,7 @@ machine_id=$(dd
2016 Jul 26
0
[PATCH 4/4] appliance: run systemd-tmpfiles also for /run
Set up the volatile /run in the appliance with the available tmpfiles
configurations as well. In particular, correctly setting up the lvm
bits allows lvmetad to run.
---
appliance/init | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/appliance/init b/appliance/init
index d440007..e678e42 100755
--- a/appliance/init
+++ b/appliance/init
@@ -88,7 +88,7 @@ machine_id=$(dd
2017 Apr 23
0
Proper way to remove a qemu-nbd-mounted volume using lvm
I either haven't searched for the right thing or the web doesn't contain
the answer.
I have used the following to mount an image and now I need to know the
proper way to reverse the process.
qemu-nbd -c /dev/nbd0 <qcow2 image using lvm>
vgscan --cache (had to use --cache to get the qemu-nbd volume to
be recognized, lvmetad is running)
vgchange -ay
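[The thread's answer is not included in this result. A hedged sketch of the usual reverse sequence, using standard LVM and qemu-nbd commands; the volume group name is a placeholder:]

    vgchange -an <vgname>     # deactivate the logical volumes in the image's VG
    qemu-nbd -d /dev/nbd0     # disconnect the qcow2 image from /dev/nbd0
    pvscan --cache            # optional: refresh lvmetad so it forgets the PV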
2017 Aug 23
2
virt-sysprep: error: no operating systems were found in the guest image on libguestfs-1.36.5
Hi all,
I encountered the following error when I tried to run virt-sysprep on a qcow2 image.
virt-sysprep: error: no operating systems were found in the guest image
The version I used for libguestfs is as follows:
[libguestfs-1.36.5]
The same issue happened on other versions such as libguestfs-1.32.7.
# ./run ./sysprep/virt-sysprep -v -x -a
2015 Oct 31
0
Re: P2V conversion failed with "/run/lvm/lvmetad.socket: connect failed: No such file or directory"
On Sat, Oct 31, 2015 at 10:14:32PM +0530, Tejas Gadaria wrote:
> Hi,
>
> We are trying to do P2V conversion with virt-p2v.
>
> We have the conversion server (virt-p2v) and the physical server (virt-p2v)
> configured as per the documentation below.
>
> http://libguestfs.org/virt-p2v.1.html#kernel-command-line-configuration
>
> After "Start Conversion" from GUI
2015 Nov 01
0
Re: P2V conversion failed with "/run/lvm/lvmetad.socket: connect failed: No such file or directory"
Hi Richard,
Thanks for your reply,
We are using Fedora 21 with SAS drives and a RAID 0 config on both the
physical and conversion servers.
Thanks,
Tejas
On Sat, Oct 31, 2015 at 11:50 PM, Richard W.M. Jones <rjones at redhat.com>
wrote:
> On Sat, Oct 31, 2015 at 06:11:05PM +0000, Richard W.M. Jones wrote:
> >
> > On Sat, Oct 31, 2015 at 10:14:32PM +0530, Tejas Gadaria wrote:
2015 Oct 31
2
P2V conversion failed with "/run/lvm/lvmetad.socket: connect failed: No such file or directory"
Hi,
We are trying to do P2V conversion with virt-p2v.
We have the conversion server (virt-p2v) and the physical server (virt-p2v)
configured as per the documentation below.
http://libguestfs.org/virt-p2v.1.html#kernel-command-line-configuration
After "Start Conversion" from the GUI interface, the conversion fails with
"/run/lvm/lvmetad.socket: connect failed: No such file or
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present.
This prevents starting arrays in a degraded state.
The second mdadm call (after LVM is scanned) scans the devices not yet used and attempts to run all found arrays even if they are in a degraded state.
Two new tests are added.
This fixes rhbz1527852.
Here is boot-benchmark
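[A minimal sketch of the sequence described above, assuming the usual assemble-scan invocation in the appliance init; flags other than --no-degraded are illustrative, not copied from the patch:]

    # First pass: assemble only arrays whose members are all present.
    mdadm -As --auto=yes --no-degraded
    # Activate LVM, which may expose LVs that back md components.
    lvm vgchange -ay --sysinit
    # Second pass: pick up the remaining devices and run arrays even if degraded.
    mdadm -As --auto=yes --run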
2020 Sep 03
0
Re: Error while loading shared libraries: libsbz.so
[Please keep replies on the list]
> ************************************************************
> * IMPORTANT NOTICE
> *
> * When reporting bugs, include the COMPLETE, UNEDITED
> * output below in your bug report.
> *
> ************************************************************
> libguestfs: trace: set_verbose true