Displaying 20 results from an estimated 10000 matches similar to: "io=native & io=threads"
2017 Jan 17
0
Re: io=native & io=threads
On Mon, Jan 16, 2017 at 10:52:08AM -0800, W Kern wrote:
> Googling provides lots of interesting info on the use of these in various
> situations such as SSD, number of VMs in the pool etc.
>
> What is the default in Libvirt (or is the default 'neither')
Libvirt has no default policy - it just delegates to whatever
QEMU uses as the default.
The current recommendation from the
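For reference, the io mode is chosen per disk in the domain XML's <driver> element; a minimal sketch (paths and device names are illustrative):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>

io='native' is normally paired with cache='none' (or cache='directsync'), since Linux AIO against a page-cached file can block; io='threads' has no such constraint.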
2020 Jan 10
5
[PATCH Fedora libguestfs] Don't depend on libvirt-daemon-kvm monolith.
libguestfs usually needs qemu. However it only requires an emulator
for the same architecture, not for all architectures.
libvirt-daemon-kvm pulls in qemu which pulls in emulators for all
architectures, as well as a bunch of other stuff we don't need at all
like network interface support and nwfilter.
There are no Fedora TCG-only arches, so drop the conditional section.
I also made support
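To see what the monolith actually drags in, one way is to walk the dependency chain with repoquery (a sketch; exact package names vary by Fedora release):

# dnf repoquery --requires libvirt-daemon-kvm
# dnf repoquery --requires qemu

The first shows the qemu dependency among the driver subpackages; the second shows the per-architecture emulators and extras pulled in behind it.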
2020 Jan 10
2
[PATCH Fedora libguestfs v4] Don't depend on libvirt-daemon-kvm
Compared to v3 this suggests:
+Suggests: qemu-block-curl
+Suggests: qemu-block-gluster
+Suggests: qemu-block-iscsi
+Suggests: qemu-block-rbd
+Suggests: qemu-block-ssh
which I missed in an earlier email from danpb.
2020 Jan 10
2
Re: [PATCH Fedora libguestfs] Don't depend on libvirt-daemon-kvm monolith.
On Fri, Jan 10, 2020 at 02:15:10PM +0000, Daniel P. Berrangé wrote:
> Do you use the libvirt "secret" APIs at all (disk encryption, network
> disk auth passwords) ? If so you will need "libvirt-daemon-driver-secret"
> too. How about any other libvirt sub-driver APIs ? Networking ? Host
> dev, etc ?
The full list of APIs we use is attached, assuming I got my
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
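For the disk-locking part, libvirt's lock manager is opt-in; a sketch of enabling virtlockd for the QEMU driver (whether the LXC driver honours it is a separate question):

# /etc/libvirt/qemu.conf
lock_manager = "lockd"

# systemctl enable --now virtlockd
# systemctl restart libvirtd

With this, virtlockd takes a lease on each disk at domain start, and a second start of the same disks elsewhere is refused.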
2013 Nov 08
1
Re: RBD images locking
On Thu, Nov 07, 2013 at 09:08:58AM -0700, Eric Blake wrote:
> On 11/07/2013 09:04 AM, NEVEU Stephane wrote:
> > Eric,
>
> [please don't top-post on technical lists]
>
> >
> > Well, in cases where several servers may start the same virtual machines after a reboot, for example.
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
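(For context, RBD itself has advisory locks that a hook or patch could build on; pool, image and lock names here are invented:

$ rbd lock add mypool/myimage libvirt-lock
$ rbd lock list mypool/myimage
$ rbd lock remove mypool/myimage libvirt-lock <locker>

They are advisory only, though: nothing prevents a second client from simply opening the image without checking.)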
2020 Jan 03
2
Re: Locking without virtlockd (nor sanlock)?
On Sat, Dec 28, 2019 at 02:36:27PM +0100, Gionatan Danti wrote:
> On 28-12-2019 at 01:39, Gionatan Danti wrote:
> > Hi list,
> > I would like to ask a clarification about how locking works. My test
> > system is CentOS 7.7 with libvirt-4.5.0-23.el7_7.1.x86_64
> >
> > It was my understanding that, by default, libvirt does not use any locks.
> > From here [1]:
2018 Aug 08
3
Re: LIBVIRT-4.6.0 can't work with QEMU 3.0.0
2019 Nov 21
2
Fail to build upstream libvirt on rhel8
Hello,
A compilation failure occurred when I tried building the latest libvirt code on RHEL 8.
Version:
gcc-8.3.1-4.5.el8.x86_64
libvirt v5.9.0-352-g5e939cea89
Steps:
1. Clone libvirt source code
2. Create build dir, and run autogen.sh
# cd libvirt
# mkdir build && cd build
# ../autogen.sh --build=x86_64-redhat-linux-gnu
--host=x86_64-redhat-linux-gnu --program-prefix=
2020 Jan 10
2
Re: [PATCH Fedora libguestfs] Don't depend on libvirt-daemon-kvm monolith.
On Friday, 10 January 2020 15:39:21 CET Daniel P. Berrangé wrote:
> On Fri, Jan 10, 2020 at 02:26:35PM +0000, Richard W.M. Jones wrote:
> > On Fri, Jan 10, 2020 at 02:15:10PM +0000, Daniel P. Berrangé wrote:
> > > Do you use the libvirt "secret" APIs at all (disk encryption, network
> > > disk auth passwords) ? If so you will need
2013 Nov 07
4
Re: RBD images locking
Eric,
Well, in cases where several servers may start the same virtual machines after a reboot, for example.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
I've seen this hook here : http://www.wogri.at/en/linux/ceph-libvirt-locking/
But it's a hook...
Yes, I may try to write a patch. My coding skills are surely not as good as yours, but I'd be glad to make
2013 Jan 19
9
pygrub/hvm boot with alternate script= for block devices
Hi,
I am doing some experimentation with Xen and Ceph and have a problem booting my guest when my disk = [] uses an alternate block script.
Installation from a .iso was fine, since the boot device was a file, but now, trying to boot from the rbd, neither the hvmbuilder nor pygrub can start, as they treat the first value after target= as the /dev node to try and use.
My disk parameter looks like:
disk =
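For comparison, an xl disk spec that routes through a custom block script uses the script= and target= keys; a hypothetical example (the script and image names are invented, not the poster's actual values):

disk = [ 'vdev=xvda,access=rw,script=block-rbd,target=rbd/pool/image' ]

Per xl-disk-configuration(5), with script= the target= value is meant to be interpreted by the script rather than used as a host path, which is precisely what the hvmbuilder/pygrub path above fails to do.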
2020 Jan 10
1
Re: [PATCH Fedora libguestfs] Don't depend on libvirt-daemon-kvm monolith.
On Fri, Jan 10, 2020 at 03:10:51PM +0100, Pino Toscano wrote:
> On Friday, 10 January 2020 15:05:33 CET Richard W.M. Jones wrote:
> > libguestfs usually needs qemu. However it only requires an emulator
> > for the same architecture, not for all architectures.
> > libvirt-daemon-kvm pulls in qemu which pulls in emulators for all
> > architectures, as well as a bunch of
2014 Jan 23
7
[PATCH 0/7] Various fixes for Ceph drives and parsing libvirt XML.
Miscellaneous fixes to:
- Handling of Ceph drives now works end-to-end (RHBZ#1026688).
- In particular, you can now use rbd:/// URIs in guestfish (and
they work).
- Parse Ceph & NBD network drives from libvirt XML correctly, so
that existing domains with Ceph/NBD drives can be added
(eg. using guestfish -d option).
- Add more testing of the above.
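For example, with these fixes in place a Ceph drive can be inspected directly (monitor host, pool and image are placeholders):

$ guestfish --ro -a rbd://ceph-mon.example.com:6789/pool/disk

or picked up from a defined domain with guestfish -d guestname.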
2016 Feb 01
2
virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?
Hello everybody,
This is a cross post to libvirt-users, libguestfs and ceph-users.
I came back from FOSDEM 2016, which was my 7th year or so, and saw the awesome development around virtualization going on; I want to thank everybody for their contributions.
I saw presentations from oVirt, OpenStack and quite a few great Red Hat people, just like in previous years.
I personally been
2015 Jan 10
2
missing backend for pool type 5 (iscsi)
Hi,
I'm trying to define an iscsi pool with virsh, but I always get the following error:
error: internal error: missing backend for pool type 5 (iscsi)
And yet libvirt was compiled with iscsi support:
configure: Storage Drivers
configure:
configure: Dir: yes
configure: FS: yes
configure: NetFS: yes
configure: LVM: yes
configure: iSCSI: yes
configure: SCSI: yes
configure:
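For reference, once the iSCSI backend really is compiled in, a pool of this type is defined along these lines (host and IQN are placeholders):

# virsh pool-define-as iscsipool iscsi \
    --source-host 192.168.122.1 \
    --source-dev iqn.2015-01.com.example:target1 \
    --target /dev/disk/by-path
# virsh pool-start iscsipool

The "pool type 5" in the error is just the internal enum value for VIR_STORAGE_POOL_ISCSI.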
2012 Apr 05
1
virsh attach-disk with cache=none and io=native for raw devices (online)
Hello,
I see there is an option with virsh attach-disk to set the cache to "none" for raw devices, but I can't find how to attach the disk with io=native (needed for performance reasons).
The goal is to attach a disk online, directly with the proper performance settings.
I know I can set this in virt-manager, but then I have to restart the VM to apply the change, so this is an
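One workaround is to attach via an XML fragment instead, where the io attribute can be set explicitly (device paths are illustrative):

$ cat disk.xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/guest-data'/>
  <target dev='vdb' bus='virtio'/>
</disk>
$ virsh attach-device guestname disk.xml --live

virsh attach-device takes the same <disk> element as the domain XML, so anything expressible there can be hot-plugged.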
2017 Apr 19
2
virsh error: driver is not whitelisted
Hi,
I'm using virsh to instantiate a VM in my environment, but I'm running into
some issues. I created the following domain file:
<domain type='kvm'>
<name>demovm</name>
<uuid>4a9b3f53-fa2a-47f3-a757-dd87720d9d1d</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory
2018 Feb 27
1
Reply: Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
Dear Michal
After applying your patch to my local libvirt master branch and building an RPM
for CentOS 7.4, virDomainUpdateDeviceFlags logs as below:
================================================
2018-02-27 09:27:43.782+0000: 16656: debug : virDomainUpdateDeviceFlags:8326 : dom=0x7f2084000c50, (VM: name=6ec499397d594ef2a64fcfc938f38225, uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), xml=<disk
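(For anyone reproducing this, the virsh equivalent of the API call is roughly:

$ virsh update-device 6ec499397d594ef2a64fcfc938f38225 disk.xml --live

where disk.xml carries the modified <disk> element.)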