Displaying 20 results from an estimated 10000 matches similar to: "Trouble with virtio storage"
2012 Mar 07
1
libvirt for spice
Hi all:
I tried kvm on my ubuntu with the libvirt.xml file as follows:
<domain type='kvm'>
<name>instance-00000011</name>
<memory>2097152</memory>
<os>
<type>hvm</type>
<boot dev="hd" />
</os>
<features>
<acpi/>
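The snippet breaks off inside `<features>`; a complete minimal domain definition with a SPICE display might look like the following sketch (the disk path, listen address, and device layout are illustrative assumptions, not taken from the original mail):

```xml
<!-- Hypothetical sketch: minimal KVM domain with a SPICE display.
     Disk path and listen address are illustrative. -->
<domain type='kvm'>
  <name>instance-00000011</name>
  <memory>2097152</memory>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/instance-00000011.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <graphics type='spice' autoport='yes' listen='0.0.0.0'/>
  </devices>
</domain>
```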
2012 Apr 06
1
qemu-kvm fails on RHEL6
Hi,
When I try to run the qemu-kvm command on RHEL6 (Linux kernel 2.6.32), I get the following errors, which I think are related to the tap devices in my setup. Any idea why that is?
bash$ LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-00000027 -uuid
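Errors at this point frequently mean the emulator process lacks the privileges to create its tap interface itself. One common workaround is to pre-create the tap device and enslave it to the bridge before launching qemu-kvm; a sketch, with interface and bridge names as assumptions:

```shell
# Hypothetical sketch: pre-create the tap device so qemu-kvm does not
# need privileges to open /dev/net/tun itself. Names are illustrative.
ip tuntap add dev tap0 mode tap
ip link set tap0 up
brctl addif br0 tap0   # on newer systems: ip link set tap0 master br0
```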
2014 Jun 18
3
bridge could not be initialized
Hi
I'm trying to run a VM using a libvirt.xml file and I get the following error...
virsh start instance-00000003
error: Failed to start domain instance-00000003
error: internal error Process exited while reading console log output:
failed to launch bridge helper
kvm: -netdev bridge,br=qbr1f2191ce-38,id=hostnet0: Device 'bridge' could
not be initialized
Below is the libvirt.xml file..
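The "failed to launch bridge helper" message usually points at qemu-bridge-helper rather than at the XML itself: the helper refuses bridges that are not whitelisted in its ACL file. A sketch of the usual fix (the ACL path varies by distribution):

```shell
# Hypothetical sketch: whitelist the bridge for qemu-bridge-helper and
# make sure the helper can run privileged. Paths vary by distro.
echo 'allow qbr1f2191ce-38' >> /etc/qemu/bridge.conf
chmod u+s /usr/libexec/qemu-bridge-helper
```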
2014 Jun 19
0
Re: bridge could not be initialized
On 06/18/2014 04:43 PM, abhishek jain wrote:
> Hi
>
> I'm trying to run a VM using a libvirt.xml file and I get the following error...
>
> virsh start instance-00000003
>
> error: Failed to start domain instance-00000003
> error: internal error Process exited while reading console log output:
> failed to launch bridge helper
> kvm: -netdev
2016 Jun 06
0
Adding a channel device within an OpenStack Fedora instance ..
Hi,
I'm trying to add a channel to an OpenStack instance via this command:
# virsh attach-device instance-00000005 test.xml
that returns this error
error: Failed to attach device from test.xml
error: internal error: no virtio-serial controllers are available
#
# cat test.xml :
<channel type='unix'>
<source mode='bind'
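The error says the channel has no virtio-serial controller to bind to, so one would normally have to be attached first. A sketch of such a controller definition (the PCI slot is an illustrative assumption):

```xml
<!-- Sketch: a virtio-serial controller for the channel to attach to.
     The PCI slot is illustrative; libvirt can also auto-assign it. -->
<controller type='virtio-serial' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
```

Attaching this with `virsh attach-device instance-00000005 controller.xml` before re-attaching test.xml should give the channel a controller to land on.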
2014 Jun 27
1
libvirt on OpenStack
Hi,
I am running an OpenStack cluster and use libvirt + cgroups to limit a VM's performance:
https://wiki.openstack.org/InstanceResourceQuota
What I am confused about is:
1. After running a VM instance with a cgroup limit applied, I can't find any related cgroup settings.
2. Can I change a limit value after the instance is running? E.g. change disk_read_iops_sec from 10 to 20.
One of the XML files looks like this:
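For question 2, libvirt exposes these per-device I/O limits through `virsh blkdeviotune`, which can both read and change them on a running domain; a sketch, with domain and disk names as assumptions:

```shell
# Sketch: inspect and change per-device I/O limits at runtime.
# Domain and target device names are illustrative.
virsh blkdeviotune instance-00000042 vda                     # show current limits
virsh blkdeviotune instance-00000042 vda --read-iops-sec 20  # change the limit live
```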
2014 Feb 12
2
Re: Help? Running into problems with migrateToURI2() and virDomainDefCheckABIStability()
On 02/11/2014 04:45 PM, Cole Robinson wrote:
> On 02/10/2014 06:46 PM, Chris Friesen wrote:
>> Hi,
>>
>> We've run into a problem with libvirt 1.1.2 and are looking for some comments
>> on whether this is a bug or design intent.
>>
>> We're trying to use migrateToURI() but we're using a few things (numatune,
>> vcpu mask, etc.) that may need
2013 Apr 09
0
Filesystem passthrough of a Lustre mounted directory
Hi,
I am trying to pass a Lustre directory mounted on the host to the guest.
I can pass a local directory in just fine when starting an instance via
virsh. I can execute the qemu command from libvirt's logs (dropping the -S
flag) directly, and passing the Lustre mounted directory also works (but
the network complains about different MAC address). However, when I
start an instance using
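Passthrough of a host directory is normally expressed as a `<filesystem>` element in the domain XML; a sketch, with the Lustre mount point and the share tag as assumptions:

```xml
<!-- Sketch: 9p passthrough of a host directory; the Lustre mount point
     and tag name are illustrative assumptions. -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/mnt/lustre/export'/>
  <target dir='lustre_share'/>
</filesystem>
```

Inside the guest the share would then typically be mounted with `mount -t 9p -o trans=virtio lustre_share /mnt/lustre`.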
2012 Jul 24
1
How can I make sVirt work with LXC (libvirt-0.9.13)?
Hi,
I've installed libvirt-0.9.13 on RHEL6.2 from the source code.
I cannot make sVirt work with LXC. (sVirt works well with KVM, though.)
I can start an LXC instance, but the label of the process is not right.
Can someone help me?
I tried to change the /etc/libvirt/lxc.conf file to explicitly enable
security_driver = "selinux".
But it ends up with error saying "error :
2012 Apr 12
0
Live migration of instance using KVM hypervisor fails
Hi,
I am trying to migrate a running instance, but it fails with the following error:
$ virsh migrate --live instance-00000008 qemu+tcp://10.2.3.150/system --verbose
error: operation failed: migration job: unexpectedly failed
I can see following in the instance specific qemu log directory (/var/log/libvirt/qemu/instance-00000008.log) on the destination host:
2012-04-12 03:57:26.211: starting up
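For qemu+tcp migration, the destination libvirtd must also be listening for TCP connections, which it does not do by default. A sketch of a typical setup (paths and settings are illustrative and distribution-dependent):

```shell
# Sketch: enable the TCP listener on the destination host (10.2.3.150).
# /etc/libvirt/libvirtd.conf:
#   listen_tcp = 1
#   auth_tcp = "none"     # "none" only on trusted networks; prefer sasl
# /etc/sysconfig/libvirtd:
#   LIBVIRTD_ARGS="--listen"
# then restart libvirtd and retry:
virsh migrate --live instance-00000008 qemu+tcp://10.2.3.150/system --verbose
```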
2015 Jan 13
3
Re: domain has active block job
On 13/01/2015 10:51, Kashyap Chamarthy wrote:
> On Tue, Jan 13, 2015 at 10:10:53AM +0100, Fiorenza Meini wrote:
>> Hi there,
>> I receive this error when I run nova image-create <VM name> <VM snapshot
>> name>:
>
> Okay, you're talking in the context of OpenStack.
>
> You can also check the Nova compute.log for more contextual details of
>
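The "domain has active block job" error can be inspected and cleared from virsh; a sketch, with domain and disk names as assumptions:

```shell
# Sketch: query the active block copy and end it. Names are illustrative.
virsh blockjob instance-00000008 vda --info    # query the active job
virsh blockjob instance-00000008 vda --abort   # cancel it
```

If the copy is meant to complete rather than be cancelled, `--pivot` switches the domain onto the copy instead of aborting it.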
2017 Apr 26
3
Tunnelled migrate Windows7 VMs halted
[moderator note: I'm forwarding a stripped down version of the original
mail which was rejected in the moderator queue. I stripped the 3.3
megabyte .tar.bz2 of the log file attachment, which is inappropriate for
a technical list. Either trim the log to the relevant portion, or host
the log externally and have your list email merely give a URL of the
externally-hosted file]
>
2017 Sep 08
0
GlusterFS as virtual machine storage
Back to replica 3 w/o arbiter. Two fio jobs running (direct=1 and
direct=0), rebooting one node... and VM dmesg looks like:
[ 483.862664] blk_update_request: I/O error, dev vda, sector 23125016
[ 483.898034] blk_update_request: I/O error, dev vda, sector 2161832
[ 483.901103] blk_update_request: I/O error, dev vda, sector 2161832
[ 483.904045] Aborting journal on device vda1-8.
[ 483.906959]
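The two workloads described above can be approximated with a fio job file; this sketch is illustrative (sizes, path, and I/O pattern are assumptions, not the original test configuration):

```
# Sketch: two fio jobs, one direct and one buffered, approximating the
# test described above. Path and sizes are illustrative.
[global]
filename=/root/fio.test
size=512m
rw=randwrite
ioengine=libaio

[direct-io]
direct=1

[buffered-io]
direct=0
```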
2023 Apr 11
1
storage backup with encryption on-the-fly ?
On Fri, Apr 07, 2023 at 19:42:11 +0200, lejeczek wrote:
>
>
> On 06/04/2023 16:12, Peter Krempa wrote:
> > On Thu, Apr 06, 2023 at 15:22:10 +0200, lejeczek wrote:
> > > Hi guys.
> > >
> > > Is there a solution, perhaps a function of libvirt, to backup guest's
> > > storage and encrypt the resulting image file?
> > > On-the-fly
2016 Jan 29
1
storage pool and volume usage question
Hi
I'm creating a storage volume in a storage pool (directory type).
I'd like to create a volume that creates a file in a subdirectory under
the directory that the pool points to, but I am not able to work out how;
maybe it is not possible?
My pool was created using following xml...
<pool type='dir'>
<name>nova-local-pool</name>
<target>
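Directory-type pools only enumerate files directly under their target path, so one common workaround is to define a second pool whose target is the subdirectory itself; a sketch, with the pool name and path as assumptions:

```xml
<!-- Sketch: dir pools are flat, so a second pool can point at the
     subdirectory. Name and path are illustrative assumptions. -->
<pool type='dir'>
  <name>nova-local-subpool</name>
  <target>
    <path>/var/lib/nova/instances/subdir</path>
  </target>
</pool>
```

Volumes created in this pool then land in the subdirectory, while the parent pool keeps managing the top-level files.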
2007 Oct 11
4
Specifying geographic related facts
Let's say I have two different geographic sites. They are pretty much
identical, i.e. each site has a machine called web1 which is a web server,
etc., except there are a couple of site-specific settings, i.e. outgoing DNS
servers are different, SSL certs are different, etc.
On the puppetmaster I can put in a file called e.g.
/etc/sideid
which would uniquely identify a site, i.e. siteX or siteY.
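The marker-file approach above is usually surfaced to manifests as a custom fact; a hypothetical sketch of a Facter plugin (the fact name is an assumption):

```ruby
# Hypothetical sketch: a custom Facter fact exposing the site marker
# file described above. The fact name "siteid" is an assumption.
Facter.add(:siteid) do
  setcode do
    File.read('/etc/sideid').strip if File.exist?('/etc/sideid')
  end
end
```

Manifests can then branch on the `siteid` fact to select per-site DNS servers, SSL certs, and the like.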
2023 Apr 07
1
storage backup with encryption on-the-fly ?
On 06/04/2023 16:12, Peter Krempa wrote:
> On Thu, Apr 06, 2023 at 15:22:10 +0200, lejeczek wrote:
>> Hi guys.
>>
>> Is there a solution, perhaps a function of libvirt, to backup guest's
>> storage and encrypt the resulting image file?
>> On-the-fly ideally.
>> If not ready/built-in solution then perhaps a best technique you
>> recommend/use?
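Outside of libvirt itself, one technique for the "encrypt the resulting image" part is converting the backup to a LUKS-encrypted qcow2 with qemu-img; a sketch, with file names and secret handling as assumptions:

```shell
# Sketch: convert a backup image to a LUKS-encrypted qcow2.
# File names and the passphrase file are illustrative assumptions.
qemu-img convert -O qcow2 \
  --object secret,id=sec0,file=passphrase.txt \
  -o encrypt.format=luks,encrypt.key-secret=sec0 \
  backup.raw backup-encrypted.qcow2
```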
2015 Jan 13
0
Re: domain has active block job
On Tue, Jan 13, 2015 at 03:07:07PM +0100, Fiorenza Meini wrote:
> On 13/01/2015 10:51, Fiorenza Meini wrote:
[. . .]
> >>In libvirt log file I can see:
> >>error : qemuDomainDefineXML:6312 : block copy still active: domain has
> >>active block job
> >>
> >>Libvirt is 1.2.7 version, linux system is Debian Wheezy
> >>
>
2016 May 30
0
Re: migrate local storage to ceph | exchanging the storage system
> -----Original message-----
> From: libvirt-users-bounces@redhat.com [mailto:libvirt-users-
> bounces@redhat.com] On behalf of Björn Lässig
> Sent: Friday, 27 May 2016 10:10
> To: libvirt-users@redhat.com
> Subject: [libvirt-users] migrate local storage to ceph | exchanging the
> storage system
>
> TLDR: Why is virsh migrate --persistent --live domain
>
2016 May 27
2
migrate local storage to ceph | exchanging the storage system
TLDR: Why is virsh migrate --persistent --live domain
qemu+ssh://root@host/system --xml domain.ceph.xml
not persistent, and what can I do about it?
Hi,
after years of being pleased with local storage and migrating the
complete storage from one host to another, it was time for ceph.
After setting up a cluster and testing it, it's time now to move a lot
of VMs onto that type of storage, without
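One likely explanation for the TLDR question is that `--xml` only replaces the live (running) definition during migration; newer virsh versions also accept a separate flag for the saved configuration. A sketch (flag availability depends on the libvirt version):

```shell
# Sketch: pass the updated XML for both the live domain and the
# persistent config on the target. Requires a libvirt new enough
# to support --persistent-xml.
virsh migrate --persistent --live domain \
  qemu+ssh://root@host/system \
  --xml domain.ceph.xml --persistent-xml domain.ceph.xml
```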