similar to: high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME

Displaying 20 results from an estimated 1000 matches similar to: "high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME"

2017 Feb 15
2
Re: high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On 15 February 2017 at 00:57, Daniel P. Berrange <berrange@redhat.com> wrote:
> What is the actual error you're getting during startup.

# virsh -d0 start instance-0000037c
start: domain(optdata): instance-0000037c
start: found option <domain>: instance-0000037c
start: <domain> trying as domain NAME
error: Failed to start domain instance-0000037c
error: monitor socket did
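
When virsh only reports that the monitor socket never showed up, the actual QEMU-side startup failure is usually recorded in the per-domain log. A minimal sketch of where to look, assuming default libvirt log paths and the domain name above:

    # tail the per-domain QEMU log for the real startup error
    tail -n 50 /var/log/libvirt/qemu/instance-0000037c.log
    # check libvirtd's own log for job timeout messages
    journalctl -u libvirtd --since "10 min ago" | grep -i instance-0000037c
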
2017 Feb 15
2
Re: high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On Wed, Feb 15, 2017 at 10:27:46AM +0100, Michal Privoznik wrote:
> On 02/15/2017 03:43 AM, Blair Bethwaite wrote:
> > On 15 February 2017 at 00:57, Daniel P. Berrange <berrange@redhat.com> wrote:
> >> What is the actual error you're getting during startup.
> >
> > # virsh -d0 start instance-0000037c
> > start: domain(optdata): instance-0000037c
>
2017 Feb 14
0
Re: high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On Tue, Feb 14, 2017 at 06:13:20PM +1100, Blair Bethwaite wrote:
> Hi all,
>
> In IRC last night Dan helpfully confirmed my analysis of an issue we are
> seeing attempting to launch high memory KVM guests backed by hugepages...
>
> In this case the guests have 240GB of memory allocated from two host NUMA
> nodes to two guest NUMA nodes. The trouble is that allocating the
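
Pre-reserving the hugepages on each host NUMA node before the guest starts avoids QEMU having to allocate them inside libvirt's job window. A minimal sketch, assuming 1GiB pages with an even 120GB split per node (page size and counts are illustrative):

    # reserve 1GiB hugepages on each host NUMA node ahead of guest start
    echo 120 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
    echo 120 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
    # confirm the reservation took effect
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
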
2017 Feb 15
0
Re: high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On 15 February 2017 at 20:40, Daniel P. Berrange <berrange@redhat.com> wrote:
> On Wed, Feb 15, 2017 at 10:27:46AM +0100, Michal Privoznik wrote:
>> On 02/15/2017 03:43 AM, Blair Bethwaite wrote:
>> > On 15 February 2017 at 00:57, Daniel P. Berrange <berrange@redhat.com> wrote:
>> >> What is the actual error you're getting during startup.
>> >
2017 Feb 15
0
Re: high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
On 02/15/2017 03:43 AM, Blair Bethwaite wrote:
> On 15 February 2017 at 00:57, Daniel P. Berrange <berrange@redhat.com> wrote:
>> What is the actual error you're getting during startup.
>
> # virsh -d0 start instance-0000037c
> start: domain(optdata): instance-0000037c
> start: found option <domain>: instance-0000037c
> start: <domain> trying as
2017 Nov 14
2
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
Thanks for the reply Daniel, however I think you slightly misunderstood the scenario...

On 14 November 2017 at 10:32, Daniel P. Berrange <berrange@redhat.com> wrote:
> IOW, if your application has a certain expectation of performance that can only
> be satisfied by having the KVM guest backed by huge pages, then you should
> really change to explicitly reserve huge pages for the
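
Explicit hugepage backing is configured in the domain XML rather than relying on THP. A minimal sketch (the elements are standard libvirt; the domain name and page size are illustrative):

    # edit the guest definition and add a <memoryBacking> stanza, e.g.:
    #   <memoryBacking>
    #     <hugepages>
    #       <page size='2048' unit='KiB'/>
    #     </hugepages>
    #   </memoryBacking>
    virsh edit instance-0000037c
    # the host must have enough pages reserved before the guest starts
    grep HugePages_Free /proc/meminfo
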
2017 Nov 14
2
dramatic performance slowdown due to THP allocation failure with full pagecache
Hi all,

This is not really a libvirt issue but I'm hoping some of the smart folks here will know more about this problem... We have noticed when running some HPC applications on our OpenStack (libvirt+KVM) cloud that the same application occasionally performs much worse (4-5x slowdown) than normal. We can reproduce this quite easily by filling pagecache (i.e. dd-ing a single large file to
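
A reproduction along the lines described, filling the host pagecache with a streaming write before the workload runs (file size and path are illustrative):

    # fill pagecache with a large streaming write
    dd if=/dev/zero of=/tmp/bigfile bs=1M count=60000
    # observe THP allocation success vs. fallback while the workload runs
    grep -E 'thp_fault_(alloc|fallback)' /proc/vmstat
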
2017 Nov 14
1
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On 14 November 2017 at 10:56, Daniel P. Berrange <berrange@redhat.com> wrote:
> Oh well THP usage inside the guest is then not really anything to do with
> virt, just a regular Linux question, so not sure libvirt is the best
> place to ask.

True, I just hoped you or one of the other devs might have some insight on reclaim behaviour that would provide a clue. I guess I'll try a
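
For checking whether the guest (or host) kernel is actually getting THPs, the standard sysfs and vmstat interfaces are enough. A quick sketch:

    # current THP policy
    cat /sys/kernel/mm/transparent_hugepage/enabled
    # how many THPs are in use right now
    grep AnonHugePages /proc/meminfo
    # whether compaction is stalling or failing to produce huge pages
    grep -E 'compact_(stall|fail)' /proc/vmstat
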
2018 Jan 18
3
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On 01/18/2018 08:25 AM, Ján Tomko wrote:
> On Wed, Jan 17, 2018 at 04:45:38PM +0200, Serhii Kharchenko wrote:
>> Hello libvirt-users list,
>>
>> We're catching the same bug since version 3.4.0 (3.3.0 works OK).
>> So, we have a process that is permanently connected to libvirtd via socket
>> and it is collecting stats, listening to events and controlling the VPSes.
2016 Dec 06
1
Re: How to best I/O performance for Window2008 and MSSQL guest VM
On 12/06/2016 06:06 AM, Blair Bethwaite wrote:
> Hi Roberto,

Hi Blair

> What is the cpu and memory configuration of your guest?

I've set it to copy the host configuration (16 cores) and memory is set to 24GB; the host has 64GB. The guest is a Windows 2012 64-bit version.

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
2018 Jan 17
4
Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
Hello libvirt-users list,

We're catching the same bug since version 3.4.0 (3.3.0 works OK). So, we have a process that is permanently connected to libvirtd via socket; it is collecting stats, listening to events and controlling the VPSes. When we try to 'shutdown' a number of VPSes we often catch the bug: one of the VPSes sticks in the 'in shutdown' state, no related 'qemu'
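
When a domain is stuck like this, it helps to see which internal job is holding it. A minimal sketch with stock virsh (the domain name is illustrative):

    # show the domain state ('in shutdown' in this report)
    virsh list --all
    # show the job currently blocking the domain, if any
    virsh domjobinfo vps-0042
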
2016 Dec 05
2
How to best I/O performance for Window2008 and MSSQL guest VM
Hi there,

I've moved some Windows 2012 with MSSQL VMs from an old ESXi 5.5 machine to a more recent and powerful machine running Fedora 24 x86_64 and the related libvirt + KVM virtualization. I've moved the VM filesystems to LVM slices and installed the VirtIO drivers into all Windows VMs. I've also set both the disk and the network interface to work using VirtIO. So far so good, everything works
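
For LVM-backed guests, the disk driver attributes in the domain XML usually matter most for I/O. A sketch of settings commonly tuned for raw block devices (domain name and values are illustrative, not a recommendation for this exact workload):

    # inspect the current disk driver settings
    virsh dumpxml win2012-mssql | grep -A3 '<disk'
    # typical tuning for raw LVM volumes, set via 'virsh edit':
    #   <driver name='qemu' type='raw' cache='none' io='native'/>
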
2014 Jul 24
2
Re: vhost-net requested but could not be initialized
The qemu that I am using isn't modified at all. It's the VHOST drivers that are mounted elsewhere, i.e., not on the default /dev/vhost-net. Here is the command that I use to launch my qemu VM:

qemu-system-x86_64 -cpu host -boot order=c -hda /root/Disks/ubuntu1.qcow2 \
  -m 1024M -smp 2 --enable-kvm -name 'client 1' -nographic -vnc :2 \
  -net none -no-reboot -mem-path /dev/hugepages
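
QEMU can be handed an already-open vhost device instead of opening /dev/vhost-net itself, which is one way to use a relocated node. A sketch, assuming a hypothetical device at /dev/vhost-net1 and a pre-created tap0:

    # open the relocated vhost node on fd 3 and pass it to the netdev
    qemu-system-x86_64 -cpu host -m 1024M --enable-kvm \
      -netdev tap,id=net0,ifname=tap0,script=no,vhost=on,vhostfd=3 \
      -device virtio-net-pci,netdev=net0 \
      3<>/dev/vhost-net1
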
2015 Jan 07
2
Block Commit: [100 %]error: failed to pivot job for disk vda
Hello. I'm seeing this error while doing a backup of a VM.

+ virsh blockcommit kaltura vda --active --verbose --pivot
Block Commit: [100 %]error: failed to pivot job for disk vda
error: internal error: unable to execute QEMU command 'block-job-complete': The active block job for device 'drive-virtio-disk0' cannot be completed

I'm on qemu 2.2.0 and libvirt-1.2.11. Does
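
When the pivot fails like this, the block job is typically still running and can be inspected or cancelled before retrying. A minimal sketch with stock virsh:

    # check whether the commit job is still active on the disk
    virsh blockjob kaltura vda --info
    # give up on the pivot and keep writing to the original chain
    virsh blockjob kaltura vda --abort
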
2018 Mar 08
1
Statistics domain memory block when domain shutdown
Hi,

My libvirt version is 3.4.0, the host system is CentOS 7.4, and the kernel is 3.10.0-693.el7.x86_64. When I shut down a domain in the virtual system, my program calls virDomainMemoryStats and blocks in this API. The call stack is:

#0  0x00007ff242d78a3d in poll () from /lib64/libc.so.6
#1  0x00007ff243755ce8 in virNetClientIOEventLoop () from /lib64/libvirt.so.0
#2  0x00007ff24375654b in
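
The same API is reachable from the shell, which makes it easy to check whether the hang is specific to the program or reproducible with stock tooling. A sketch (the domain name is illustrative):

    # virsh dommemstat calls virDomainMemoryStats under the hood
    virsh dommemstat mydomain
    # if this also hangs during shutdown, the block is in libvirtd, not the client code
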
2011 Mar 20
6
PATCH: Hugepage support for Domains booting with 4KB pages
We have implemented hugepage support for guests in the following manner. In our implementation we added a parameter, hugepage_num, which is specified in the config file of the DomU. It is the number of hugepages that the guest is guaranteed to receive whenever the kernel asks for hugepages, either via its boot-time parameter or by reserving them after booting (e.g. using echo XX > /proc/sys/vm/nr_hugepages).
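
For comparison, the stock (non-patched) way to guarantee hugepages to a Linux kernel is the same pair of knobs the description mentions. A sketch with an illustrative count:

    # boot-time reservation: append to the kernel command line
    #   hugepages=512
    # or reserve at runtime:
    echo 512 > /proc/sys/vm/nr_hugepages
    grep HugePages /proc/meminfo
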
2019 Jun 04
2
blockcommit of domain not successfull
Hi,

I have several domains running on a 2-node HA-cluster. Each night I create snapshots of the domains; after copying the consistent raw file to a CIFS server I blockcommit the changes into the raw files. That's running quite well. But recently the blockcommit didn't work for one domain. I create a logfile from the whole procedure:
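
The nightly cycle described boils down to three steps. A sketch (domain name, disk target, and paths are illustrative):

    # 1. switch writes to a temporary overlay, leaving the raw file consistent
    virsh snapshot-create-as mydom backup --disk-only --atomic --no-metadata
    # 2. copy the now-quiescent raw base image to the CIFS server
    cp /var/lib/libvirt/images/mydom.raw /mnt/cifs/backups/
    # 3. merge the overlay back into the raw file and pivot off it
    virsh blockcommit mydom vda --active --verbose --pivot
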
2011 Jan 10
9
Hugepage Support
Hi, I tried to make a huge page request in a Fedora x86_64 PV guest using Xen 4.1 unstable and it crashed (crash info given below). I had enabled superpages in the config file and had also set the hugepages parameter at boot time for the PV DomU. Executing

# cat /proc/meminfo | grep Huge

showed that there are 10 free huge pages available, yet the domain still crashed.

[   86.403654] BUG: unable to handle
2008 Nov 04
7
[PATCH 1/1] Xen PV support for hugepages
This is the latest version of a patch that adds hugepage support to the Xen hypervisor in a PV environment. It is against the latest xen-unstable tree on xenbits.xensource.com. I believe this version addresses the comments made about the previous version of the patch. Hugepage support must be enabled via the hypervisor command line option "allowhugepage". It assumes the guest
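
Since the option lives on the hypervisor command line (not the dom0 kernel's), enabling it means editing the Xen boot entry. A sketch using legacy GRUB syntax, with illustrative paths:

    # /boot/grub/menu.lst (paths are illustrative)
    kernel /boot/xen.gz allowhugepage
    module /boot/vmlinuz-2.6-xen root=/dev/sda1
    module /boot/initrd-2.6-xen.img
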
2019 Jun 11
2
Re: blockcommit of domain not successfull
----- On Jun 5, 2019, at 4:49 PM, Peter Krempa pkrempa@redhat.com wrote:
> On Wed, Jun 05, 2019 at 13:33:49 +0200, Lentes, Bernd wrote:
>> Hi Peter,
>>
>> thanks for your help.
>>
>> ----- On Jun 5, 2019, at 9:27 AM, Peter Krempa pkrempa@redhat.com wrote:
>
> [...]
>
>>
>> >
>> > So that's interesting. Usually assertion