similar to: Get data of Physical Machine

Displaying 20 results from an estimated 2000 matches similar to: "Get data of Physical Machine"

2012 Oct 18
0
Re: Using libvirt to monitor virtual environment.
Hello, I need a solution for monitoring and automatic migration of guest VMs. A good approach would be to determine which guests to migrate based on a "trend usage algorithm". Is there something in Java and libvirt that could be used? Regards, Roberto ----Original message---- From: vinicius.braga at lupa.inf.ufg.br Date: 18/10/2012 16.02 To: <libvirt-users at redhat.com>
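A minimal sketch of such a poll-and-migrate loop against the libvirt C API. The destination URI, the 5-second sampling window and the 90% threshold are illustrative placeholders, not anything from the thread; the Java bindings (org.libvirt) expose equivalent calls.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr src = virConnectOpen("qemu:///system");
    virConnectPtr dst = virConnectOpen("qemu+ssh://otherhost/system"); /* placeholder */
    if (!src || !dst)
        return 1;

    virDomainPtr *doms = NULL;
    int n = virConnectListAllDomains(src, &doms,
                                     VIR_CONNECT_LIST_DOMAINS_ACTIVE);
    for (int i = 0; i < n; i++) {
        virDomainInfo a, b;
        virDomainGetInfo(doms[i], &a);
        sleep(5);                           /* sampling window for the "trend" */
        virDomainGetInfo(doms[i], &b);
        /* cpuTime is cumulative nanoseconds; the delta approximates load */
        double busy = (double)(b.cpuTime - a.cpuTime) / 5e9;
        printf("%s: %.2f\n", virDomainGetName(doms[i]), busy);
        if (busy > 0.9) {                   /* placeholder policy */
            virDomainPtr moved = virDomainMigrate(doms[i], dst,
                                                  VIR_MIGRATE_LIVE,
                                                  NULL, NULL, 0);
            if (moved)
                virDomainFree(moved);
        }
        virDomainFree(doms[i]);
    }
    free(doms);
    virConnectClose(src);
    virConnectClose(dst);
    return 0;
}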
2014 Jan 13
2
how to detect if qemu supports live disk snapshot
Hi everyone, Using the QEMU hypervisor, when a live disk snapshot is requested through libvirt, the request can fail if the underlying qemu binary lacks snapshot support. In Python, we get something like libvirtError: Operation not supported: live disk snapshot not supported with this QEMU binary I'd like to detect ahead of time if the underlying QEMU can or cannot do
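If no dedicated capability flag is available, one pragmatic probe is to attempt a disk-only snapshot and inspect the error code on failure. A hedged sketch against the libvirt C API, under the assumption that actually taking (and then having to clean up) a throwaway snapshot is acceptable:

#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

/* Returns 1 if supported, 0 if the QEMU binary lacks support, -1 otherwise. */
int supports_live_disk_snapshot(virDomainPtr dom)
{
    const char *xml = "<domainsnapshot/>";   /* let libvirt pick the defaults */
    virDomainSnapshotPtr snap =
        virDomainSnapshotCreateXML(dom, xml,
                                   VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY);
    if (snap) {                  /* supported -- a snapshot was actually taken */
        virDomainSnapshotFree(snap);
        return 1;
    }
    virErrorPtr err = virGetLastError();
    /* matches "Operation not supported: live disk snapshot not supported ..." */
    return (err && err->code == VIR_ERR_OPERATION_UNSUPPORTED) ? 0 : -1;
}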
2010 Jan 06
0
[PATCH] Converter: Fixes to Xen metadata conversion
Specifically fixes the issue where <script path='vif-bridge'/> would be corrupted rather than removed properly. Makes metadata conversion less generic. --- lib/Sys/VirtV2V/Converter.pm | 144 +++++++++++++++++++++--------------------- 1 files changed, 73 insertions(+), 71 deletions(-) diff --git a/lib/Sys/VirtV2V/Converter.pm b/lib/Sys/VirtV2V/Converter.pm index a6eba45..64a5a46
2018 Jun 19
0
Re: [PATCH] v2v: Set machine type explicitly for outputs which support it (RHBZ#1581428).
On Tue, Jun 19, 2018 at 12:12:30PM +0100, Richard W.M. Jones wrote: > On Tue, Jun 19, 2018 at 11:43:38AM +0100, Daniel P. Berrangé wrote: > > I'd encourage apps to check the capabilities XML to see what > > machine types are available. > > One issue is we don't always have access to the target hypervisor. > > For example in the Glance case we have to write
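A minimal sketch of that suggestion with the libvirt C API, assuming you can connect to the target hypervisor at all (which, as noted above, is not always the case, e.g. for Glance): fetch the capabilities XML, whose <machine> elements under <guest>/<arch> list the available machine types. Real code would parse the XML (e.g. with libxml2) rather than just dump it.

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system"); /* placeholder URI */
    if (!conn)
        return 1;
    char *caps = virConnectGetCapabilities(conn);
    if (caps) {
        puts(caps);     /* look for <machine> under <guest><arch> */
        free(caps);
    }
    virConnectClose(conn);
    return 0;
}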
2014 Jul 10
2
How to config qga to support dompmsuspend
Hi, I tried to run the dompmsuspend command on my PowerPC board but it failed. # virsh dompmsuspend sdk --target mem error: Domain sdk could not be suspended error: argument unsupported: QEMU guest agent is not configured From the capabilities output it seems only suspend-to-mem is supported. # virsh capabilities <capabilities> <host>
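The usual cause of "QEMU guest agent is not configured" is a missing virtio-serial channel in the domain XML. A hedged sketch that adds the standard channel to the persistent config via the libvirt C API; attaching through the API rather than editing the XML by hand is just one option, and the guest itself must also run qemu-guest-agent.

#include <stdio.h>
#include <libvirt/libvirt.h>

static const char *channel_xml =
    "<channel type='unix'>"
    "  <target type='virtio' name='org.qemu.guest_agent.0'/>"
    "</channel>";

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;
    virDomainPtr dom = virDomainLookupByName(conn, "sdk"); /* name from the post */
    if (dom) {
        if (virDomainAttachDeviceFlags(dom, channel_xml,
                                       VIR_DOMAIN_AFFECT_CONFIG) == 0)
            puts("channel added; restart the domain and retry dompmsuspend");
        virDomainFree(dom);
    }
    virConnectClose(conn);
    return 0;
}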
2012 Mar 21
1
Fwd: Error when executing virsh command to ESX
To Whom It May Concern: I found that the following error is caused by adding a SCSI disk to install the OS. I removed the SCSI disk and that error went away, but virsh still reports 'out of memory': virsh # dumpxml win2003_122 internal error Invalid or not yet handled value '/vmfs/devices/genscsi/mpx.vmhba0:C0:T1:L0' for VMX entry 'scsi0:1.fileName' virsh # list --all Id Name
2009 Dec 22
1
conga and "virsh nodeinfo"
Hi folks, I have run into a confusing problem. My initial problem is: Conga does not offer "Add a virtual machine service". So I googled and found a Red Hat advisory on that: http://rhn.redhat.com/errata/RHBA-2009-1623.html which points to updates that should fix this. I checked on my cluster, but the relevant packages are current (and even if ALL packages are current it does not work).
2019 Mar 27
6
[PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi
When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk/virtio-scsi, as they both have (tag_set->nr_maps == 1), they can use at most nr_cpu_ids hw queues. In addition, specifically for the pci scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk/virtio-scsi would not be
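A userspace illustration (not kernel code) of the limit this series enforces, under the assumption that the effective count is simply the smaller of the two values:

#include <stdio.h>

static unsigned int effective_hw_queues(unsigned int num_queues,
                                        unsigned int nr_cpu_ids)
{
    /* with tag_set->nr_maps == 1 the block layer caps queues at nr_cpu_ids */
    return num_queues < nr_cpu_ids ? num_queues : nr_cpu_ids;
}

int main(void)
{
    /* e.g. qemu started with maxcpus=4 but -device ...,num-queues=8 */
    printf("%u\n", effective_hw_queues(8, 4));   /* prints 4 */
    return 0;
}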
2014 Jun 23
1
operation on ‘numsels’ may be undefined
Dear all, For many years the following C++ code has compiled on ALL Bioconductor servers (Linux, Windows, Mac) without any warnings: Int_t numsels = 0; //number of selected entries ... for (Int_t i=0; i<size; i++) { numsels = (arrMask[i] == 1) ? ++numsels : numsels; }//for_i Even on the recently added release server 'zin2' Linux (Ubuntu 12.04.4 LTS) the
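The warning is about 'numsels = ++numsels', which modifies numsels twice without an intervening sequence point, so the result is undefined. An equivalent, well-defined form of the loop (plain int stands in for ROOT's Int_t, and the mask values are invented):

#include <stdio.h>

int main(void)
{
    int arrMask[] = {1, 0, 1, 1, 0};
    int size = 5;
    int numsels = 0;                 /* number of selected entries */
    for (int i = 0; i < size; i++) {
        if (arrMask[i] == 1)         /* same effect, only one modification */
            numsels++;
    }
    printf("%d\n", numsels);         /* prints 3 */
    return 0;
}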
2012 Mar 21
0
Error when executing virsh command to ESX
To Whom It May Concern: I use virsh 0.9.3 on Windows to connect to an ESX server 4.0 with no_verify=1. The ESX server has two Intel 3.19 GHz CPUs, 8 GB of memory and a 2 TB hard disk. The following are my commands and results. virsh -c esx://xxx.xxx.xxx.x?no_verify=1 virsh # list all Id Name State ---------------------------------- 80 win2003_122 running virsh # nodeinfo error:
2017 Jan 06
0
mlx4_0 Initializing and... (infiniband)
Hi all, I have a very basic setup: two boxes connected directly via two MHEH28-XTC cards, and I cannot activate them. One peculiar thing is that I get (randomly, and not often): [85947.090496] AMD-Vi: Event logged [ [85947.090539] IO_PAGE_FAULT device=09:00.7 domain=0x0000 address=0x00000000f6ffb000 flags=0x0050] [85947.298509] AMD-Vi: Event logged [ [85947.298550] IO_PAGE_FAULT device=09:00.7
2012 Oct 24
3
KVM + virsh nodeinfo + CentOS 6.3
Hi, Please let me know in case I am posting my question to the wrong forum. I apologize if that is the case! Here is my question: We run CentOS 6.3 on a server with dual Xeon CPU's. Our "dual blade" server uses this motherboard: http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRT-HF.cfm We have two of these CPUs installed and working: Intel(R) Xeon(R) CPU E5-2620 0 @
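For reference, the programmatic equivalent of "virsh nodeinfo" is virNodeGetInfo(); note that libvirt reports sockets per NUMA cell, a common source of confusion when its output is compared against the spec sheet of a dual-socket, hyper-threaded board like this one. A minimal sketch (connection URI assumed):

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;
    virNodeInfo ni;
    if (virNodeGetInfo(conn, &ni) == 0)
        /* read total logical CPUs from 'cpus' rather than recomputing it */
        printf("cpus=%u mhz=%u nodes=%u sockets=%u cores=%u threads=%u\n",
               ni.cpus, ni.mhz, ni.nodes, ni.sockets, ni.cores, ni.threads);
    virConnectClose(conn);
    return 0;
}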
2011 Sep 07
0
[PATCH] libxl: vcpu_avail is a bitmask, use it as such
vcpu_avail is a bitmask of available cpus but we are currently using it as the number of cpus available. This patch fixes it. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> diff -r 6580ff415189 tools/libxl/libxl_dm.c --- a/tools/libxl/libxl_dm.c Wed Sep 07 13:29:15 2011 +0000 +++ b/tools/libxl/libxl_dm.c Wed Sep 07 15:39:46 2011 +0000 @@ -360,8 +360,13 @@ static char
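A small illustration of the bug class this patch fixes: treating a bitmask as a count, versus counting its set bits (__builtin_popcountl is a GCC/Clang builtin; the 0xb mask is an invented example, not from the patch):

#include <stdio.h>

int main(void)
{
    unsigned long vcpu_avail = 0xb;  /* bits 0, 1 and 3 set: vcpus 0, 1, 3 */
    /* Wrong: using the mask directly as a number of cpus would give 11 */
    printf("mask read as a count: %lu\n", vcpu_avail);
    /* Right: the number of available vcpus is the number of set bits */
    printf("available vcpus: %d\n", __builtin_popcountl(vcpu_avail));
    return 0;
}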
2019 Mar 27
0
[PATCH 1/2] virtio-blk: limit number of hw queues by nr_cpu_ids
When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for pci scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus
2019 Mar 27
0
[PATCH 2/2] scsi: virtio_scsi: limit number of hw queues by nr_cpu_ids
When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-scsi, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for pci scenario, when the 'num_queues' specified by qemu is more than maxcpus, virtio-scsi would not be able to allocate more than maxcpus
2019 Apr 27
0
[PATCH AUTOSEL 5.0 65/79] virtio-blk: limit number of hw queues by nr_cpu_ids
From: Dongli Zhang <dongli.zhang at oracle.com> [ Upstream commit bf348f9b78d413e75bb079462751a1d86b6de36c ] When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for pci scenario, when the
2019 Apr 27
0
[PATCH AUTOSEL 4.19 44/53] virtio-blk: limit number of hw queues by nr_cpu_ids
From: Dongli Zhang <dongli.zhang at oracle.com> [ Upstream commit bf348f9b78d413e75bb079462751a1d86b6de36c ] When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for pci scenario, when the
2019 Apr 27
0
[PATCH AUTOSEL 4.14 26/32] virtio-blk: limit number of hw queues by nr_cpu_ids
From: Dongli Zhang <dongli.zhang at oracle.com> [ Upstream commit bf348f9b78d413e75bb079462751a1d86b6de36c ] When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for pci scenario, when the
2019 Apr 27
0
[PATCH AUTOSEL 4.9 13/16] virtio-blk: limit number of hw queues by nr_cpu_ids
From: Dongli Zhang <dongli.zhang at oracle.com> [ Upstream commit bf348f9b78d413e75bb079462751a1d86b6de36c ] When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for pci scenario, when the
2011 Mar 09
0
[PATCH 04/11] x86: cleanup mpparse.c
Remove unused and pointless bits from mpparse.c (and other files where they are related to it). Of what remains, move whatever possible into .init.*, and some data items into .data.read_mostly. Signed-off-by: Jan Beulich <jbeulich@novell.com> --- 2011-03-09.orig/xen/arch/x86/acpi/boot.c +++ 2011-03-09/xen/arch/x86/acpi/boot.c @@ -177,7 +177,8 @@ acpi_parse_x2apic(struct acpi_subtable_h