similar to: Release of libvirt-0.9.0

Displaying 20 results from an estimated 4000 matches similar to: "Release of libvirt-0.9.0"

2011 May 05
0
Release of libvirt-0.9.1
As planned, and after most of the clang-detected problems got fixed (thanks, Eric!), the new release is available at: ftp://libvirt.org/libvirt/ It's a mixed release: it includes a number of improvements as well as many bug fixes and a few new features: Features: - support various persistent domain updates (KAMEZAWA Hiroyuki) - improvements on memory APIs (Taku Izumi) - Add
2011 Jun 06
0
Release of libvirt-0.9.2
As planned, the new release is available at: ftp://libvirt.org/libvirt/ It is a rather large release with nearly 400 commits included. From a user point of view, the main improvement is likely to be in migration, as various work has been done to extend the protocol and to avoid having the migration command stop other concurrent operations (like virsh list). See below for a number of
2010 Jul 05
0
Release of libvirt-0.8.2
Following Dan's advice, I decided not to wait for more patches and to push the current git head as the release. Let's plan to have another release by the end of this month with the QEmu debugging and hacking APIs. The release is available as usual at ftp://libvirt.org/libvirt Quite a lot of bug fixes during the two months since 0.8.1, and a few new features. It also tagged more commits as being
2018 Sep 14
0
Re: live migration and config
On Fri, Sep 14, 2018 at 16:00:43 +0400, Dmitry Melekhov wrote: > 14.09.2018 15:43, Jiri Denemark wrote: > > On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote: > >> > >> 13.09.2018 18:57, Jiri Denemark wrote: > >>> On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote: > >>>> 13.09.2018 17:47, Jiri Denemark wrote: >
2015 Apr 15
0
[PATCH] vhost: fix log base address
VHOST_SET_LOG_BASE got an incorrect address, causing migration errors and potentially even memory corruption. Cc: Peter Maydell <peter.maydell at linaro.org> Reported-by: Wen Congyang <wency at cn.fujitsu.com> Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- Could you please confirm this fixes the problem for you? hw/virtio/vhost.c | 5 ++++- 1 file changed, 4
2018 Sep 14
2
Re: live migration and config
14.09.2018 15:43, Jiri Denemark wrote: > On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote: >> >> 13.09.2018 18:57, Jiri Denemark wrote: >>> On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote: >>>> 13.09.2018 17:47, Jiri Denemark wrote: >>>>> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
2012 Apr 17
2
[libvirt] [test-API 2/3] Add the better copyright statements in scripts
On Tue, Apr 17, 2012 at 07:22:48PM +0800, Osier Yang wrote: > [ CC to Rich ] > > On 2012-04-17 19:18, Osier Yang wrote: > >On 2012-04-17 19:13, Daniel P. Berrange wrote: > >>On Tue, Apr 17, 2012 at 07:09:36PM +0800, Osier Yang wrote: > >>>On 2012-04-17 19:04, Daniel P. Berrange wrote: > >>>>On Tue, Apr 17, 2012 at 06:59:24PM +0800, Osier Yang
2018 Sep 13
2
Re: live migration and config
13.09.2018 18:57, Jiri Denemark wrote: > On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote: >> >> 13.09.2018 17:47, Jiri Denemark wrote: >>> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote: >>>> After some mistakes yesterday we (me and my colleague) think that it >>>> will be wise for libvirt to check config file existence
2018 Sep 14
0
Re: live migration and config
On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote: > > > 13.09.2018 18:57, Jiri Denemark wrote: > > On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote: > >> > >> 13.09.2018 17:47, Jiri Denemark wrote: > >>> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote: > >>>> After some mistakes yesterday we (me
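The thread above concerns domains whose persistent configuration is missing on the destination host after a live migration. A minimal sketch of the usual approach, assuming a hypothetical domain name guest1 and destination host dst.example.com: --persistent asks libvirt to define the domain on the destination, and --undefinesource removes the definition from the source, so the config travels with the guest.

# migrate the running guest and carry its persistent definition along
virsh migrate --live --persistent --undefinesource guest1 qemu+ssh://dst.example.com/system
# afterwards the destination should report the domain as persistent
virsh -c qemu+ssh://dst.example.com/system dominfo guest1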
2016 Jan 14
2
Re: [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On 01/14/2016 05:12 AM, Daniel P. Berrange wrote: > On Thu, Jan 14, 2016 at 10:51:47AM +0100, Jiri Denemark wrote: >> On Wed, Jan 13, 2016 at 16:25:14 +0100, Martin Kletzander wrote: >>> On Wed, Jan 13, 2016 at 10:18:42AM +0000, Richard W.M. Jones wrote: >>>> As people may know, we frequently encounter errors caused by libvirt >>>> when running the
2016 Jan 14
0
Re: [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On Thu, Jan 14, 2016 at 10:51:47AM +0100, Jiri Denemark wrote: > On Wed, Jan 13, 2016 at 16:25:14 +0100, Martin Kletzander wrote: > > On Wed, Jan 13, 2016 at 10:18:42AM +0000, Richard W.M. Jones wrote: > > >As people may know, we frequently encounter errors caused by libvirt > > >when running the libguestfs appliance. > > > > > >I wanted to find out
2012 Aug 29
1
Use virsh command domjobinfo but get nothing
Hi all, I tested the virsh command "domjobinfo" on x86-i386 and PPC64 hosts, but both show nothing. # virsh version Compiled against library: libvir 0.9.13 Using library: libvir 0.9.13 Using API: QEMU 0.9.13 Running hypervisor: QEMU 1.1.50 # virsh list Id Name State ---------------------------------------------------- 21 f16-ppc-qcow2
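For context, virsh domjobinfo only reports statistics while a job (such as a migration, save, or dump) is actually running on the domain; with no active job it prints nothing useful, which may explain the empty output above. A small sketch using the f16-ppc-qcow2 domain from the listing and a hypothetical destination host dst.example.com:

# start a long-running job, e.g. a live migration, in one terminal
virsh migrate --live f16-ppc-qcow2 qemu+ssh://dst.example.com/system
# while it runs, query the job statistics from another terminal
virsh domjobinfo f16-ppc-qcow2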
2013 May 11
0
[PATCH v6, part3 13/16] mm: correctly update zone->managed_pages
Enhance adjust_managed_page_count() to adjust totalhigh_pages for highmem pages. Also change code which directly adjusts totalram_pages to use adjust_managed_page_count(), because it adjusts totalram_pages, totalhigh_pages and zone->managed_pages together in a safe way. Remove inc_totalhigh_pages() and dec_totalhigh_pages() from the xen/balloon driver because adjust_managed_page_count() has
2017 Feb 20
2
Re: Determining domain job kind from job stats?
Jiri Denemark <jdenemar@redhat.com> writes: > On Fri, Feb 17, 2017 at 12:38:24 +0100, Milan Zamazal wrote: >> Jiri Denemark <jdenemar@redhat.com> writes: >> >> > On Fri, Feb 10, 2017 at 21:50:19 +0100, Milan Zamazal wrote: >> >> Hi, is there a reliable way to find out to what kind of job the >> >> information returned from
2017 Apr 26
0
Re: Tunnelled migrate Windows7 VMs halted
On Wed, Apr 26, 2017 at 08:51:39AM -0500, Eric Blake wrote: > > > > I migrated a Windows 7 VM using tunnelled migration through libvirtd, and the VM halted > > on the target although the status is running. What do you mean by halted? The guest OS has shut down, or QEMU has crashed, or something else? > > > > > > [root@test15 ~]# virsh migrate --live --p2p --tunnelled
2013 Jul 31
0
Re: start lxc container on fedora 19
On Wed, Jul 31, 2013 at 12:46:58PM +0530, Aarti Sawant wrote: > hello, > > I am new to LXC. I have created an LXC container on Fedora 19. > I created a container rootfs of Fedora 19 by using > yum --installroot=/containers/test1 --releasever=19 install openssh > > test1.xml file for container test1 > <domain type="lxc"> > <name>test1</name>
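A short sketch of how such a container definition is typically registered and started with libvirt's LXC driver, assuming the test1.xml file from the message above:

# define the container from its XML and start it under the LXC driver
virsh -c lxc:/// define test1.xml
virsh -c lxc:/// start test1
# confirm it is running
virsh -c lxc:/// list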
2023 May 12
1
Question regaring correct usage of CPU shares
Hi there, I have a question regarding the shares option of the cputune section. I want to illustrate my question with the following example. Let's assume I have two virtual machines like the following on four dedicated cores with two threads each: VM1: <cputune> <shares>512</shares> <vcpupin vcpu="0" cpuset="0"/> <vcpupin
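For reference, <shares> is a relative weight rather than an absolute CPU allocation: it only takes effect when runnable vCPUs compete for the same host CPUs, and a guest with 1024 shares gets roughly twice the CPU time of one with 512 under contention. A minimal sketch of inspecting and changing the value at runtime with virsh schedinfo, assuming hypothetical domain names VM1 and VM2:

# show the scheduler parameters, including cpu_shares, for each guest
virsh schedinfo VM1
virsh schedinfo VM2
# double VM2's weight relative to VM1 for the running instance
virsh schedinfo VM2 --live --set cpu_shares=1024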