
Displaying 16 results from an estimated 500 matches similar to: "Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!"

2017 Sep 02
2
Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!
On 09/01/2017 02:41 PM, Kevin Stange wrote: > On 08/31/2017 07:50 AM, PJ Welsh wrote: >> A recently created and fully functional CentOS 7.3 VM fails to boot >> after applying CR updates: > <snip> >> Server OS is CentOS 7.3 using Xen (no CR updates): >> rpm -qa xen\* >> xen-hypervisor-4.6.3-15.el7.x86_64 >> xen-4.6.3-15.el7.x86_64 >>
2017 Sep 04
2
Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!
On 09/04/2017 03:59 PM, Kevin Stange wrote: > On 09/02/2017 08:11 AM, Johnny Hughes wrote: >> On 09/01/2017 02:41 PM, Kevin Stange wrote: >>> On 08/31/2017 07:50 AM, PJ Welsh wrote: >>>> A recently created and fully functional CentOS 7.3 VM fails to boot >>>> after applying CR updates: >>> <snip> >>>> Server OS is CentOS 7.3
2017 Sep 01
0
Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!
On 08/31/2017 07:50 AM, PJ Welsh wrote: > A recently created and fully functional CentOS 7.3 VM fails to boot > after applying CR updates: <snip> > Server OS is CentOS 7.3 using Xen (no CR updates): > rpm -qa xen\* > xen-hypervisor-4.6.3-15.el7.x86_64 > xen-4.6.3-15.el7.x86_64 > xen-licenses-4.6.3-15.el7.x86_64 > xen-libs-4.6.3-15.el7.x86_64 >
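For context on this thread: CR (Continuous Release) updates on CentOS 7 come from the "cr" repository, which ships disabled by default. A minimal sketch of how the updates in question would typically be applied inside the VM, assuming the stock CentOS-CR.repo file (the rpm output mirrors the package versions quoted above):

    # Inside the CentOS 7.3 VM: pull in CR updates from the normally disabled repo
    yum clean all
    yum --enablerepo=cr update

    # On the Xen host (left without CR updates in the report): verify versions
    rpm -qa 'xen*'
    # e.g. xen-hypervisor-4.6.3-15.el7.x86_64, xen-4.6.3-15.el7.x86_64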
2017 Sep 06
3
Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!
On 09/05/2017 02:26 PM, Kevin Stange wrote: > On 09/04/2017 05:27 PM, Johnny Hughes wrote: >> On 09/04/2017 03:59 PM, Kevin Stange wrote: >>> On 09/02/2017 08:11 AM, Johnny Hughes wrote: >>>> On 09/01/2017 02:41 PM, Kevin Stange wrote: >>>>> On 08/31/2017 07:50 AM, PJ Welsh wrote: >>>>>> A recently created and fully functional CentOS
2017 Sep 04
0
Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!
On 09/02/2017 08:11 AM, Johnny Hughes wrote: > On 09/01/2017 02:41 PM, Kevin Stange wrote: >> On 08/31/2017 07:50 AM, PJ Welsh wrote: >>> A recently created and fully functional CentOS 7.3 VM fails to boot >>> after applying CR updates: >> <snip> >>> Server OS is CentOS 7.3 using Xen (no CR updates): >>> rpm -qa xen\* >>>
2017 Sep 05
0
Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!
On 09/04/2017 05:27 PM, Johnny Hughes wrote: > On 09/04/2017 03:59 PM, Kevin Stange wrote: >> On 09/02/2017 08:11 AM, Johnny Hughes wrote: >>> On 09/01/2017 02:41 PM, Kevin Stange wrote: >>>> On 08/31/2017 07:50 AM, PJ Welsh wrote: >>>>> A recently created and fully functional CentOS 7.3 VM fails to boot >>>>> after applying CR updates:
2015 Dec 20
8
[Bug 93458] New: page allocation failure: order:5, mode:0x240c0c0
https://bugs.freedesktop.org/show_bug.cgi?id=93458 Bug ID: 93458 Summary: page allocation failure: order:5, mode:0x240c0c0 Product: xorg Version: 7.7 (2012.06) Hardware: x86-64 (AMD64) OS: Linux (All) Status: NEW Severity: normal Priority: medium Component: Driver/nouveau
2019 Jul 12
2
Out of memory: kill process
Hello, On my bridgehead DC I can see lots of "Out of memory: kill process" entries in kern.log. I'm using Samba 4.9.6 (11147 objects) on Debian Stretch 64-bit. This DC synchronizes from/to 20 other DCs in bridgehead mode (no mesh). Here is my VM (Hyper-V 2016) config: 4 x vCPU (Intel Xeon Silver 4110), 2 GB RAM, 1 GB swap. I'm really not an expert on this
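A minimal diagnostic sketch for a DC in this state, assuming the standard Debian Stretch log location (paths and process names are illustrative, not taken from the post):

    # Count and inspect OOM-killer events in the kernel log
    grep -ci 'out of memory' /var/log/kern.log
    grep -i 'killed process' /var/log/kern.log | tail -n 5

    # Check headroom; 2 GB RAM + 1 GB swap is tight for a DC replicating
    # with 20 partners
    free -m
    ps -C samba -o rss=,comm= | sort -rn | head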
2017 Oct 23
1
problems running a vol over IPoIB, and qemu off it?
Hi people, I wonder if anybody experiences any problems with volumes in replica mode that run across IPoIB links while libvirt stores qcow images on such a volume? I wonder if the devs could confirm it should just work, and then I should blame the hardware/InfiniBand. I have a direct IPoIB link between two hosts, a gluster replica volume, and libvirt stores disk images there. I start a guest on hostA and
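A minimal sketch of the setup being described, assuming two peers hostA and hostB with bricks under /bricks/b1 (all names are illustrative, not taken from the thread):

    # On hostA: form the trusted pool and create a 2-way replica volume
    gluster peer probe hostB
    gluster volume create gv0 replica 2 hostA:/bricks/b1 hostB:/bricks/b1
    gluster volume start gv0

    # Mount the volume over the IPoIB addresses where libvirt keeps its images
    mount -t glusterfs hostA:/gv0 /var/lib/libvirt/images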
2009 Sep 09
4
Dmesg log for 2.6.31-rc8 kernel been built on F12 (rawhide) vs log for same kernel been built on F11 and installed on F12
Previous 2.6.31-rc8 kernel was built on F11 and installed with modules on F12. Current kernel has been built on F12 (2.6.31-0.204.rc9.fc12.x86_64) and installed on F12 before loading under Xen 3.4.1. Dmesg log looks similar to Michael Young's 'rc7.git4' kernel for F12. Boris. --- On Tue, 9/8/09, Boris Derzhavets <bderzhavets@yahoo.com> wrote: From: Boris
2017 Oct 18
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
Tetsuo Handa wrote: > 20171016-deflate.log.xz continued printing "puff" messages without any OOM > killer messages, for fill_balloon() always inflates faster than leak_balloon() > deflates. > > Since the OOM killer cannot be invoked unless leak_balloon() completely > deflates faster than fill_balloon() inflates, the guest remained unusable > (e.g. unable to login
2019 Dec 04
5
[PATCH] virtio-balloon: fix managed page counts when migrating pages between zones
In case we have to migrate a balloon page to a newpage of another zone, the managed page count of both zones is wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. One way to reproduce: 1. Start a QEMU guest with 4GB, no NUMA 2. Hotplug a 1GB DIMM and online the memory to ZONE_NORMAL 3. Inflate the balloon
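A minimal sketch of the three reproduction steps using the QEMU command line and HMP monitor, assuming KVM is available; the disk image name and the mem1/dimm1 ids are placeholders:

    # 1. Start a guest with 4GB, no NUMA, and room for DIMM hotplug
    qemu-system-x86_64 -enable-kvm -m 4G,slots=2,maxmem=8G \
        -device virtio-balloon-pci,id=balloon0 \
        -monitor stdio guest.img

    # 2. Hotplug a 1GB DIMM (then online it to ZONE_NORMAL inside the guest)
    (qemu) object_add memory-backend-ram,id=mem1,size=1G
    (qemu) device_add pc-dimm,id=dimm1,memdev=mem1

    # 3. Inflate the balloon, e.g. shrink the guest target to 3072 MB
    (qemu) balloon 3072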
2017 Oct 16
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
Tetsuo Handa wrote: > Michael S. Tsirkin wrote: > > > > > > > > The proper fix isn't that hard - just avoid allocations under lock. > > > > > > > > Patch posted, pls take a look. > > > > > > Your patch allocates pages in order to inflate the balloon, but > > > your patch will allow leak_balloon() to deflate the
2019 Dec 05
2
[PATCH v2] virtio-balloon: fix managed page counts when migrating pages between zones
In case we have to migrate a balloon page to a newpage of another zone, the managed page count of both zones is wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. One way to reproduce: 1. Start a QEMU guest with 4GB, no NUMA 2. Hotplug a 1GB DIMM and online the memory to ZONE_NORMAL 3. Inflate the balloon
2017 Sep 07
3
Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!
On 09/06/2017 05:21 PM, Kevin Stange wrote: > On 09/06/2017 08:40 AM, Johnny Hughes wrote: >> On 09/05/2017 02:26 PM, Kevin Stange wrote: >>> On 09/04/2017 05:27 PM, Johnny Hughes wrote: >>>> On 09/04/2017 03:59 PM, Kevin Stange wrote: >>>>> On 09/02/2017 08:11 AM, Johnny Hughes wrote: >>>>>> On 09/01/2017 02:41 PM, Kevin Stange wrote:
2019 Dec 11
1
[PATCH v3] virtio-balloon: fix managed page counts when migrating pages between zones
In case we have to migrate a balloon page to a newpage of another zone, the managed page count of both zones is wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. One way to reproduce: 1. Start a QEMU guest with 4GB, no NUMA 2. Hotplug a 1GB DIMM and online the memory to ZONE_NORMAL 3. Inflate the