Elliott Mitchell
2021-Aug-21 03:50 UTC
[Pkg-xen-devel] Experiment with rebuilt 4.14.2+25-gb6a8c4f72d-2
I've got a rough border between what I'm up for and not. I'm not up for
running full testing if I can avoid it, but I'm also not up for security
holes of doom.

As a result my approach was to try building 4.14.2+25-gb6a8c4f72d-2 for
stable. The only difference this causes that I've found is depending on
libc6 2.28-10 instead of 2.31-13. I'm concerned about scripts, but one
has to accept those risks.

So far things work as expected.

There is a domain configuration I've found which causes the Xen
hypervisor to reliably panic. This occurs with both 4.11 and 4.14 (so it
is no worse with 4.14). I think the minimum set of configuration options
is `type = "hvm"`, `memory = 7168` and `maxmem = 15726`. Mainly I
*think* the issue is HVM and memory != maxmem. This also seems unlikely
to be a Debian package bug, yet no one I asked on the Xen lists had seen
it before.

Anyone up for trying to recreate it? (Beware: this appears to be a Xen
panic; the entire system gets reset with no chance to save running VMs.)
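If anyone wants to try, a guest configuration along these lines should
be the minimal test. The name is a placeholder, and whether a disk is
needed to trigger it is untested (I'm assuming not):

    # hypothetical /etc/xen/panic-test.cfg -- HVM guest, memory != maxmem
    type   = "hvm"
    name   = "panic-test"
    memory = 7168     # MiB allocated at start
    maxmem = 15726    # MiB maximum; the mismatch appears to be the trigger

Then `xl create /etc/xen/panic-test.cfg` on a machine whose running VMs
you can afford to lose.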
Elliott Mitchell
2021-Sep-01 22:35 UTC
[Pkg-xen-devel] Experiment with rebuilt 4.14.2+25-gb6a8c4f72d-2
On Fri, Aug 20, 2021 at 08:50:44PM -0700, Elliott Mitchell wrote:
> I've got a rough border between what I'm up for and not. I'm not up
> for running full testing if I can avoid it, but I'm also not up for
> security holes of doom.
>
> As a result my approach was to try building 4.14.2+25-gb6a8c4f72d-2
> for stable. The only difference this causes that I've found is
> depending on libc6 2.28-10 instead of 2.31-13. I'm concerned about
> scripts, but one has to accept those risks.
>
> So far things work as expected.

Configuration from 4.11 works fine with 4.14.

I've been wanting PVH for a while, but hadn't gotten it working from
Debian packages. Trying to boot a PVH guest using pvGRUB produces an
error similar to what Colin Watson saw in #776450:

    xc: error: panic: xc_dom_elfloader.c:64: xc_dom_guest_type: image not capable of booting inside a HVM container: Invalid kernel
    libxl: error: libxl_dom.c:578:libxl__build_dom: xc_dom_parse_image failed

This might be an issue of the Debian GRUB 2.02 packages not liking Xen
4.14, but reading #776450 I'm wondering whether setups like this ever
worked. (A sketch of the configuration I tried is at the end of this
message.)

Power management appears to be a lower priority for Xen development,
and the result is various levels of funkiness. With Xen 4.11, enabling
higher C-states (lower power) on a core required Domain 0 to have a
corresponding vCPU. You could offline the extra vCPUs after boot using
`xl vcpu-set` (example below), but you still ended up with unused
vCPUs. Unfortunately it appears the Linux 4.19.194 kernel is unable to
enable higher C-states with Xen 4.14. Apparently the fixes for Xen 4.14
didn't get backported to 4.19 (this is discouraging). I'm unsure who to
contact about this issue.
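For reference, the PVH guest configuration was roughly this. The name
and disk path are placeholders, and the kernel path is the pvGRUB image
shipped by Debian's grub-xen-host package, which I'm assuming is the
binary meant for this:

    # hypothetical /etc/xen/pvh-test.cfg -- PVH guest booted via pvGRUB
    type   = "pvh"
    name   = "pvh-test"
    memory = 2048
    kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
    disk   = [ 'file:/var/lib/xen/images/pvh-test.img,xvda,w' ]

`xl create` on this fails at xc_dom_parse_image with the error above,
which would fit if that image is built for PV only rather than for a
PVH/HVM container.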
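The `xl vcpu-set` workaround mentioned above looked like this (the core
count here is hypothetical):

    # Hypervisor command line: give Domain 0 one vCPU per physical core,
    # e.g. on an 8-core machine:
    #     dom0_max_vcpus=8
    # After boot, trim Domain 0 back to the vCPUs it actually needs:
    xl vcpu-set 0 2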