Installed build 134 on some IBM x3550 servers and keep getting the message: HYPERVISOR_update_va_mapping failed. Press any key to reboot. Is there anything I can do to get around it? Build 134 boots just fine without XEN. -- This message posted from opensolaris.org
We actually have 4 of these IBMs; they come with 24GB of memory in 4 sticks and 1 quad-core CPU. I tried xen.gz-4.0.1, but it failed the same way. I tried specifying mem=1024M and, to my surprise, it booted. So I went through values to find the exact cutoff:

1024 - OK
4096 - NG
2048 - NG
1536 - OK
1664 - NG
1660 - OK
1663 - OK

Not entirely sure why 1663M boots but 1664M does not. I also tried mem=2048M dom0_mem=1024M, but that does not boot. It would appear that if the hypervisor tries to use more than 1663M, it dies. This is the same on all 4 machines, so we don't think it is a faulty stick. (The sticks appear to be 4GB anyway, and 1664M should be well inside one stick.)
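For reference, a minimal sketch of how the mem= cap was passed to the hypervisor, as a legacy-GRUB menu.lst entry of the kind a stock OpenSolaris xVM install generates (the title, findroot pool name, and exact module paths are assumptions and may differ on your system):

```
title OpenSolaris xVM (hypervisor capped at 1663M)
findroot (pool_rpool,0,a)
# mem= limits how much physical memory the hypervisor itself will use;
# dom0_mem= would additionally cap the dom0 allocation
kernel$ /boot/$ISADIR/xen.gz mem=1663M
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
```

With mem=1664M or higher on the kernel$ line, these machines died with the HYPERVISOR_update_va_mapping error above; at 1663M they booted.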
Downloaded the XEN liveCD to test, and it boots just fine, so it is most likely a problem with OpenSolaris. I followed the illumos guide to go to b145, but it just panics on boot (without xen), and if I try to boot it with xen, I get the same error. My guess is that the hardware is fine, but OpenSolaris has issues with some part of this hardware when used with xen.
Also tried OpenIndiana b147, but it gives the same error: HYPERVISOR_update_va_mapping failed. Tried 32-bit xen and/or 32-bit Solaris: same error. Copied the xen from the liveCD: same error. Copied xen and vmlinux to /boot/amd64, to run from the Solaris grub. It boots the linux kernel until rootfs can't be mounted, then panics because of that. So it would appear to be Solaris that has the problem. Tried both NUMA and non-NUMA memory settings, as well as xen numa=on; no difference. We will throw in the towel on this and drop Solaris. The IBMs will have to run Linux. Sorry for the noise.