Displaying 20 results from an estimated 1100 matches similar to: "Re: virsh not detecting hugepage mount; disabled by config?"

2016 Dec 13 (1 reply)
virsh not detecting hugepage mount; disabled by config?
Hi, I’m struggling with virsh not detecting my hugepage mount. I have the following kernel command line: BOOT_IMAGE=/vmlinuz-4.8.13-gentoo root=/dev/mapper/gensd-gentoo ro quiet splash intel_iommu=on video=efifb:off,vesafb:off,simplefb:off splash=verbose,theme:livedvd-aurora kvm.ignore_msrs=1 transparent_hugepage=never hugepages=3072 softlevel=qemuvm My startup script outputs the following:
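For anyone hitting the same symptom: libvirt only discovers hugepages through a hugetlbfs entry in /proc/mounts, so a minimal checklist looks roughly like the sketch below (the mount point and OpenRC service name are assumptions, not from the original mail):

    grep -i huge /proc/meminfo          # expect HugePages_Total: 3072 from the boot line
    grep hugetlbfs /proc/filesystems    # confirm the kernel supports hugetlbfs at all
    mkdir -p /dev/hugepages             # mount point is illustrative
    mount -t hugetlbfs hugetlbfs /dev/hugepages
    /etc/init.d/libvirtd restart        # re-read the mount table (OpenRC name assumed)
    grep hugetlbfs /proc/mounts         # this is what libvirt scans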
2014 Jun 12 (0 replies)
about sharing the hugepage memory segment between the host and the container
Dear all, What I want to do is to share a hugepage memory segment between the host and a container (I am trying to use the Intel DPDK package in a container). For normal memory (4k pages), the sharing can be achieved by the memory-mapped I/O (mmap()) method with the same disk file on the host exposed to the container (that is, the host and the container share the same disk file). My questions are:
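A hedged sketch of the hugetlbfs analogue of that mmap() approach: put the backing file on a hugetlbfs mount and bind-mount the directory into the container, so both sides map the same file. The container name and paths below are illustrative:

    mount -t hugetlbfs hugetlbfs /dev/hugepages    # if not already mounted
    mkdir -p /dev/hugepages/dpdk
    # LXC-style bind mount of the same directory into the container:
    echo "lxc.mount.entry = /dev/hugepages/dpdk dev/hugepages/dpdk none bind,create=dir 0 0" \
        >> /var/lib/lxc/dpdk-guest/config
    # Host and container processes can then mmap() the same file, just as
    # with a 4k-page disk file:
    #   fd = open("/dev/hugepages/dpdk/seg0", O_CREAT | O_RDWR, 0600);
    #   p  = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);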
2015 Feb 02 (0 replies)
Re: HugePages - can't start guest that requires them
Regarding fine-tuning my explanation about which system does the actual mounting of Hugepages, you're probably right... Thanks for the correction. On upstart systems (like Ubuntu) the mounting of Hugepages is done by the init script qemu-kvm.conf. From: G. Richard Bellamy [mailto:rbellamy at pteradigm.com] Sent: Sunday 1 February 2015 0:02 To: Dominique Ramaekers CC: libvirt-users at
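For reference, the toggle on those upstart-era Ubuntu releases looked roughly like this (key name from Ubuntu's qemu-kvm packaging; details vary by release):

    # /etc/default/qemu-kvm
    KVM_HUGEPAGES=1

    service qemu-kvm restart   # the init script then performs the mount
    mount | grep huge          # e.g. /run/hugepages/kvm on those releases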
2015 Jan 31 (2 replies)
Re: HugePages - can't start guest that requires them
Yeah, Dominique, your wiki was one of the many docs I read through before/during/after starting down this primrose path... thanks for writing it. I'm an Arch user, and I couldn't find anything to indicate qemu, as it's compiled for Arch, will look in /etc/default/qemu-kvm. And now that I've got the right page size, the instances are starting... The reason I want to use the page element
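The page element being referred to is libvirt's per-guest hugepage size selector in the domain XML. A sketch, with the guest name taken from this thread and the size value purely illustrative:

    virsh edit atlas
    # ...then, inside <domain>, request 2 MiB pages explicitly:
    #   <memoryBacking>
    #     <hugepages>
    #       <page size='2048' unit='KiB'/>
    #     </hugepages>
    #   </memoryBacking>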
2015 Jan 30 (4 replies)
HugePages - can't start guest that requires them
Hello All, I'm trying to enable hugepages, I've turned off THP (Transparent Huge Pages), and enabled hugepages in memoryBacking, and set my 2MB hugepages count via sysctl. I'm getting "libvirtd[5788]: Failed to autostart VM 'atlas': internal error: Unable to find any usable hugetlbfs mount for 16777216 KiB" where atlas is one of my guests and 16777216 KiB is the
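16777216 KiB is 16 GiB, which with 2048 KiB pages works out to 8192 hugepages, so the pool must be at least that big before the guest can start. A sketch:

    # 16777216 KiB / 2048 KiB per page = 8192 pages
    sysctl vm.nr_hugepages=8192
    grep HugePages /proc/meminfo    # HugePages_Total should now read 8192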
2015 Jan 31 (0 replies)
Re: HugePages - can't start guest that requires them
Did you create a mount for the hugepages? If you did, that's maybe the problem. I did that at first too, but with libvirt it isn't necessary and in my case it broke hugepages... If I'm not mistaken, libvirt takes care of the hugepages mount. A while ago, I wrote a wiki on using hugepages with libvirt and Ubuntu. https://help.ubuntu.com/community/KVM%20-%20Using%20Hugepages
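A quick way to verify that on any given host, as a sketch:

    mount -t hugetlbfs                   # systemd hosts usually show /dev/hugepages
    virsh capabilities | grep -i pages   # the page sizes libvirt has detected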
2015 Jan 31 (0 replies)
Re: HugePages - can't start guest that requires them
On Fri, Jan 30, 2015 at 03:33:43PM -0800, G. Richard Bellamy wrote:
>Hello All,
>
>I'm trying to enable hugepages, I've turned off THP (Transparent Huge
>Pages), and enabled hugepages in memoryBacking, and set my 2MB
>hugepages count via sysctl.
>
>I'm getting "libvirtd[5788]: Failed to autostart VM 'atlas': internal
>error: Unable to find any
2015 Feb 04 (2 replies)
Re: HugePages - can't start guest that requires them
As I mentioned, I got the instances to launch... but they're only taking HugePages from "Node 0", when I believe my setup should pull from both nodes. [atlas] http://sprunge.us/FSEf [prometheus] http://sprunge.us/PJcR

2015-02-03 16:51:48 root@eanna i ~ # virsh start atlas
Domain atlas started
2015-02-03 16:51:58 root@eanna i ~ # virsh start prometheus
Domain prometheus started
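The per-node split is visible in sysfs, and pages can be pinned to a node explicitly instead of relying on the round-robin spread that vm.nr_hugepages gives. A sketch with illustrative counts for a 2-node box:

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 4096 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages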
2011 Mar 20 (6 replies)
PATCH: Hugepage support for Domains booting with 4KB pages
We have implemented hugepage support for guests in the following manner: we added a parameter hugepage_num, which is specified in the config file of the DomU. It is the number of hugepages that the guest is guaranteed to receive whenever the kernel asks for hugepages, via its boot-time parameter or by reserving them after booting (e.g. using echo XX > /proc/sys/vm/nr_hugepages).
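A sketch of how the proposed knob sits next to the standard Linux interface (hugepage_num comes from this patch, not mainline Xen; the count is illustrative):

    # /etc/xen/guest.cfg -- DomU config, per the patch:
    #   hugepage_num = 32
    # Inside the guest, reservation then works the usual Linux way:
    echo 32 > /proc/sys/vm/nr_hugepages
    grep Huge /proc/meminfo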
2016 Dec 09 (0 replies)
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
Hello, On Fri, Dec 09, 2016 at 05:35:45AM +0000, Li, Liang Z wrote: > > On 12/08/2016 08:45 PM, Li, Liang Z wrote: > > > What's the conclusion of your discussion? It seems you want some > > > statistics before deciding whether to rip the bitmap from the ABI, > > > am I right? > > > > I think Andrea and David feel pretty strongly that we should
2011 Jan 10 (9 replies)
Hugepage Support
Hi, I tried to make a huge page request in a Fedora x86_64 PV guest using xen 4.1 unstable and it crashed (crash info given below). I had enabled superpages in the config file, and I had also set the hugepages parameter at boot time for the PV DomU. Executing # cat /proc/meminfo | grep Huge showed that there are 10 free huge pages available, yet the domain still crashed. [ 86.403654] BUG: unable to handle
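For anyone reconstructing the setup, the pieces involved would look roughly like this (config file name illustrative; superpages is the standard PV guest-config option the poster mentions):

    # /etc/xen/fedora-pv.cfg -- enable superpage backing for the PV guest:
    #   superpages = 1
    # Inside the guest, the boot-time reservation is then visible as usual:
    grep Huge /proc/meminfo    # HugePages_Free shows what the guest reserved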
2017 Feb 26 (1 reply)
error : Failed to switch root mount into slave mode: Permission denied
libvirt-3.0.0 When attempting to create a virtual machine I receive the error "error : Failed to switch root mount into slave mode: Permission denied". I'm attempting to run qemu/libvirt/virt-manager in an Arch Linux lxc container on an Ubuntu 16.04 host. The host uses zfs for its containers. The arch container is set up as a privileged container. I do already have kvm/qemu/libvirt working
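That error string corresponds to libvirt making the root mount a recursive slave while it builds a guest mount namespace. Running the equivalent by hand inside the container, as sketched here, shows whether the container's confinement is what denies it:

    mount --make-rslave /
    # EPERM here as well means the block is the container runtime's doing
    # (mount propagation or an AppArmor profile on the Ubuntu host),
    # not libvirt itself.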
2019 Mar 12 (1 reply)
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Tue, Mar 12, 2019 at 10:52:15AM +0800, Jason Wang wrote: > > On 2019/3/11 8:48, Michael S. Tsirkin wrote: > > On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote: > > > On 2019/3/9 3:48, Andrea Arcangeli wrote: > > > > Hello Jason, > > > > > > > > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote: > > > >
2015 Jun 26 (0 replies)
[RFCv2 4/5] mm/compaction: compaction calls generic migration
Compaction calls the driver page migration interfaces instead of calling balloon migration directly. Signed-off-by: Gioh Kim <gioh.kim at lge.com>
---
 drivers/virtio/virtio_balloon.c |  1 +
 mm/compaction.c                 |  9 +++++----
 mm/migrate.c                    | 21 ++++++++++++---------
 3 files changed, 18 insertions(+), 13 deletions(-)

diff --git
2019 Mar 07 (0 replies)
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 02:09:10PM -0500, Jerome Glisse wrote:
> I thought this patch was only for anonymous memory, i.e. not file-backed?

Yes, the other common usages are on hugetlbfs/tmpfs, which also don't need to implement writeback and are obviously safe too.

> If so then set dirty is mostly useless, it would only be used for swap
> but for this you can use an unlock version to set the
2019 Mar 11 (0 replies)
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/9 3:48, Andrea Arcangeli wrote:
> Hello Jason,
>
> On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
>> Just to make sure I understand here. For boosting through huge TLB, do
>> you mean we can do that in the future (e.g. by mapping more userspace
>> pages to kernel) or it can be done by this series (only about three 4K
>> pages were vmapped
2020 Sep 25 (2 replies)
Debian client/workstation pam_mount
It is still not working.

Sep 25 13:45:46 ubuntucliente lightdm[702]: (pam_mount.c:365): pam_mount 2.14: entering auth stage
Sep 25 13:45:46 ubuntucliente org.gtk.vfs.Daemon[9012]: A connection to the bus can't be made
Sep 25 13:45:46 ubuntucliente systemd[1]: Started Session c16 of user prueba3.
Sep 25 13:45:46 ubuntucliente lightdm[702]: (pam_mount.c:568): pam_mount 2.14: entering session stage
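If it helps anyone searching later: pam_mount's own debug switch usually reveals why nothing follows the "entering auth stage" line. A sketch of the relevant config (the debug element is a standard pam_mount option):

    # /etc/security/pam_mount.conf.xml
    #   <debug enable="1" />
    # Then watch syslog/the journal on the next login for the mount attempt.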
2019 Mar 08 (3 replies)
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jason,

On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> Just to make sure I understand here. For boosting through huge TLB, do
> you mean we can do that in the future (e.g. by mapping more userspace
> pages to kernel) or it can be done by this series (only about three 4K
> pages were vmapped per virtqueue)?

When I answered about the advantages of mmu notifier and
2008 Nov 04 (7 replies)
[PATCH 1/1] Xen PV support for hugepages
This is the latest version of a patch that adds hugepage support to the Xen hypervisor in a PV environment. It is against the latest xen-unstable tree on xenbits.xensource.com. I believe this version addresses the comments made about the previous version of the patch. Hugepage support must be enabled via the hypervisor command line option "allowhugepage". It assumes the guest
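For context, options like that go on the hypervisor's own line of the boot entry, not on the dom0 kernel line; a GRUB-legacy-style sketch from that era (paths illustrative):

    # /boot/grub/menu.lst
    #   kernel /boot/xen.gz allowhugepage
    #   module /boot/vmlinuz-2.6.18-xen root=/dev/sda1 ro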