2016 Dec 14
0
Re: virsh not detecting hugepage mount; disabled by config?
Thanks a lot, you two,
yes, hugetlbfs is not mounted when libvirtd is started… It was a long
night. I put it in fstab, and since I do not allocate hugepages on regular
boots, it won't eat my RAM (that was the main consideration for doing it
like this).
As a last resort before going to sleep, I added
<qemu:arg value='-numa'/>
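For reference, a minimal sketch of the fstab approach mentioned above; the mount point is an assumption, not taken from the message:

  # /etc/fstab (example entry, mount point assumed)
  hugetlbfs  /dev/hugepages  hugetlbfs  defaults  0  0

  # apply and verify before (re)starting libvirtd
  mount -a
  mount | grep hugetlbfs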
2014 Jun 12
0
about sharing the hugepage memory segment between the host and the container
Dear all,
What I want to do is share a hugepage memory segment between the host and a container (I am trying to use the Intel DPDK package in a container). For normal memory (4 KiB pages), sharing can be achieved with memory-mapped I/O (mmap()) on the same file on the host exposed to the container (that is, the host and the container map the same file).
My questions are:
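A sketch of the usual plumbing for this kind of sharing, assuming the hugetlbfs mount is simply made visible inside the container; the paths and the bind mount are illustrative, not taken from the message:

  # on the host: a hugetlbfs mount backs the shared segment
  mount -t hugetlbfs none /dev/hugepages
  # make the same mount visible inside the container's root filesystem
  mount --bind /dev/hugepages /path/to/container/rootfs/dev/hugepages
  # host and container processes then mmap() the same file under
  # /dev/hugepages with MAP_SHARED, just as in the 4k-page case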
2015 Jan 30
4
HugePages - can't start guest that requires them
Hello All,
I'm trying to enable hugepages: I've turned off THP (Transparent Huge
Pages), enabled hugepages in memoryBacking, and set my 2 MB
hugepages count via sysctl.
I'm getting "libvirtd[5788]: Failed to autostart VM 'atlas': internal
error: Unable to find any usable hugetlbfs mount for 16777216 KiB"
where atlas is one of my guests and 16777216 KiB is the
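For reference, 16777216 KiB is 16 GiB, i.e. 8192 pages of 2 MiB; a sketch of the pieces being described, with an illustrative count and XML:

  # reserve 8192 x 2 MiB pages (16777216 KiB / 2048 KiB = 8192)
  sysctl vm.nr_hugepages=8192
  grep -i huge /proc/meminfo
  # and in the domain XML, the libvirt backing element:
  #   <memoryBacking>
  #     <hugepages/>
  #   </memoryBacking>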
2015 Jan 31
2
Re: HugePages - can't start guest that requires them
Yeah, Dominique, your wiki was one of the many docs I read through
before/during/after starting down this primrose path... thanks for writing
it. I'm an Arch user, and I couldn't find anything to indicate that qemu, as it's
compiled for Arch, will look in /etc/default/qemu-kvm. And now that I've
got the right page size, the instances are starting...
The reason I want to use the page element
2011 Mar 20
6
PATCH: Hugepage support for Domains booting with 4KB pages
We have implemented hugepage support for guests in the following manner.
In our implementation we added a parameter, hugepage_num, which is specified
in the config file of the DomU. It is the number of hugepages that the
guest is guaranteed to receive whenever the kernel asks for hugepages via
its boot-time parameter or by reserving them after booting (e.g. using
echo XX > /proc/sys/vm/nr_hugepages).
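A sketch of the two reservation paths the description refers to, as seen from inside the guest; the counts are examples and the hugepage_num line is illustrative, only the parameter name comes from the patch description:

  # in the guest: reserve hugepages at boot via the kernel command line ...
  #   hugepages=512
  # ... or after booting:
  echo 512 > /proc/sys/vm/nr_hugepages
  # the DomU config file then guarantees that many pages to the guest, e.g.
  #   hugepage_num = 512        (exact syntax assumed)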
2015 Feb 02
0
Re: HugePages - can't start guest that requires them
Regarding fine-tuning my explanation about which part of the system does the actual mounting of hugepages, you're probably right… Thanks for the correction.
On upstart systems (like Ubuntu), the mounting of hugepages is done by the init script qemu-kvm.conf.
From: G. Richard Bellamy [mailto:rbellamy at pteradigm.com]
Sent: Sunday, 1 February 2015 0:02
To: Dominique Ramaekers
CC: libvirt-users at
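On those Ubuntu systems the knob that makes the qemu-kvm job do the mount typically lives in /etc/default/qemu-kvm (the file also mentioned elsewhere in this thread); the variable name and mount point below are assumptions:

  # /etc/default/qemu-kvm (Ubuntu; variable name assumed)
  KVM_HUGEPAGES=1
  # the qemu-kvm upstart job then mounts hugetlbfs for KVM,
  # typically under /run/hugepages/kvm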
2015 Jan 31
0
Re: HugePages - can't start guest that requires them
Did you create a mount for the hugepages? If you did, that may be the problem. I did that too at first, but with libvirt it isn't necessary, and in my case it broke hugepages...
If I'm not mistaken, libvirt takes care of the hugepages mount.
A while ago I wrote a wiki on using hugepages with libvirt on Ubuntu: https://help.ubuntu.com/community/KVM%20-%20Using%20Hugepages
2015 Jan 31
0
Re: HugePages - can't start guest that requires them
On Fri, Jan 30, 2015 at 03:33:43PM -0800, G. Richard Bellamy wrote:
>Hello All,
>
>I'm trying to enable hugepages, I've turned off THP (Transparent Huge
>Pages), and enabled hugepages in memoryBacking, and set my 2MB
>hugepages count via sysctl.
>
>I'm getting "libvirtd[5788]: Failed to autostart VM 'atlas': internal
>error: Unable to find any
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
As I mentioned, I got the instances to launch... but they're only
taking HugePages from "Node 0", when I believe my setup should pull
from both nodes.
[atlas] http://sprunge.us/FSEf
[prometheus] http://sprunge.us/PJcR
2015-02-03 16:51:48
root@eanna i ~ # virsh start atlas
Domain atlas started
2015-02-03 16:51:58
root@eanna i ~ # virsh start prometheus
Domain prometheus started
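One way to see and steer the per-node split described above is the per-node sysfs interface, together with libvirt's per-node page element; the node numbers, counts, and XML are examples:

  # hugepages actually reserved on each NUMA node
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  # reserve pages on node 1 explicitly as well (count is an example)
  echo 4096 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  # in the domain XML, pages can also be tied to guest NUMA nodes, e.g.
  #   <hugepages><page size='2048' unit='KiB' nodeset='0-1'/></hugepages>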
2019 Mar 08
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jason,
On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> Just to make sure I understand here. For boosting through huge TLB, do
> you mean we can do that in the future (e.g. by mapping more userspace
> pages to the kernel) or it can be done by this series (only about three 4K
> pages were vmapped per virtqueue)?
When I answered about the advantages of mmu notifier and
2019 May 23
2
df
On Thu, 23 May 2019, Stephen John Smoogen wrote:
> I might actually be able to have a workable answer:
>
> alias drf='/usr/bin/df -x tmpfs'
/usr/bin/df \
-x autofs -x binfmt_misc -x cgroup -x configfs -x debugfs \
-x devpts -x devtmpfs -x efivarfs -x hugetlbfs -x mqueue \
-x nfsd -x proc -x pstore -x rpc_pipefs -x securityfs \
-x selinuxfs -x sysfs -x tmpfs
:-)
--
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote:
>
> On 2019/3/9 3:48, Andrea Arcangeli wrote:
> > Hello Jason,
> >
> > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> > > Just to make sure I understand here. For boosting through huge TLB, do
> > > you mean we can do that in the future (e.g. by mapping more userspace
> >
2015 Jul 04
1
[RFCv2 4/5] mm/compaction: compaction calls generic migration
On Fri, Jun 26, 2015 at 12:58 PM, Gioh Kim <gioh.kim at lge.com> wrote:
> Compaction calls the driver page migration interfaces
> instead of calling balloon migration directly.
>
> Signed-off-by: Gioh Kim <gioh.kim at lge.com>
> ---
> drivers/virtio/virtio_balloon.c | 1 +
> mm/compaction.c | 9 +++++----
> mm/migrate.c | 21
2016 Aug 24
2
Transparent HugePages question
Hello,
I have a CentOS 7 installation on baremetal with 2 CPUs, 10 cores each and
HT enabled, 128 GB RAM.
The system has transparent hugepages enabled.
cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
The system reports anonymous hugepage usage and a hugepage size of
2048 kB:
cat /proc/meminfo |grep -i hugepages|grep AnonHugePages
AnonHugePages: 35491840 kB
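For reference, the THP policy seen above ([always]) can be changed at runtime; whether that is desirable depends on the workload:

  # current policy, as reported in the message
  cat /sys/kernel/mm/transparent_hugepage/enabled
  # restrict THP to madvise()d regions, or disable it entirely
  echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
  echo never > /sys/kernel/mm/transparent_hugepage/enabled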
2008 Nov 04
7
[PATCH 1/1] Xen PV support for hugepages
This is the latest version of a patch that adds hugepage support to the Xen
hypervisor in a PV environment. It is against the latest xen-unstable tree
on xenbits.xensource.com. I believe this version addresses the comments
made about the previous version of the patch.
Hugepage support must be enabled via the hypervisor command line option
"allowhugepage".
It assumes the guest
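A sketch of where such a hypervisor option is typically passed; the boot entry and count are illustrative, only the "allowhugepage" option name comes from the patch description:

  # on the hypervisor (xen.gz) line of the boot entry, e.g. in GRUB:
  #   kernel /boot/xen.gz allowhugepage
  # inside the PV guest, hugepages are then reserved as usual:
  echo 512 > /proc/sys/vm/nr_hugepages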
2014 Jun 13
2
[Qemu-devel] Why I advise against using ivshmem
On 13/06/2014 11:26, Vincent JARDIN wrote:
>> Markus especially referred to parts *outside* QEMU: the server, the
>> uio driver, etc. These out-of-tree, non-packaged parts of ivshmem
>> are one of the reasons why Red Hat has disabled ivshmem in RHEL7.
>
> You made the right choices, these out-of-tree packages are not required.
> You can use QEMU's ivshmem