Displaying 20 results from an estimated 165 matches for "hugetlbf".
2016 Dec 13
1
virsh not detecting hugepage mount; disabled by config?
...and line:
BOOT_IMAGE=/vmlinuz-4.8.13-gentoo root=/dev/mapper/gensd-gentoo ro quiet
splash intel_iommu=on video=efifb:off,vesafb:off,simplefb:off
splash=verbose,theme:livedvd-aurora kvm.ignore_msrs=1
transparent_hugepage=never hugepages=3072 softlevel=qemuvm
My startup script outputs the following:
hugetlbfs /var/lib/hugetlbfs hugetlbfs
rw,relatime,pagesize=2097152,uid=77,gid=77,mode=0770 0 0
[2016.12.13 22:22:50 virsh 2808] ERROR Failed to start domain win10
[2016.12.13 22:22:50 virsh 2808] ERROR internal error: hugetlbfs
filesystem is not mounted or disabled by administrator config
virsh was unsucce...
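A minimal sketch of the fix eventually reached in this thread: mount hugetlbfs where libvirt expects it, then restart libvirtd so it re-scans mounts at startup. The mount point, page size, and uid/gid 77 are taken from the output above; the OpenRC restart command is an assumption based on the Gentoo kernel line.
  # mount hugetlbfs, then restart libvirtd so it sees the mount
  mount -t hugetlbfs -o pagesize=2M,uid=77,gid=77,mode=0770 hugetlbfs /var/lib/hugetlbfs
  rc-service libvirtd restart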
2016 Dec 14
0
Re: virsh not detecting hugepage mount; disabled by config?
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Thanks a lot you two,
yes, hugetlbfs is not mounted when libvirtd is started… It was a long
night. I put it in fstab, and since I do not allocate hugepages on
regular boots, it won't eat my RAM (that was the main consideration for
doing it this way).
As a last means before going to sleep, I added
<qemu:arg value='-numa'/...
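The fstab entry mentioned above might look like this (a sketch; the mount point and uid/gid are taken from the mount output in the first message, the remaining options are assumed):
  hugetlbfs  /var/lib/hugetlbfs  hugetlbfs  pagesize=2M,uid=77,gid=77,mode=0770  0  0
The mount itself reserves nothing; RAM is only consumed once pages are actually allocated via hugepages= on the kernel command line or vm.nr_hugepages, which is why it is safe to leave in fstab on regular boots.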
2015 Jan 30
4
HugePages - can't start guest that requires them
...llo All,
I'm trying to enable hugepages: I've turned off THP (Transparent Huge
Pages), enabled hugepages in memoryBacking, and set my 2MB
hugepages count via sysctl.
I'm getting "libvirtd[5788]: Failed to autostart VM 'atlas': internal
error: Unable to find any usable hugetlbfs mount for 16777216 KiB"
where atlas is one of my guests and 16777216 KiB is the amount of
memory I'm trying to give to the guest.
Yes, I can see the hugepages via numastat -m, and hugetlbfs is mounted
via /dev/hugepages and there is a dir structure
/dev/hugepages/libvirt/qemu (it's em...
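For reference, the two pieces described above might look like this; 16777216 KiB is 16 GiB, i.e. 8192 pages of 2 MiB (a sketch, not the poster's exact configuration):
  sysctl vm.nr_hugepages=8192
and in the domain XML:
  <memoryBacking>
    <hugepages/>
  </memoryBacking>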
2016 Dec 09
2
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> On 12/08/2016 08:45 PM, Li, Liang Z wrote:
> > What's the conclusion of your discussion? It seems you want some
> > statistics before deciding whether to rip the bitmap from the ABI,
> > am I right?
>
> I think Andrea and David feel pretty strongly that we should remove the
> bitmap, unless we have some data to support keeping it. I don't feel as
>
2019 Jul 26
0
[PATCH v2 6/7] mm/hmm: remove hugetlbfs check in hmm_vma_walk_pmd
walk_page_range() will only call hmm_vma_walk_hugetlb_entry() for
hugetlbfs pages and doesn't call hmm_vma_walk_pmd() in this case.
Therefore, it is safe to remove the check for vma->vm_flags & VM_HUGETLB
in hmm_vma_walk_pmd().
Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
Cc: "Jérôme Glisse" <jglisse at redhat.com>
Cc: Jason Gu...
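Schematically, the walker dispatch the commit message relies on looks like this in the 2019-era mm/pagewalk.c (paraphrased, not the patch itself): hugetlbfs VMAs are routed to the hugetlb callback before the pmd walker is ever reached.
  if (vma && is_vm_hugetlb_page(vma)) {
          /* hugetlbfs VMA: hmm_vma_walk_pmd() is never called */
          if (walk->hugetlb_entry)
                  err = walk_hugetlb_range(start, end, walk);
  } else
          err = walk_pgd_range(start, end, walk);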
2016 Dec 09
0
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
...menter.
2) allowing qemu to tell the guest to stop inflating the balloon and
report a fragmentation limit being hit, when sync compaction
powered allocations fail at a certain power-of-two order granularity
passed by qemu to the guest. This order constraint will be passed
by default for hugetlbfs guests with 2MB hpage size, while it can
be used optionally on THP backed guests. This option with THP
guests would allow high-level management software to provide a
"don't reduce guest performance" while shrinking the memory size of
the guest from the GUI. If you desele...
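For concreteness: with 4 KiB base pages, the order passed for a 2MB hugetlbfs hpage would be log2(2 MiB / 4 KiB) = log2(512) = 9, so the guest would stop inflating once it can no longer satisfy order-9 allocations.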
2015 Jan 31
2
Re: HugePages - can't start guest that requires them
...ant to target a numa node directly - in other words, I like the
idea of one VM running on Node 0, and the other running on Node 2.
Your comment about libvirt taking care of the hugepages mount isn't
consistent with my reading or experience - on a systemd-based system,
systemd takes care of the hugetlbfs mount to /dev/hugepages, and
libvirt builds the /dev/hugepages/qemu... directory structure. At least
that's what I've seen.
-rb
On Sat, Jan 31, 2015 at 11:43 AM, Dominique Ramaekers <dominique.ramaekers@cometal.be> wrote:
> Did you create a mount for the hugepages? If you...
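A quick way to verify that split of responsibilities (paths as seen earlier in this thread):
  grep hugetlbfs /proc/mounts      # the systemd-managed /dev/hugepages mount
  ls /dev/hugepages/libvirt/qemu   # directory structure created by libvirt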
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 02:09:10PM -0500, Jerome Glisse wrote:
> I thought this patch was only for anonymous memory ie not file back ?
Yes, the other common usages are on hugetlbfs/tmpfs that also don't
need to implement writeback and are obviously safe too.
> If so then set dirty is mostly useless; it would only be used for swap,
> but for this you can use an unlocked version to set the page dirty.
It's not a practical issue but a security issue perhaps: you can...
2019 May 23
2
df
On Thu, 23 May 2019, Stephen John Smoogen wrote:
> I might actually be able to have a workable answer:
>
> alias drf='/usr/bin/df -x tmpfs'
/usr/bin/df \
-x autofs -x binfmt_misc -x cgroup -x configfs -x debugfs \
-x devpts -x devtmpfs -x efivarfs -x hugetlbfs -x mqueue \
-x nfsd -x proc -x pstore -x rpc_pipefs -x securityfs \
-x selinuxfs -x sysfs -x tmpfs
:-)
--
Paul Heinlein
heinlein at madboa.com
45°38' N, 122°6' W
2015 Feb 02
0
Re: HugePages - can't start guest that requires them
...ant to target a numa node directly - in other words, I like the idea of one VM running on Node 0, and the other running on Node 2.
Your comment about libvirt taking care of the hugepages mount isn't consistent with my reading or experience - on a systemd-based system, systemd takes care of the hugetlbfs mount to /dev/hugepages, and libvirt builds the /dev/hugepages/qemu... directory structure. At least that's what I've seen.
-rb
On Sat, Jan 31, 2015 at 11:43 AM, Dominique Ramaekers <dominique.ramaekers at cometal.be> wrote:
Did y...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote:
>
> On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > +{
> > > + int i;
> > > +
> > > + for (i = 0; i < used->npages; i++)
> > > + set_page_dirty_lock(used->pages[i]);
> > This seems to rely on
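For context, the generic pin-write-dirty-release pattern this exchange is examining looks roughly like the following sketch (not the vhost patch itself; the surrounding setup and variable declarations are assumed):
  /* pin user pages for write, access them, then mark dirty and release */
  npages = get_user_pages_fast(uaddr, n, FOLL_WRITE, pages);
  /* ... write to the pages through a kernel mapping ... */
  for (i = 0; i < npages; i++) {
          /* set_page_dirty_lock() takes the page lock itself,
           * for callers that do not already hold it */
          set_page_dirty_lock(pages[i]);
          put_page(pages[i]);
  }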
2019 Mar 07
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:34:39AM -0500, Michael S. Tsirkin wrote:
> On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote:
> >
> > On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > > +{
> > > > + int i;
> > > > +
> > > > + for (i = 0; i <
2015 Jul 04
1
[RFCv2 4/5] mm/compaction: compaction calls generic migration
...notifier.h>
>
> #include <asm/tlbflush.h>
> @@ -76,7 +76,7 @@ int migrate_prep_local(void)
> * from where they were once taken off for compaction/migration.
> *
> * This function shall be used whenever the isolated pageset has been
> - * built from lru, balloon, hugetlbfs page. See isolate_migratepages_range()
> + * built from lru, driver, hugetlbfs page. See isolate_migratepages_range()
> * and isolate_huge_page().
> */
> void putback_movable_pages(struct list_head *l)
> @@ -92,8 +92,8 @@ void putback_movable_pages(struct list_head *l)
>...
2019 Sep 27
5
[PATCH] vhost: introduce mdev based hardware backend
...gt;
>
> A question here, consider we're using noiommu mode. If guest physical
> address is passed here, how can a device use that?
>
> I believe you meant "host physical address" here? And it also has the
> implication that the HPA should be contiguous (e.g. using hugetlbfs).
The comment is talking about the virtual IOMMU (i.e. iotlb in vhost).
It should be rephrased to cover the noiommu case as well. Thanks for
spotting this.
> > +
> > + switch (cmd) {
> > + case VHOST_MDEV_SET_STATE:
> > + r = vhost_set_state(m, argp);
> > + break...
2020 May 08
2
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
...2MB mapping.
>
Sure, the I/O will work OK, but is it safe?
Copy on write isn't an issue? Splitting a PMD in one process due to
mprotect of a shared page will cause other processes' page tables to be
split the same way?
Recall that these are system memory pages that could be THPs, shmem, hugetlbfs,
mmap shared file pages, etc.
2014 Jun 13
2
[Qemu-devel] Why I advise against using ivshmem
...g/archive/html/qemu-devel/2014-03/msg00581.html
The only part of ivshmem that vhost doesn't include is the n-way
inter-guest doorbell. This is the part that requires a server and uio
driver. vhost only supports host->guest and guest->host doorbells.
>> * it doesn't require hugetlbfs (which only enabled shared memory by
>> chance in older QEMU releases, that was never documented)
>
> ivshmem does not require hugetlbfs. It is optional.
>
>> * it doesn't require the kernel driver from the DPDK sample
>
> ivshmem does not require the DPDK kernel driver....