Displaying 20 results from an estimated 35 matches for "umap".
2016 Mar 04
3
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...ges
> will be processed and sent to the destination, it's not optimal.
First, the likelihood of such a situation is marginal, there's no point
optimizing for it specifically.
And second, even if that happens, you inflate the balloon right before
the migration and the free memory will get unmapped very quickly, so this
case is covered nicely by the same technique that works for more
realistic cases, too.
Roman.
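
To make the suggestion concrete, here is a minimal C sketch of the idea, assuming one bit per guest page: pages the guest gives up through the balloon are cleared from the migration bitmap, so the bulk stage never sends them. All names here (skip_ballooned_pages, migration_bitmap, balloon_free_bitmap) are hypothetical, not QEMU's actual API.

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/*
 * Hypothetical sketch: clear every page the balloon reported as free
 * from the migration dirty bitmap, so the bulk stage skips it.
 * Both bitmaps have one bit per guest page.
 */
static void skip_ballooned_pages(unsigned long *migration_bitmap,
                                 const unsigned long *balloon_free_bitmap,
                                 size_t nr_pages)
{
    size_t nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

    for (size_t i = 0; i < nr_longs; i++) {
        /* dirty AND NOT free: only pages with real content stay dirty */
        migration_bitmap[i] &= ~balloon_free_bitmap[i];
    }
}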
1998 Dec 15
4
Why does oplocks = False not seem to stop file caching?
....
; Configuration file for smbd.
; ============================================================================
[global]
printing = sysv
printcap name = /etc/printcap
load printers = yes
keepalive = 60
guest account = nobody
socket options = TCP_NODELAY
username map = /opt/samba/lib/umap
read prediction = False
oplocks = no
debug level = 11
lock directory = /opt/samba/lib/locks
kernel oplocks = no
ole locking compatibility = no
share modes = yes
locking = yes
strict locking = yes
[pc]
path = /NEWPC
public = yes
only guest = yes
writable = yes
print...
2020 Feb 05
2
[PATCH] vhost: introduce vDPA based backend
...ing through set_map()
- Reuse vhost IOTLB, so for type 1), simply forward update/invalidate
request via IOMMU API, for type 2), send IOTLB to vDPA device driver via
set_map(), device driver may choose to send diffs or rebuild all mapping
at their will
Technically we can use the vhost IOTLB API (map/unmap) for building
VHOST_SET_MEM_TABLE, but to avoid the device having to process each
request, it looks to me like we need a new UAPI, which seems suboptimal.
What are your thoughts?
Thanks
>
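
A rough C sketch of the two paths contrasted above; every type and callback here (vdpa_dev, incremental_map, set_map) is hypothetical and only illustrates the shape of the tradeoff, not the real vhost/vDPA UAPI.

#include <stdint.h>
#include <stdbool.h>

/* One IOTLB entry: a guest IOVA range mapped to a host address. */
struct iotlb_entry {
    uint64_t iova;
    uint64_t size;
    uint64_t host_addr;
    bool     write;
};

/* Hypothetical device ops for the two cases described above. */
struct vdpa_dev {
    bool has_platform_iommu;                               /* type 1 */
    int (*incremental_map)(struct vdpa_dev *d,
                           const struct iotlb_entry *e);   /* type 1 */
    int (*set_map)(struct vdpa_dev *d,
                   const struct iotlb_entry *table,
                   unsigned int n);                        /* type 2 */
};

/* Forward one IOTLB update the way the mail describes. */
static int iotlb_update(struct vdpa_dev *d,
                        const struct iotlb_entry *e,
                        const struct iotlb_entry *whole_table,
                        unsigned int table_len)
{
    if (d->has_platform_iommu)
        return d->incremental_map(d, e);    /* forward the diff */
    /* set_map() device: hand over the rebuilt table in one call */
    return d->set_map(d, whole_table, table_len);
}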
2020 Feb 05
1
[PATCH] vhost: introduce vDPA based backend
..., so for type 1), simply forward update/invalidate
>> request via IOMMU API, for type 2), send IOTLB to vDPA device driver via
>> set_map(), device driver may choose to send diffs or rebuild all mapping at
>> their will
>>
>> Technically we can use the vhost IOTLB API (map/unmap) for building
>> VHOST_SET_MEM_TABLE, but to avoid the device having to process each
>> request, it looks to me like we need a new UAPI, which seems suboptimal.
>>
>> What are your thoughts?
>>
>> Thanks
> I suspect we can't completely avoid a new UAPI.
AFAIK, memory...
2007 Apr 18
0
[PATCH 8/21] i386 Segment protect properly
It is impossible to have a zero length segment in descriptor tables using
"normal" segments. One of many ways to properly protect segments to zero
length is to map the base to an unmapped page. Create a nicer way to do
this, and stop subtracting 1 from the length passed to set_limit (note:
calling set_limit with a zero limit used to do something very bad! - not anymore).
Signed-off-by: Zachary Amsden <zach@vmware.com>
Index: linux-2.6.14-zach-work/include/asm-i386/desc.h
=======...
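
For context, a minimal C sketch of the underflow the patch guards against; the descriptor layout and names (seg_desc, set_seg_limit, UNMAPPED_PAGE_BASE) are illustrative, not the kernel's desc.h.

#include <stdint.h>

/* Illustrative descriptor with a limit field, as on i386. */
struct seg_desc {
    uint32_t base;
    uint32_t limit;   /* stores size - 1 */
};

/* Hypothetical address known to be backed by no mapping at all. */
#define UNMAPPED_PAGE_BASE 0xffc00000u

/*
 * Descriptor limits encode "size - 1", so size == 0 would underflow
 * into a full 4 GB segment -- the "something very bad" above.  Instead,
 * point a zero-size segment at an unmapped page so any access faults.
 */
static void set_seg_limit(struct seg_desc *d, uint32_t size)
{
    if (size == 0) {
        d->base  = UNMAPPED_PAGE_BASE;
        d->limit = 0;             /* 1-byte window into nothing */
    } else {
        d->limit = size - 1;
    }
}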
1998 Nov 19
0
Turning oplocks off, how to do?
...If anybody has turned this off please send
me your configuration file. Here is the latest config file I have
tried. Thanks for any help,
David.
[global]
printing = sysv
printcap name = /etc/printcap
load printers = yes
keepalive = 60
guest account = nobody
username map = /opt/samba/lib/umap
read prediction = False
oplocks = no
debug level = 11
lock directory = /opt/samba/lib/locks
kernel oplocks = no
ole locking compatibility = no
share modes = yes
locking = yes
strict locking = yes
[pc]
path = /NEWPC
public = yes
only guest = yes
writable = yes
prin...
1998 Dec 03
0
How to turn off oplocks in version 2?
...end me their smb.conf file I would really appreciate
it. Thanks in advance,
David.
(My current smb.conf file)
[global]
printing = sysv
printcap name = /etc/printcap
load printers = yes
keepalive = 60
guest account = nobody
socket options = TCP_NODELAY
username map = /opt/samba/lib/umap
read prediction = False
oplocks = no
debug level = 11
lock directory = /opt/samba/lib/locks
kernel oplocks = no
ole locking compatibility = no
share modes = yes
locking = yes
strict locking = yes
[pc]
path = /NEWPC
public = yes
only guest = yes
writable = yes
print...
2014 Feb 04
1
[RFC 07/16] drm/nouveau/bar/nvc0: support chips without BAR3
..._wo32(mem, 0x0208, lower_32_bits(nv_device_resource_len(device, 1) - 1));
> - nv_wo32(mem, 0x020c, upper_32_bits(nv_device_resource_len(device, 1) - 1));
> -
> - priv->base.alloc = nouveau_bar_alloc;
> - priv->base.kmap = nvc0_bar_kmap;
> priv->base.umap = nvc0_bar_umap;
> priv->base.unmap = nvc0_bar_unmap;
> priv->base.flush = nv84_bar_flush;
> @@ -176,12 +177,16 @@ nvc0_bar_dtor(struct nouveau_object *object)
> nouveau_gpuobj_ref(NULL, &priv->bar[1].pgd);
> nouveau_gpuobj_ref(NULL, &...
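
A small C sketch of the pattern implied by the diff above: on chips without BAR3 the kmap hook is simply never installed, so callers have to treat it as optional. The struct and names here are hypothetical, not nouveau's actual types.

#include <stddef.h>

/* Hypothetical slice of the ops table touched by the diff above. */
struct bar_ops {
    void *(*kmap)(void *bar, void *mem);   /* BAR3 only; may be NULL */
    int   (*umap)(void *bar, void *mem);
    void  (*flush)(void *bar);
};

/*
 * With no BAR3 the kmap pointer is never assigned, so any caller
 * must fall back to another access path when it is NULL.
 */
static void *bar_try_kmap(struct bar_ops *ops, void *bar, void *mem)
{
    if (ops->kmap == NULL)
        return NULL;             /* chip without BAR3: no kernel map */
    return ops->kmap(bar, mem);
}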
2016 Mar 04
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...ent to the destination, it's not optimal.
>
> First, the likelihood of such a situation is marginal, there's no point
> optimizing for it specifically.
>
> And second, even if that happens, you inflate the balloon right before
> the migration and the free memory will get unmapped very quickly, so this
> case is covered nicely by the same technique that works for more
> realistic cases, too.
Although I wonder which is cheaper; that would be fairly expensive for
the guest, wouldn't it? And you'd somehow have to kick the guest
before migration to do the balloo...
2013 Nov 12
0
[PATCH 2/7] drm/nv50-: untile mmap'd bo's
...rtions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/core/subdev/bar/nv50.c b/drivers/gpu/drm/nouveau/core/subdev/bar/nv50.c
index 160d27f..9907a25 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/bar/nv50.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/bar/nv50.c
@@ -67,7 +67,10 @@ nv50_bar_umap(struct nouveau_bar *bar, struct nouveau_mem *mem,
 	if (ret)
 		return ret;

-	nouveau_vm_map(vma, mem);
+	if (mem->pages)
+		nouveau_vm_map_sg(vma, 0, mem->size << 12, mem);
+	else
+		nouveau_vm_map(vma, mem);
 	return 0;
 }
diff --git a/drivers/gpu/drm/nouveau/core/subdev/bar/nvc0....
2020 Feb 05
0
[PATCH] vhost: introduce vDPA based backend
...- Reuse vhost IOTLB, so for type 1), simply forward update/invalidate
> request via IOMMU API, for type 2), send IOTLB to vDPA device driver via
> set_map(), device driver may choose to send diffs or rebuild all mapping at
> their will
>
> Technically we can use the vhost IOTLB API (map/unmap) for building
> VHOST_SET_MEM_TABLE, but to avoid the device having to process each
> request, it looks to me like we need a new UAPI, which seems suboptimal.
>
> What are your thoughts?
>
> Thanks
I suspect we can't completely avoid a new UAPI.
>
> >
2014 Mar 24
0
[PATCH 04/12] drm/nouveau/bar/nvc0: support chips without BAR3
...pper_32_bits(priv->bar[1].pgd->addr));
- nv_wo32(mem, 0x0208, lower_32_bits(nv_device_resource_len(device, 1) - 1));
- nv_wo32(mem, 0x020c, upper_32_bits(nv_device_resource_len(device, 1) - 1));
-
- priv->base.alloc = nouveau_bar_alloc;
- priv->base.kmap = nvc0_bar_kmap;
priv->base.umap = nvc0_bar_umap;
priv->base.unmap = nvc0_bar_unmap;
priv->base.flush = nv84_bar_flush;
@@ -201,7 +202,9 @@ nvc0_bar_init(struct nouveau_object *object)
nv_mask(priv, 0x100c80, 0x00000001, 0x00000000);
nv_wr32(priv, 0x001704, 0x80000000 | priv->bar[1].mem->addr >> 12);
-...
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...'s not optimal.
> >
> > First, the likelihood of such a situation is marginal, there's no
> > point optimizing for it specifically.
> >
> > And second, even if that happens, you inflate the balloon right before
> > the migration and the free memory will get unmapped very quickly, so
> > this case is covered nicely by the same technique that works for more
> > realistic cases, too.
>
> Although I wonder which is cheaper; that would be fairly expensive for the
> guest, wouldn't it? And you'd somehow have to kick the guest before
...
2014 Feb 01
0
[RFC 07/16] drm/nouveau/bar/nvc0: support chips without BAR3
...pper_32_bits(priv->bar[1].pgd->addr));
- nv_wo32(mem, 0x0208, lower_32_bits(nv_device_resource_len(device, 1) - 1));
- nv_wo32(mem, 0x020c, upper_32_bits(nv_device_resource_len(device, 1) - 1));
-
- priv->base.alloc = nouveau_bar_alloc;
- priv->base.kmap = nvc0_bar_kmap;
priv->base.umap = nvc0_bar_umap;
priv->base.unmap = nvc0_bar_unmap;
priv->base.flush = nv84_bar_flush;
@@ -176,12 +177,16 @@ nvc0_bar_dtor(struct nouveau_object *object)
nouveau_gpuobj_ref(NULL, &priv->bar[1].pgd);
nouveau_gpuobj_ref(NULL, &priv->bar[1].mem);
- if (priv->bar[0].vm)...
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
On Thu, Mar 03, 2016 at 05:46:15PM +0000, Dr. David Alan Gilbert wrote:
> * Liang Li (liang.z.li at intel.com) wrote:
> > The current QEMU live migration implementation marks all the
> > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > will be processed, and that takes quite a lot of CPU cycles.
> >
> > From guest's point of view,
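
A tiny C sketch of the behaviour being described, with made-up names: the bulk stage starts from an all-ones dirty bitmap, so every page gets walked whether or not it holds data.

#include <string.h>
#include <stdlib.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Hypothetical bulk-stage setup: mark every guest page dirty. */
static unsigned long *bulk_stage_bitmap(size_t nr_pages)
{
    size_t nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;
    unsigned long *bitmap = malloc(nr_longs * sizeof(*bitmap));

    if (bitmap)
        memset(bitmap, 0xff, nr_longs * sizeof(*bitmap)); /* all dirty */
    return bitmap;   /* every page will be processed, free or not */
}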