Displaying 18 results from an estimated 18 matches for "unsafe_put_user".
2018 Dec 14
3
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
...he data
itself. This idea has been used for high speed userspace backends for
years, e.g. packet socket or the recent AF_XDP. The only difference is that the
page was remapped from kernel to userspace.
> I don't
> like the idea I have to say. As a first step, why don't we switch to
> unsafe_put_user/unsafe_get_user etc?
Several reasons:
- They only have an x86 variant, so it won't make any difference for the rest
of the architectures.
- unsafe_put_user/unsafe_get_user is not sufficient for accessing
structures (e.g. accessing a descriptor) or arrays (batching).
- Unless we can batch at least the...
2018 Dec 24
2
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
...least that avoids the g.u.p mess.
I'm still not very clear on this point. We only pin 2 or 4 pages; there are
several other cases that will pin much more.
>
>>> I don't
>>> like the idea I have to say. As a first step, why don't we switch to
>>> unsafe_put_user/unsafe_get_user etc?
>>
>> Several reasons:
>>
>> - They only have an x86 variant, so it won't make any difference for the rest of
>> the architectures.
> Is there an issue on other architectures? If yes they can be extended
> there.
Consider the unexpected amount of...
2018 Dec 14
0
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
...t they are mostly
designed for privileged userspace.
> The only difference is that the page was remapped
> from kernel to userspace.
At least that avoids the g.u.p mess.
>
> > I don't
> > like the idea I have to say. As a first step, why don't we switch to
> > unsafe_put_user/unsafe_get_user etc?
>
>
> Several reasons:
>
> - They only have an x86 variant, so it won't make any difference for the rest of
> the architectures.
Is there an issue on other architectures? If yes they can be extended
there.
> - unsafe_put_user/unsafe_get_user is not sufficient...
2018 Dec 24
0
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
...'m still not very clear on this point. We only pin 2 or 4 pages; there are
> several other cases that will pin much more.
>
>
> >
> > > > I don't
> > > > like the idea I have to say. As a first step, why don't we switch to
> > > > unsafe_put_user/unsafe_get_user etc?
> > >
> > > Several reasons:
> > >
> > > - They only have an x86 variant, so it won't make any difference for the rest of
> > > the architectures.
> > Is there an issue on other architectures? If yes they can be extended
> > t...
2018 Dec 14
3
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
...This might fail; if it does, just bounce things out to
> a thread.
I'm not sure what context you meant here. Is this for the TX path of TUN?
But a fundamental difference is that my series targets extremely heavy
load, not a light one; 100% CPU for vhost is expected.
>
> 2. Switch to unsafe_put_user/unsafe_get_user,
> and batch up multiple accesses.
As I said, it only helps if we can batch accesses to at least two of the
three areas: avail, descriptor and used. It won't help to batch the
accesses to a single area like used. I'm not even sure this can be
done, considering the case...
2018 Dec 13
11
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
Hi:
This series tries to access virtqueue metadata through kernel virtual
addresses instead of the copy_user() friends, since those have too much
overhead: checks, speculation barriers or even hardware feature
toggling.
Test shows about 24% improvement on TX PPS. It should benefit other
cases as well.
Please review
Jason Wang (3):
vhost: generalize adding used elem
vhost: fine grain userspace memory
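For readers skimming the thread, the rough shape of the vmap() idea is: pin the user pages that back the virtqueue metadata and map them into the kernel's address space, so the hot path dereferences a plain kernel virtual address instead of going through the copy_user() machinery. The sketch below is illustrative only and is not the actual patches; the get_user_pages_fast() flags argument and the pin/unpin helpers have changed across kernel versions.

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /*
     * Illustrative sketch only: pin npages of user memory starting at uaddr
     * and return a kernel virtual mapping of them, or NULL on failure.
     */
    static void *map_user_metadata(unsigned long uaddr, int npages,
                                   struct page **pages)
    {
            void *vaddr;
            int pinned;

            pinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
            if (pinned != npages)
                    goto err_unpin;

            /* Contiguous kernel-side view of the pinned user pages. */
            vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
            if (!vaddr)
                    goto err_unpin;
            return vaddr;

    err_unpin:
            while (pinned-- > 0)
                    put_page(pages[pinned]);
            return NULL;
    }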
2018 Dec 13
0
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
...rheads like checks, spec barriers or even hardware feature
> toggling.
Userspace accesses through remapping tricks, and the next time there's a need
for a new barrier we are left to figure it out by ourselves. I don't
like the idea I have to say. As a first step, why don't we switch to
unsafe_put_user/unsafe_get_user etc?
That would be more of an apples-to-apples comparison, would it not?
> Test shows about 24% improvement on TX PPS. It should benefit other
> cases as well.
>
> Please review
>
> Jason Wang (3):
> vhost: generalize adding used elem
> vhost: fine gr...
2018 Dec 13
0
[PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
...mall packets
directly in an atomic context. This should cut latency
down significantly; the tricky part is to only do it
on a light load and disable it
for the streaming case, otherwise it's unfair.
This might fail; if it does, just bounce things out to
a thread.
2. Switch to unsafe_put_user/unsafe_get_user,
and batch up multiple accesses.
3. Allow adding a fixup point manually,
such that multiple independent get_user accesses
can get a single fixup (will allow better compiler
optimizations).
> Jason Wang (3):
> vhost: generalize adding used elem
> vhost:...
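A rough sketch of what suggestions 2 and 3 above amount to (a hypothetical helper, not code from this thread): several independent user accesses grouped under one user_access_begin()/user_access_end() section and sharing a single fixup label, so there is no per-access check or barrier. The user_access_begin() and unsafe_get_user() signatures have changed across kernel versions, and virtio byte-order handling is omitted, so treat the details as illustrative.

    #include <linux/uaccess.h>
    #include <linux/virtio_ring.h>

    /* Read the avail index and one ring entry with a single fixup point. */
    static int peek_avail(struct vring_avail __user *avail, u16 num,
                          u16 *idx, u16 *head)
    {
            if (!user_access_begin(avail, sizeof(*avail) + num * sizeof(u16)))
                    return -EFAULT;

            /* No access_ok() or barrier per access; both reads jump to the
             * same fixup label on fault. */
            unsafe_get_user(*idx, &avail->idx, efault);
            unsafe_get_user(*head, &avail->ring[*idx % num], efault);

            user_access_end();
            return 0;

    efault:
            user_access_end();
            return -EFAULT;
    }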
2018 Nov 02
3
[PULL] vhost: cleanups and fixes
...-underscore version. It's basically
> always a mis-optimization due to entirely historical reasons. I can
> pretty much guarantee that it's not visible in profiles.
>
> Linus
OK. So maybe we should focus on switching to user_access_begin/end +
unsafe_get_user/unsafe_put_user in a loop, which does seem to be
measurable. That moves the barrier out of the loop, which seems to be
consistent with what you would expect.
--
MST
2018 Nov 02
2
[PULL] vhost: cleanups and fixes
On Fri, Nov 02, 2018 at 11:46:36AM +0000, Mark Rutland wrote:
> On Thu, Nov 01, 2018 at 04:06:19PM -0700, Linus Torvalds wrote:
> > On Thu, Nov 1, 2018 at 4:00 PM Kees Cook <keescook at chromium.org> wrote:
> > >
> > > + memset(&rsp, 0, sizeof(rsp));
> > > + rsp.response = VIRTIO_SCSI_S_FUNCTION_REJECTED;
> > > + resp =
2018 Nov 01
5
[PULL] vhost: cleanups and fixes
On Thu, Nov 1, 2018 at 4:00 PM Kees Cook <keescook at chromium.org> wrote:
>
> + memset(&rsp, 0, sizeof(rsp));
> + rsp.response = VIRTIO_SCSI_S_FUNCTION_REJECTED;
> + resp = vq->iov[out].iov_base;
> + ret = __copy_to_user(resp, &rsp, sizeof(rsp));
>
> Is it actually safe to trust that iov_base has passed an earlier
> access_ok()
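For context on that question: copy_to_user() performs the access_ok() range check itself, while the double-underscore __copy_to_user() skips it and trusts that the caller validated the pointer earlier. A minimal, hypothetical illustration (not the vhost code; note that access_ok() dropped its VERIFY_WRITE argument around kernel 5.0):

    #include <linux/uaccess.h>

    /* Hypothetical helper: copy a kernel response buffer to a user pointer
     * taken from an iovec. */
    static int send_resp(void __user *resp, const void *rsp, size_t len)
    {
            /* Explicit range check, required because __copy_to_user() skips it.
             * Plain copy_to_user() would perform this check internally. */
            if (!access_ok(resp, len))
                    return -EFAULT;
            return __copy_to_user(resp, rsp, len) ? -EFAULT : 0;
    }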