Displaying 20 results from an estimated 52 matches for "copy_from".
2008 Aug 26
4
Samba write performance in kernel
...the memory copy is even slower. Read performance is over 7 MB/s with mmap and sendfile enabled, while write performance is only 4-5 MB/s. Without mmap and sendfile, reading from Samba is also about 4-5 MB/s. I used OProfile to profile writing a file to Samba and found that over 30% of CPU time is spent in copy_from/to_user, so I think going to user space and back again is the bottleneck.
Since there is sendfile, why isn't there a counterpart on the write path? Are there some difficulties, or is it implementable?
Please give me some advice.
Best Regards,
Mac Lin
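
The asymmetry being asked about is easy to picture in a small user-space sketch (illustrative only; this is not Samba's actual I/O path): sendfile(2) streams file data to a socket without the payload ever entering user space, while the receive direction below still pays a kernel-to-user copy on every read() and a user-to-kernel copy on every write().

/*
 * Illustrative user-space sketch (not Samba code).  The read path uses
 * sendfile(2), so the payload stays in the kernel; the write path falls
 * back to a read()/write() loop, paying a copy_to_user for every read()
 * from the socket and a copy_from_user for every write() to the file --
 * the cost the profile above attributes to copy_from/to_user.
 */
#include <sys/types.h>
#include <sys/sendfile.h>
#include <unistd.h>

/* Read path: file -> socket, handled entirely inside the kernel. */
static ssize_t send_file_to_socket(int sock_fd, int file_fd, size_t len)
{
	off_t off = 0;

	return sendfile(sock_fd, file_fd, &off, len);
}

/* Write path: socket -> file, every byte bounced through a user buffer. */
static ssize_t recv_socket_to_file(int file_fd, int sock_fd, size_t len)
{
	char buf[64 * 1024];
	size_t done = 0;

	while (done < len) {
		ssize_t n = read(sock_fd, buf, sizeof(buf)); /* kernel -> user copy */

		if (n <= 0)
			return n ? -1 : (ssize_t)done;
		if (write(file_fd, buf, (size_t)n) != n)     /* user -> kernel copy */
			return -1;
		done += (size_t)n;
	}
	return (ssize_t)done;
}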
2020 Jun 05
2
[PATCH RFC 03/13] vhost: batching fetches
..., 2020 at 03:27:39PM +0800, Jason Wang wrote:
>> On 2020/6/2 ??9:06, Michael S. Tsirkin wrote:
>>> With this patch applied, new and old code perform identically.
>>>
>>> Lots of extra optimizations are now possible, e.g.
>>> we can fetch multiple heads with copy_from/to_user now.
>>> We can get rid of maintaining the log array. Etc etc.
>>>
>>> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
>>> Signed-off-by: Eugenio Pérez <eperezma at redhat.com>
>>> Link: https://lore.kernel.org/r/20200401183118.83...
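
The batching the commit message alludes to can be pictured with a simplified sketch (hypothetical helper and batch size, not the code from the patch): instead of one user-memory access per descriptor, a run of descriptors is pulled in with a single copy_from_user() and then walked in kernel memory. The real code also follows descriptor chains and validates indices; only the single-copy idea is shown.

/*
 * Simplified sketch of batched descriptor fetching (hypothetical helper,
 * not the vhost code from this patch): one copy_from_user() brings in a
 * contiguous run of descriptors instead of one call per descriptor.
 */
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/uaccess.h>
#include <linux/virtio_ring.h>

#define FETCH_BATCH 16

struct desc_batch {
	struct vring_desc desc[FETCH_BATCH];
	unsigned int count;
};

/* 'udesc' points at the descriptor table in guest/user memory. */
static int fetch_desc_batch(struct vring_desc __user *udesc,
			    unsigned int start, unsigned int num,
			    struct desc_batch *out)
{
	unsigned int n = min_t(unsigned int, FETCH_BATCH, num - start);

	/* One user-memory access covers the whole run of descriptors. */
	if (copy_from_user(out->desc, udesc + start, n * sizeof(*out->desc)))
		return -EFAULT;

	out->count = n;
	return 0;
}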
2020 Feb 06
2
vhost changes (batched) in linux-next after 12/13 trigger random crashes in KVM guests after reboot
On Thu, Feb 06, 2020 at 04:12:21PM +0100, Christian Borntraeger wrote:
>
>
> On 06.02.20 15:22, eperezma at redhat.com wrote:
> > Hi Christian.
> >
> > Could you try this patch on top of ("38ced0208491 vhost: use batched version by default")?
> >
> > It will not solve your first random crash but it should help with the loss of network
2020 Feb 07
1
vhost changes (batched) in linux-next after 12/13 trigger random crashes in KVM guests after reboot
...0400
> >>
> >> vhost: batching fetches
> >>
> >> With this patch applied, new and old code perform identically.
> >>
> >> Lots of extra optimizations are now possible, e.g.
> >> we can fetch multiple heads with copy_from/to_user now.
> >> We can get rid of maintaining the log array. Etc etc.
> >>
> >> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
> >>
> >> drivers/vhost/test.c | 2 +-
> >> drivers/vhost/vhost.c | 39 +++++++++++++++...
2009 Aug 13
0
[PATCHv3 1/2] mm: export use_mm/unuse_mm to modules
...1
#define dprintk printk
@@ -595,51 +595,6 @@ static struct kioctx *lookup_ioctx(unsigned long ctx_id)
}
/*
- * use_mm
- * Makes the calling kernel thread take on the specified
- * mm context.
- * Called by the retry thread execute retries within the
- * iocb issuer's mm context, so that copy_from/to_user
- * operations work seamlessly for aio.
- * (Note: this routine is intended to be called only
- * from a kernel thread context)
- */
-static void use_mm(struct mm_struct *mm)
-{
- struct mm_struct *active_mm;
- struct task_struct *tsk = current;
-
- task_lock(tsk);
- active_mm = tsk->act...
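
The point of exporting the pair is to let a module's kernel thread do exactly what the removed comment describes; a minimal sketch of the intended usage (hypothetical helper, not the vhost code; the caller is assumed to already hold a reference on the mm):

/*
 * Minimal usage sketch: a kernel thread temporarily adopts the I/O
 * issuer's mm so that copy_from_user() resolves the issuer's user-space
 * addresses.  owner_mm must already be pinned by the caller.
 */
#include <linux/errno.h>
#include <linux/mm_types.h>
#include <linux/mmu_context.h>
#include <linux/uaccess.h>

static int worker_copy_request(struct mm_struct *owner_mm,
			       void __user *ureq, void *kreq, size_t len)
{
	int ret = 0;

	use_mm(owner_mm);		/* take on the issuer's mm context */
	if (copy_from_user(kreq, ureq, len))
		ret = -EFAULT;
	unuse_mm(owner_mm);		/* return to the lazy/anonymous mm */

	return ret;
}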
2009 Aug 19
0
[PATCHv4 1/2] mm: export use_mm/unuse_mm to modules
...1
#define dprintk printk
@@ -595,51 +595,6 @@ static struct kioctx *lookup_ioctx(unsigned long ctx_id)
}
/*
- * use_mm
- * Makes the calling kernel thread take on the specified
- * mm context.
- * Called by the retry thread execute retries within the
- * iocb issuer's mm context, so that copy_from/to_user
- * operations work seamlessly for aio.
- * (Note: this routine is intended to be called only
- * from a kernel thread context)
- */
-static void use_mm(struct mm_struct *mm)
-{
- struct mm_struct *active_mm;
- struct task_struct *tsk = current;
-
- task_lock(tsk);
- active_mm = tsk->act...
2009 Sep 17
0
[PATCHv3 1/2] mm: move use_mm/unuse_mm from aio.c to mm/
...1
#define dprintk printk
@@ -595,51 +595,6 @@ static struct kioctx *lookup_ioctx(unsigned long ctx_id)
}
/*
- * use_mm
- * Makes the calling kernel thread take on the specified
- * mm context.
- * Called by the retry thread execute retries within the
- * iocb issuer's mm context, so that copy_from/to_user
- * operations work seamlessly for aio.
- * (Note: this routine is intended to be called only
- * from a kernel thread context)
- */
-static void use_mm(struct mm_struct *mm)
-{
- struct mm_struct *active_mm;
- struct task_struct *tsk = current;
-
- task_lock(tsk);
- active_mm = tsk->act...
2009 Aug 11
1
[PATCHv2 1/2] mm: export use_mm/unuse_mm to modules
...1
#define dprintk printk
@@ -595,51 +595,6 @@ static struct kioctx *lookup_ioctx(unsigned long ctx_id)
}
/*
- * use_mm
- * Makes the calling kernel thread take on the specified
- * mm context.
- * Called by the retry thread execute retries within the
- * iocb issuer's mm context, so that copy_from/to_user
- * operations work seamlessly for aio.
- * (Note: this routine is intended to be called only
- * from a kernel thread context)
- */
-static void use_mm(struct mm_struct *mm)
-{
- struct mm_struct *active_mm;
- struct task_struct *tsk = current;
-
- task_lock(tsk);
- active_mm = tsk->act...
2009 Aug 27
1
[PATCHv5 1/3] mm: export use_mm/unuse_mm to modules
...1
#define dprintk printk
@@ -595,51 +595,6 @@ static struct kioctx *lookup_ioctx(unsigned long ctx_id)
}
/*
- * use_mm
- * Makes the calling kernel thread take on the specified
- * mm context.
- * Called by the retry thread execute retries within the
- * iocb issuer's mm context, so that copy_from/to_user
- * operations work seamlessly for aio.
- * (Note: this routine is intended to be called only
- * from a kernel thread context)
- */
-static void use_mm(struct mm_struct *mm)
-{
- struct mm_struct *active_mm;
- struct task_struct *tsk = current;
-
- task_lock(tsk);
- active_mm = tsk->act...
2020 Jun 08
1
[PATCH RFC 03/13] vhost: batching fetches
...rote:
>>>> On 2020/6/2 ??9:06, Michael S. Tsirkin wrote:
>>>>> With this patch applied, new and old code perform identically.
>>>>>
>>>>> Lots of extra optimizations are now possible, e.g.
>>>>> we can fetch multiple heads with copy_from/to_user now.
>>>>> We can get rid of maintaining the log array. Etc etc.
>>>>>
>>>>> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
>>>>> Signed-off-by: Eugenio Pérez <eperezma at redhat.com>
>>>>> Link: htt...
2020 Jun 03
2
[PATCH RFC 03/13] vhost: batching fetches
On 2020/6/2 ??9:06, Michael S. Tsirkin wrote:
> With this patch applied, new and old code perform identically.
>
> Lots of extra optimizations are now possible, e.g.
> we can fetch multiple heads with copy_from/to_user now.
> We can get rid of maintaining the log array. Etc etc.
>
> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
> Signed-off-by: Eugenio Pérez <eperezma at redhat.com>
> Link: https://lore.kernel.org/r/20200401183118.8334-4-eperezma at redhat.com
> Sign...
2019 Oct 12
2
[PATCH RFC v1 2/2] vhost: batching fetches
On 2019/10/11 ??9:46, Michael S. Tsirkin wrote:
> With this patch applied, new and old code perform identically.
>
> Lots of extra optimizations are now possible, e.g.
> we can fetch multiple heads with copy_from/to_user now.
> We can get rid of maintaining the log array. Etc etc.
>
> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
> ---
> drivers/vhost/test.c | 2 +-
> drivers/vhost/vhost.c | 50 ++++++++++++++++++++++++++++++++++++-------
> drivers/vhost/vhost.h |...