Displaying 7 results from an estimated 7 matches for "ea0327d".
2015 Oct 22
4
[PATCH net-next RFC 1/2] vhost: introduce vhost_has_work()
...as_work(struct vhost_dev *dev)
+{
+ return !list_empty(&dev->work_list);
+}
+EXPORT_SYMBOL_GPL(vhost_has_work);
+
void vhost_poll_queue(struct vhost_poll *poll)
{
vhost_work_queue(poll->dev, &poll->work);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 4772862..ea0327d 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -37,6 +37,7 @@ struct vhost_poll {
void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn);
void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work);
+bool vhost_has_work(struct vhost_dev *dev);
void vho...
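For context, a minimal sketch of how a caller might use the helper above as a busy-loop exit condition; the function and parameter names (vhost_busy_poll_sketch, busyloop_timeout_us) and the local_clock()-based microsecond budget are illustrative assumptions, not code from the series, and the snippet is not compile-tested:

#include <linux/sched.h>	/* need_resched(), local_clock(), cpu_relax() */
#include "vhost.h"		/* vhost_has_work() added by the patch above */

/* Illustrative only: spin until work is queued or the us budget expires. */
static void vhost_busy_poll_sketch(struct vhost_dev *dev,
				   unsigned long busyloop_timeout_us)
{
	/* local_clock() returns ns; >> 10 gives cheap, roughly-us resolution. */
	unsigned long endtime = (local_clock() >> 10) + busyloop_timeout_us;

	while (!need_resched() && !vhost_has_work(dev) &&
	       (local_clock() >> 10) < endtime)
		cpu_relax();
}

Because vhost_has_work() only reads the work list, the loop can break out as soon as other work is queued instead of burning the whole budget.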
2015 Oct 22
0
[PATCH net-next RFC 1/2] vhost: introduce vhost_has_work()
...> {
> vhost_work_queue(poll->dev, &poll->work);
This doesn't take a lock so it's unreliable.
I think it's ok in this case since it's just
an optimization - but pls document this.
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 4772862..ea0327d 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -37,6 +37,7 @@ struct vhost_poll {
>
> void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn);
> void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work);
> +bool vhost_has_work(...
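One way the documentation requested here could look on the helper itself; the comment wording is a suggestion based on the review above (the list is read without taking a lock), not text from the series:

/*
 * Lockless hint: work_list is read without the lock that
 * vhost_work_queue() uses, so the result may race with a concurrent
 * queueing.  Callers must treat this purely as an optimization, e.g.
 * to cut a busy loop short, never as a correctness guarantee.
 */
bool vhost_has_work(struct vhost_dev *dev)
{
	return !list_empty(&dev->work_list);
}
EXPORT_SYMBOL_GPL(vhost_has_work);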
2015 Nov 12
5
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
Hi all:
This series tries to add basic busy polling for vhost net. The idea is
simple: at the end of tx/rx processing, busy poll for newly added tx
descriptors and for the rx socket for a while. The maximum amount of
time (in us) that can be spent busy polling is specified via ioctl.
Tests were done with:
- 50 us as busy loop timeout
- Netperf 2.6
- Two machines connected back to back with ixgbe NICs
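The V3 summary above moves the busy-loop budget from a module parameter to an ioctl. A rough sketch of the kind of handler that implies follows; the ioctl name and number, the reuse of struct vhost_vring_state, and the vq->busyloop_timeout field are assumptions for illustration, not taken from the series:

#include <linux/uaccess.h>	/* copy_from_user() */
#include <linux/vhost.h>	/* VHOST_VIRTIO, struct vhost_vring_state */
#include "vhost.h"		/* struct vhost_virtqueue */

/* Illustrative only: per-virtqueue busy-loop budget in microseconds. */
#define VHOST_SET_VRING_BUSYLOOP_TIMEOUT_SKETCH \
	_IOW(VHOST_VIRTIO, 0x23, struct vhost_vring_state)

static long vhost_set_busyloop_timeout_sketch(struct vhost_virtqueue *vq,
					      void __user *argp)
{
	struct vhost_vring_state s;

	if (copy_from_user(&s, argp, sizeof(s)))
		return -EFAULT;
	/* busyloop_timeout is assumed to be a per-vq field added by the series. */
	vq->busyloop_timeout = s.num;
	return 0;
}

A per-vring ioctl lets userspace tune or disable polling per device instead of relying on a single module-wide knob.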
2015 Oct 29
4
[PATCH net-next rfc V2 0/2] basic busy polling support for vhost_net
Hi all:
This series tries to add basic busy polling for vhost net. The idea is
simple: at the end of tx processing, busy poll for newly added tx
descriptors and for the rx socket for a while. The maximum amount of
time (in us) that can be spent busy polling is specified through a
module parameter.
Tests were done with:
- 50 us as busy loop timeout
- Netperf 2.6
- Two machines connected back to back with ixgbe NICs
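In V2 the budget is a module parameter rather than an ioctl. A minimal sketch of how such a knob is typically declared follows; the parameter name and the 50 us default (borrowed from the test setup above) are assumptions for illustration, not taken from the series:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Illustrative only: module-wide busy-loop budget in microseconds. */
static unsigned int busyloop_timeout = 50;
module_param(busyloop_timeout, uint, 0644);
MODULE_PARM_DESC(busyloop_timeout,
		 "Maximum time in us spent busy polling at the end of tx processing");

Being module-wide, this applies to every vhost_net instance on the host, which the V3 posting above replaces with a per-device ioctl.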