Displaying 7 results from an estimated 7 matches for "40us".
2023 May 22
1
[PATCH] virtio-fs: Improved request latencies when Virtio queue is full
When the Virtio queue is full, a work item is scheduled
to execute in 1ms that retries adding the request to the queue.
This is a large amount of time on the scale on which a
virtio-fs device can operate. When using a DPU this is around
40us baseline without going to a remote server (4k, QD=1).
This patch queues requests when the Virtio queue is full,
and when a completed request is taken off, immediately fills
it back up with queued requests.
This reduces the 99.9th percentile latencies in our tests by
60x and slightly increases the...
2023 Jul 03
2
[PATCH V4] virtio-fs: Improved request latencies when Virtio queue is full
When the Virtio queue is full, a work item is scheduled
to execute in 1ms that retries adding the request to the queue.
This is a large amount of time on the scale on which a
virtio-fs device can operate. When using a DPU this is around
40us baseline without going to a remote server (4k, QD=1).
This patch queues requests when the Virtio queue is full,
and when a completed request is taken off, immediately fills
it back up with queued requests.
This reduces the 99.9th percentile latencies in our tests by
60x and slightly increases the...
2023 Jun 01
2
[PATCH V2] virtio-fs: Improved request latencies when Virtio queue is full
...io queue is full, a work item is scheduled
> > >> to execute in 1ms that retries adding the request to the queue.
> > >> This is a large amount of time on the scale on which a
> > >> virtio-fs device can operate. When using a DPU this is around
> > >> 40us baseline without going to a remote server (4k, QD=1).
> > >> This patch queues requests when the Virtio queue is full,
> > >> and when a completed request is taken off, immediately fills
> > >> it back up with queued requests.
> > >>
> > >>...
2023 May 31
1
[PATCH V2] virtio-fs: Improved request latencies when Virtio queue is full
> >> When the Virtio queue is full, a work item is scheduled
> >> to execute in 1ms that retries adding the request to the queue.
> >> This is a large amount of time on the scale on which a
> >> virtio-fs device can operate. When using a DPU this is around
> >> 40us baseline without going to a remote server (4k, QD=1).
> >> This patch queues requests when the Virtio queue is full,
> >> and when a completed request is taken off, immediately fills
> >> it back up with queued requests.
> >>
> >> This reduces the 99.9th...
2023 May 31
1
[PATCH V2] virtio-fs: Improved request latencies when Virtio queue is full
When the Virtio queue is full, a work item is scheduled
to execute in 1ms that retries adding the request to the queue.
This is a large amount of time on the scale on which a
virtio-fs device can operate. When using a DPU this is around
40us baseline without going to a remote server (4k, QD=1).
This patch queues requests when the Virtio queue is full,
and when a completed request is taken off, immediately fills
it back up with queued requests.
This reduces the 99.9th percentile latencies in our tests by
60x and slightly increases the...
2023 Jun 01
1
[PATCH V2] virtio-fs: Improved request latencies when Virtio queue is full
...is full, a work item is scheduled
>>>>> to execute in 1ms that retries adding the request to the queue.
>>>>> This is a large amount of time on the scale on which a
>>>>> virtio-fs device can operate. When using a DPU this is around
>>>>> 40us baseline without going to a remote server (4k, QD=1).
>>>>> This patch queues requests when the Virtio queue is full,
>>>>> and when a completed request is taken off, immediately fills
>>>>> it back up with queued requests.
>>>>>
>>>>> ...
2007 Aug 10
14
Live migration: 2500ms downtime
...10.10.241.44: icmp_seq=99 ttl=64 time=0.039 ms
64 bytes from 10.10.241.44: icmp_seq=125 ttl=64 time=0.195 ms
64 bytes from 10.10.241.44: icmp_seq=126 ttl=64 time=0.263 ms
64 bytes from 10.10.241.44: icmp_seq=127 ttl=64 time=0.210 ms
As you can see, the response time before the migration is around 40us, and
after, it's 200us, which is understandable since the VM is now in another
physical host.
The problem is the 25 lost packets between the last phase of the migration.
Don't get me wrong: 2.5s is a very good time, but 50 times higher than what
it is told to be, isn't....