Displaying 8 results from an estimated 8 matches for "flush_ops".
2011 Sep 09
7
[PATCH] xen-blk[front|back] FUA additions.
I am proposing these two patches for 3.2. They allow the backend
to process REQ_FUA requests as well; prior to these patches
it only handled REQ_FLUSH. There is also a bug fix for the logic
of how barriers/flushes were handled.
The patches are based on a branch which also has the 'feature-discard'
patches, so they won't apply natively on top of 3.1-rc5.
Please review and...
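
For context, a minimal sketch (not the actual blkback code, whose dispatch function is more involved) of how a backend might map these guest opcodes onto the 3.x-era Linux write flags, assuming the REQ_FLUSH/REQ_FUA scheme that replaced barriers in 2.6.37:

#include <linux/errno.h>
#include <linux/fs.h>
#include <xen/interface/io/blkif.h>

/* One plausible mapping from guest opcodes to 3.x-era write flags: a
 * cache flush becomes an empty REQ_FLUSH write, and a barrier write
 * becomes a flush plus forced unit access (FUA). */
static int blkif_op_to_rw(unsigned int op)
{
	switch (op) {
	case BLKIF_OP_WRITE:
		return WRITE;            /* plain write */
	case BLKIF_OP_FLUSH_DISKCACHE:
		return WRITE_FLUSH;      /* drain the disk's write cache */
	case BLKIF_OP_WRITE_BARRIER:
		return WRITE_FLUSH_FUA;  /* preflush + FUA on the data */
	default:
		return -EOPNOTSUPP;      /* caller fails the request */
	}
}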
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list,
This is V4 of the trim support for xen-blkfront/blkback.
We have now renamed BLKIF_OP_TRIM to BLKIF_OP_DISCARD, dropped all
the "trim" naming in the patches, and use "discard" instead.
We also updated the blkif_x86_{32|64}_request helpers, since otherwise
we would run into problems when using a non-native protocol.
This patch set has been tested with both an SSD and a raw file;
with the SSD we will...
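
As an aside, a self-contained sketch of the translation problem those helpers solve: a 64-bit backend must copy requests out of a 32-bit guest's wire format into its native layout before acting on them. The struct layouts below are illustrative; the real ones live in xen/interface/io/blkif.h and also carry a segment array:

#include <stdint.h>

/* The 32-bit guest layout packs differently from the 64-bit native one,
 * which is why the copy helpers exist at all. */
struct blkif_x86_32_request {
	uint8_t  operation;      /* BLKIF_OP_* */
	uint8_t  nr_segments;
	uint16_t handle;
	uint64_t id;
	uint64_t sector_number;
} __attribute__((packed));

struct blkif_request {
	uint8_t  operation;
	uint8_t  nr_segments;
	uint16_t handle;
	uint64_t id;
	uint64_t sector_number;
};

/* Copy a request from the 32-bit wire format into the backend's native
 * layout, so a 64-bit backend can serve a 32-bit frontend. The discard
 * patches extend helpers like this to copy the discard fields as well. */
static void get_x86_32_req(struct blkif_request *dst,
			   const struct blkif_x86_32_request *src)
{
	dst->operation     = src->operation;
	dst->nr_segments   = src->nr_segments;
	dst->handle        = src->handle;
	dst->id            = src->id;
	dst->sector_number = src->sector_number;
}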
2019 Nov 30
5
[PATCH nbdkit 0/3] filters: stats: More useful, more friendly
- Use friendlier output, with sizes in GiB and rates in MiB/s.
- Measure time per operation, providing finer-grained stats.
- Add the missing stats for flush.
I hope that these changes will help to understand and improve virt-v2v
performance.
Nir Soffer (3):
filters: stats: Show size in GiB, rate in MiB/s
filters: stats: Measure time per operation
filters: stats: Add flush stats
filters/stats/stats.c | 117
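
For illustration, a simplified, self-contained sketch of the per-operation timing the series adds; this is not the nbdkit filter API, just the accounting idea behind filters/stats/stats.c. In the real filter this would be hooked into each callback (pread, pwrite, flush and friends):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Accumulated statistics for one operation type (pread, flush, ...). */
struct op_stats {
	uint64_t ops;    /* number of calls */
	uint64_t bytes;  /* total bytes transferred (0 for flush) */
	uint64_t usecs;  /* total wall-clock time in microseconds */
};

static uint64_t now_usecs(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

/* Record one timed operation; callers save now_usecs() before the call. */
static void record(struct op_stats *s, uint64_t bytes, uint64_t start)
{
	s->ops++;
	s->bytes += bytes;
	s->usecs += now_usecs() - start;
}

/* Report totals the way the series describes: GiB totals, MiB/s rates. */
static void print_stats(const char *name, const struct op_stats *s)
{
	double secs = s->usecs / 1e6;
	printf("%s: %" PRIu64 " ops, %.3f GiB, %.3f MiB/s\n",
	       name, s->ops, s->bytes / (double)(1ULL << 30),
	       secs > 0 ? s->bytes / (double)(1 << 20) / secs : 0.0);
}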
2013 May 13
22
[PATCH] xen-blk(front|back): Handle large physical sector disks
I realized by accident today that any domUs using the paravirt disk driver
potentially suffer from poor performance when they are handed a physical
volume and partitioning is done inside the guest. The physical volume passed in
has to be one that has the compat 512-byte logical sector size but hints at its
real sector size (e.g. 4096) as the physical sector size.
In dom0 the handling is correct and...
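
For reference, a minimal sketch of how a frontend can pass both values to the block layer; blk_queue_logical_block_size() and blk_queue_physical_block_size() are real block-layer helpers, while the wrapper below is hypothetical:

#include <linux/blkdev.h>

/* Advertise both geometries: keep the compatible 512-byte logical size
 * so existing guests keep working, but hint the device's real sector
 * size so in-guest partitioners and filesystems align to it. */
static void set_sector_sizes(struct request_queue *q,
			     unsigned int logical, unsigned int physical)
{
	blk_queue_logical_block_size(q, logical);    /* e.g. 512 */
	blk_queue_physical_block_size(q, physical);  /* e.g. 4096 */
}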
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
This patch implements persistent grants for the xen-blk{front,back}
mechanism. The effect of this change is to reduce the number of unmap
operations performed, since they cause a (costly) TLB shootdown. This
allows the I/O performance to scale better when a large number of VMs
are performing I/O.
Previously, the blkfront driver was supplied a bvec[] from the request
queue. This was granted to...
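
To make the idea concrete, a rough sketch (with illustrative structures, not the driver's actual ones) of a persistent-grant pool: pages stay granted for the life of the connection and are reused, so there is no per-request unmap and hence no TLB shootdown:

#include <linux/list.h>
#include <xen/grant_table.h>

struct persistent_gnt {
	struct page *page;     /* page that stays granted to the other end */
	grant_ref_t gref;      /* its long-lived grant reference */
	struct list_head node;
};

/* Take an already-granted page from the pool instead of granting a new
 * one; reusing it means no fresh grant now and, crucially, no unmap
 * when the request completes. */
static struct persistent_gnt *get_persistent_gnt(struct list_head *pool)
{
	struct persistent_gnt *gnt;

	if (list_empty(pool))
		return NULL;   /* caller falls back to granting a fresh page */

	gnt = list_first_entry(pool, struct persistent_gnt, node);
	list_del(&gnt->node);
	return gnt;
}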
2012 Mar 05
11
[PATCH 0001/001] xen: multi page ring support for block devices
From: Santosh Jodh <santosh.jodh at citrix.com>
Add support for multi-page rings for block devices.
The number of pages is configurable for blkback via a module parameter.
blkback reports max-ring-page-order to blkfront via xenstore.
blkfront reports its supported ring-page-order to blkback via xenstore.
blkfront reports its multi-page ring references via ring-refNN keys in xenstore.
The change allows...
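
For illustration, a sketch of how a frontend might publish those keys; xenbus_printf() is the real xenbus helper, while the surrounding function is hypothetical:

#include <linux/log2.h>
#include <xen/xenbus.h>

/* Publish one "ring-refNN" key per ring page, plus the negotiated order. */
static int write_ring_refs(struct xenbus_transaction xbt, const char *dir,
			   grant_ref_t *ring_ref, unsigned int nr_pages)
{
	char key[16];
	unsigned int i;
	int err;

	for (i = 0; i < nr_pages; i++) {
		snprintf(key, sizeof(key), "ring-ref%u", i);
		err = xenbus_printf(xbt, dir, key, "%u", ring_ref[i]);
		if (err)
			return err;
	}
	/* The page count is exchanged as a power of two (the "order"). */
	return xenbus_printf(xbt, dir, "ring-page-order", "%u",
			     ilog2(nr_pages));
}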