Displaying 8 unique results from an estimated 15 matches for "write_barri".
2011 Jun 21
13
VM disk I/O limit patch
...= 1;
@@ -367,14 +416,14 @@ handle_request:
switch (req.operation) {
case BLKIF_OP_READ:
blkif->st_rd_req++;
- ret = dispatch_rw_block_io(blkif, &req, pending_req);
+ ret = dispatch_rw_block_io(blkif, &req, pending_req, &last_done_nr_sects);
break;
case BLKIF_OP_WRITE_BARRIER:
blkif->st_br_req++;
/* fall through */
case BLKIF_OP_WRITE:
blkif->st_wr_req++;
- ret = dispatch_rw_block_io(blkif, &req, pending_req);
+ ret = dispatch_rw_block_io(blkif, &req, pending_req, &last_done_nr_sects);
break;
case BLKIF_OP_PACKET:
DPRINTK...
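The hunk above threads a new &last_done_nr_sects argument through
dispatch_rw_block_io(), presumably so the backend can account completed
sectors against a per-VM limit. As a rough illustration of the accounting
side only, here is a minimal token-bucket sketch in C; every name in it
(io_limit, io_limit_charge, refill_tokens) is hypothetical and not taken
from the patch.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-VM limiter state; not from the patch above. */
struct io_limit {
    uint64_t tokens;         /* sectors we may still issue */
    uint64_t max_tokens;     /* bucket capacity (burst size) */
    uint64_t rate;           /* sectors replenished per millisecond */
    uint64_t last_refill_ms; /* timestamp of the last refill */
};

static void refill_tokens(struct io_limit *l, uint64_t now_ms)
{
    uint64_t elapsed = now_ms - l->last_refill_ms;

    l->tokens += elapsed * l->rate;
    if (l->tokens > l->max_tokens)
        l->tokens = l->max_tokens;
    l->last_refill_ms = now_ms;
}

/* Returns true if nr_sects may be dispatched now, false to throttle. */
static bool io_limit_charge(struct io_limit *l, uint64_t nr_sects,
                            uint64_t now_ms)
{
    refill_tokens(l, now_ms);
    if (l->tokens < nr_sects)
        return false;        /* caller would requeue and retry later */
    l->tokens -= nr_sects;
    return true;
}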
2008 Nov 08
0
No subject
...r policies to schedule BIOs.
> - Policies which fit SSDs.
> e.g.)
> - Guarantee response time.
> - Guarantee throughput.
> - Policies which fit high-end storage or hardware RAID storage.
> - Some LUNs may share the same bandwidth.
> - Support WRITE_BARRIER when the device-mapper layer supports it.
> - Implement the algorithm of dm-ioband in the block I/O layer
> experimentally.
>
> bio-cgroup
> ==========
>
> Bio-cgroup is a BIO tracking mechanism, which is implemented on the
> cgroup memory subsystem. With the mecha...
2011 Sep 08
3
blkfront: barrier: empty write op failed
I have some Xen systems running Xen-4.1.1, with dom0 on patched linux-2.6.38 (Gentoo's xen-sources) and domUs running linux-3.0.4 (vanilla sources from kernel.org).
Block devices are phy on LVM2 volumes on DRBD-8.3.9 devices.
Not immediately after boot, but after some I/O load on the disks, I start seeing these messages in the domUs:
blkfront: barrier: empty write xvdb1 op failed
blkfront:
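For context, a backend typically answers a barrier request that carries
no data with BLKIF_RSP_EOPNOTSUPP; the frontend then logs the message
above and stops issuing barriers. A simplified userspace sketch of that
fallback path, with stand-in types rather than the real blkfront
structures:

#include <stdio.h>

#define BLKIF_OP_WRITE_BARRIER  2    /* as in Xen's blkif.h */
#define BLKIF_RSP_EOPNOTSUPP  (-2)

struct blkfront_dev {
    const char *name;        /* e.g. "xvdb1" */
    int feature_barrier;     /* 1 while barriers are believed to work */
};

struct blkif_response {
    int operation;
    int status;
};

static void handle_response(struct blkfront_dev *dev,
                            const struct blkif_response *rsp)
{
    if (rsp->operation == BLKIF_OP_WRITE_BARRIER &&
        rsp->status == BLKIF_RSP_EOPNOTSUPP) {
        /* Backend refused the empty barrier write: log and fall back. */
        fprintf(stderr, "blkfront: barrier: empty write %s op failed\n",
                dev->name);
        dev->feature_barrier = 0;   /* stop issuing barrier requests */
    }
}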
2008 Nov 13
6
[PATCH 0/8] I/O bandwidth controller and BIO tracking
...sn't so active.
TODO:
- Other policies to schedule BIOs.
- Policies which fit SSDs.
e.g.)
- Guarantee response time.
- Guarantee throughput.
- Policies which fit high-end storage or hardware RAID storage.
- Some LUNs may share the same bandwidth.
- Support WRITE_BARRIER when the device-mapper layer supports it.
- Implement the algorithm of dm-ioband in the block I/O layer
experimentally.
bio-cgroup
==========
Bio-cgroup is a BIO tracking mechanism, which is implemented on the
cgroup memory subsystem. With this mechanism, it is possible to determine
which cgro...
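The core idea being described is attributing each BIO to the cgroup that
dirtied its pages, even when the write is issued later by a kernel
thread. A toy C sketch of that attribution, with illustrative types
rather than the actual bio-cgroup patch:

#include <stddef.h>

struct page_info {
    int owner_cgroup_id;     /* recorded when the page is charged */
};

struct bio_vec_s {
    struct page_info *page;
};

struct bio_s {
    struct bio_vec_s *vecs;
    size_t vec_count;
};

/* Attribute a BIO to a cgroup via the page that carries its data. */
static int bio_cgroup_id(const struct bio_s *bio)
{
    if (bio->vec_count == 0)
        return 0;            /* no data pages: fall back to the root cgroup */
    return bio->vecs[0].page->owner_cgroup_id;
}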
2008 Jan 23
7
[PATCH 0/2] dm-band: The I/O bandwidth controller: Overview
...and2 default group 10
Remove band devices
-------------------
Remove the band devices when no longer used.
# dmsetup remove band1
# dmsetup remove band2
TODO
========================
- Cgroup support.
- Control read and write requests separately.
- Support WRITE_BARRIER.
- Optimization.
- More configuration tools. Or is the dmsetup command sufficient?
- Other policies to schedule BIOs. Or is the weight policy sufficient?
Thanks,
Ryo Tsuruta
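Dm-band's weight policy shares a device's bandwidth among groups in
proportion to their configured weights (such as the "10" in the setup
lines above). A rough sketch of what proportional token refill might
look like; the names are hypothetical, not dm-band internals:

#include <stdint.h>

struct band_group {
    uint32_t weight;         /* share configured via dmsetup */
    int64_t  tokens;         /* remaining I/O budget this period */
};

/* Refill every group's budget for the next period, split by weight. */
static void band_refill(struct band_group *grps, int n,
                        int64_t period_tokens)
{
    int64_t total = 0;

    for (int i = 0; i < n; i++)
        total += grps[i].weight;
    for (int i = 0; i < n; i++)
        grps[i].tokens = period_tokens * grps[i].weight / total;
}

/* A group may dispatch a BIO while it still has budget. */
static int band_may_dispatch(struct band_group *g)
{
    if (g->tokens <= 0)
        return 0;            /* hold the BIO until the next refill */
    g->tokens--;
    return 1;
}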
2010 Sep 15
15
xenpaging fixes for kernel and hypervisor
Patrick,
the following patches fix xenpaging for me.
Grant-table handling is incomplete. If a page is gone, GNTST_eagain
should be returned to the caller to indicate that the hypercall has to
be retried after a while, until the page is available again.
Please review.
Olaf
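The retry contract described above might look roughly like this in C;
GNTST_eagain is the real Xen status code, while grant_op() and
delay_ms() are hypothetical stand-ins:

#define GNTST_eagain (-12)   /* value as in Xen's public grant_table.h */

extern int grant_op(void *op);      /* hypothetical hypercall wrapper */
extern void delay_ms(unsigned ms);  /* hypothetical back-off helper */

static int grant_op_retry(void *op)
{
    int rc;

    do {
        rc = grant_op(op);
        if (rc == GNTST_eagain)
            delay_ms(10);    /* wait for xenpaging to bring the page back */
    } while (rc == GNTST_eagain);

    return rc;
}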
2013 Oct 28
5
FreeBSD PVH guest support
...xn0: Ethernet address: 00:16:3e:0b:a4:b1
xenbusb_back0: <Xen Backend Devices> on xenstore0
xctrl0: <Xen Control Device> on xenstore0
xn0: backend features: feature-sg feature-gso-tcp4
xbd0: 20480MB <Virtual Block Device> at device/vbd/51712 on xenbusb_front0
xbd0: features: flush, write_barrier
xbd0: synchronize cache commands enabled.
GEOM: new disk xbd0
random: unblocking device.
Netvsc initializing...
SMP: AP CPU #5 Launched!
SMP: AP CPU #2 Launched!
SMP: AP CPU #1 Launched!
SMP: AP CPU #3 Launched!
SMP: AP CPU #6 Launched!
SMP: AP CPU #4 Launched!
TSC timecounter discards lower 1 bi...
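The "features: flush, write_barrier" line reflects feature negotiation
over xenstore: the backend advertises keys such as "feature-barrier" and
"feature-flush-cache", and the frontend enables what it finds. A hedged
sketch with a hypothetical xs_read_int() helper:

struct xbd_features {
    int flush;
    int write_barrier;
};

/* Hypothetical helper: read an integer key from the backend's
 * xenstore directory; returns 0 on success. */
extern int xs_read_int(const char *backend_path, const char *key,
                       int *value);

static void xbd_negotiate(const char *backend_path,
                          struct xbd_features *f)
{
    int v;

    f->flush = (xs_read_int(backend_path, "feature-flush-cache", &v) == 0
                && v != 0);
    f->write_barrier = (xs_read_int(backend_path, "feature-barrier", &v) == 0
                        && v != 0);
}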
2008 Feb 05
2
[PATCH 0/2] dm-ioband v0.0.3: The I/O bandwidth controller: Introduction
Hi everyone,
This is dm-ioband version 0.0.3 release.
Dm-ioband is an I/O bandwidth controller implemented as a device-mapper driver,
which gives specified bandwidth to each job running on the same physical
device.
Changes since 0.0.2 (23rd January):
- Ported to linux-2.6.24.
- Renamed this device-mapper device to "ioband."
- The output format of "dmsetup