search for: iterate_devices

Displaying 20 results from an estimated 24 matches for "iterate_devices".

2019 Jun 01
1
[PATCH v10 4/7] dm: enable synchronous dax
...unsigned i; > + bool dax_sync = true; > > /* Ensure that all targets support DAX. */ > for (i = 0; i < dm_table_get_num_targets(t); i++) { > @@ -901,7 +908,14 @@ static bool dm_table_supports_dax(struct dm_table *t) > if (!ti->type->iterate_devices || > !ti->type->iterate_devices(ti, device_supports_dax, NULL)) > return false; > + > + /* Check devices support synchronous DAX */ > + if (dax_sync && > + !ti->type->iter...
2019 Jun 10
2
[PATCH v11 4/7] dm: enable synchronous dax
...dm_target *ti; > unsigned i; > + bool dax_sync = true; > > /* Ensure that all targets support DAX. */ > for (i = 0; i < dm_table_get_num_targets(t); i++) { > @@ -906,7 +913,14 @@ bool dm_table_supports_dax(struct dm_table *t, int blocksize) > !ti->type->iterate_devices(ti, device_supports_dax, > &blocksize)) > return false; > + > + /* Check devices support synchronous DAX */ > + if (dax_sync && > + !ti->type->iterate_devices(ti, device_synchronous, NULL)) > + dax_sync = false; > } > + if (dax_syn...
2019 May 14
0
[PATCH v9 4/7] dm: enable synchronous dax
...upports_dax(struct dm_table *t) { struct dm_target *ti; unsigned i; + bool dax_sync = true; /* Ensure that all targets support DAX. */ for (i = 0; i < dm_table_get_num_targets(t); i++) { @@ -901,7 +908,14 @@ static bool dm_table_supports_dax(struct dm_table *t) if (!ti->type->iterate_devices || !ti->type->iterate_devices(ti, device_supports_dax, NULL)) return false; + + /* Check devices support synchronous DAX */ + if (dax_sync && + !ti->type->iterate_devices(ti, device_synchronous, NULL)) + dax_sync = false; } + if (dax_sync) + set_dax_synchr...
2019 May 21
0
[PATCH v10 4/7] dm: enable synchronous dax
...upports_dax(struct dm_table *t) { struct dm_target *ti; unsigned i; + bool dax_sync = true; /* Ensure that all targets support DAX. */ for (i = 0; i < dm_table_get_num_targets(t); i++) { @@ -901,7 +908,14 @@ static bool dm_table_supports_dax(struct dm_table *t) if (!ti->type->iterate_devices || !ti->type->iterate_devices(ti, device_supports_dax, NULL)) return false; + + /* Check devices support synchronous DAX */ + if (dax_sync && + !ti->type->iterate_devices(ti, device_synchronous, NULL)) + dax_sync = false; } + if (dax_sync) + set_dax_synchr...
2019 Jun 11
0
[PATCH v12 4/7] dm: enable synchronous dax
...patch sets dax device 'DAXDEV_SYNC' flag if all the target devices of device mapper support synchronous DAX. If device mapper consists of both synchronous and asynchronous dax devices, we don't set 'DAXDEV_SYNC' flag. 'dm_table_supports_dax' is refactored to pass 'iterate_devices_fn' as argument so that the callers can pass the appropriate functions. Suggested-by: Mike Snitzer <snitzer at redhat.com> Signed-off-by: Pankaj Gupta <pagupta at redhat.com> --- drivers/md/dm-table.c | 24 ++++++++++++++++++------ drivers/md/dm.c | 2 +- drivers/md/dm.h...
2019 Jun 11
0
[Qemu-devel] [PATCH v11 4/7] dm: enable synchronous dax
...t is strange > to have a getter have a side-effect of being a setter too. Overloading > like this could get you in trouble in the future. > > Are you certain this is what you want? I agree with you. > > Or would it be better to refactor dm_table_supports_dax() to take an > iterate_devices_fn arg and have callers pass the appropriate function? > Then have dm_table_set_restrictions() caller do: > > if (dm_table_supports_dax(t, device_synchronous, NULL)) > set_dax_synchronous(t->md->dax_dev); > > (NULL arg implies dm_table_supports_dax() re...
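The refactor suggested in the review above, making dm_table_supports_dax() take an iterate_devices_fn argument so that one walker serves both the plain-DAX and synchronous-DAX checks, can be sketched outside the kernel. The structs, fields, and callout signature below are simplified stand-ins chosen for illustration, not the real kernel definitions:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures involved. */
struct dm_dev { bool dax; bool dax_synchronous; };
struct dm_target { struct dm_dev *dev; };
struct dm_table { struct dm_target *targets; unsigned num_targets; };

/* Per-device predicate, in the spirit of iterate_devices_fn. */
typedef bool (*iterate_devices_fn)(struct dm_target *ti, void *data);

/* Callouts mirroring device_supports_dax() / device_synchronous(). */
static bool device_supports_dax(struct dm_target *ti, void *data)
{
    (void)data;
    return ti->dev->dax;
}

static bool device_synchronous(struct dm_target *ti, void *data)
{
    (void)data;
    return ti->dev->dax_synchronous;
}

/* One walker parameterized by the predicate, as in the refactored
 * dm_table_supports_dax(t, fn, data): true only if every target passes. */
static bool table_supports(struct dm_table *t, iterate_devices_fn fn, void *data)
{
    for (unsigned i = 0; i < t->num_targets; i++)
        if (!fn(&t->targets[i], data))
            return false;
    return true;
}
```

With this shape, the caller side reads as the review proposes: `if (table_supports(t, device_synchronous, NULL))` set the synchronous flag, and the getter no longer doubles as a setter.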
2019 Jun 10
0
[PATCH v11 4/7] dm: enable synchronous dax
...able *t, int blocksize) { struct dm_target *ti; unsigned i; + bool dax_sync = true; /* Ensure that all targets support DAX. */ for (i = 0; i < dm_table_get_num_targets(t); i++) { @@ -906,7 +913,14 @@ bool dm_table_supports_dax(struct dm_table *t, int blocksize) !ti->type->iterate_devices(ti, device_supports_dax, &blocksize)) return false; + + /* Check devices support synchronous DAX */ + if (dax_sync && + !ti->type->iterate_devices(ti, device_synchronous, NULL)) + dax_sync = false; } + if (dax_sync) + set_dax_synchronous(t->md->dax_de...
2019 May 21
9
[PATCH v10 0/7] virtio pmem driver
This patch series is ready to be merged via the nvdimm tree as discussed with Dan. We have ack/review on XFS, EXT4 & VIRTIO patches. Need an ack on the device mapper change in patch 4. Mike, can you please review patch 4, which has the change for dax with device mapper. Incorporated all the changes suggested in v9. This version has minor changes in patch 2 (virtio) and does not change the
2019 May 14
12
[PATCH v9 0/7] virtio pmem driver
Hi Dan, Proposing the patch series to be merged via the nvdimm tree as kindly agreed by you. We have ack/review on XFS, EXT4 & VIRTIO patches. Incorporated all the changes suggested in v8. This version adds a new patch 4 for the dax device mapper change and some minor style changes in patch 2. Kept all the reviews. Request to please merge the series. --- This patch series has
2019 Jun 10
8
[PATCH v11 0/7] virtio pmem driver
This patch series is ready to be merged via the nvdimm tree as discussed with Dan. We have ack/review on XFS, EXT4 & VIRTIO patches. Need an ack on the device mapper change in patch 4. Mike, can you please review and ack patch 4. This version does not have any additional code change from v10 and is only a rebase of v10 on Linux 5.2-rc4, which is required for patch 4. Keeping all the existing
2013 Dec 12
10
[PATCH 0/4] Turn-key PV-GRUB2 installation
This patch set should make it easier to maintain PV-GRUB2 installations. The general idea is based on discussions I had with Xen developers (mainly Ian Jackson) at the Ubuntu Developer Summit in May 2011; though I never did manage to get the core port done and Vladimir beat me to that, I think the configuration approach we discussed there is still valid and useful. The idea here is that people
2019 Jun 21
7
[PATCH v14 0/7] virtio pmem driver
This patch series is ready to be merged via the nvdimm tree as discussed with Dan. We have ack/review on XFS, EXT4, device mapper & VIRTIO patches. This version has a fix for the test bot build failure. Keeping all the existing r-o-bs. Jakob, CCed, also tested the patch series and confirmed that it works as of v9. --- This patch series has implementation for "virtio pmem". "virtio
2019 Jul 05
8
[PATCH v15 0/7] virtio pmem driver
Hi Dan, This series has only a change in patch 2 for a linux-next build failure. There is no functional change. Keeping all the existing review/acks and reposting the patch series for merging via the libnvdimm tree. --- This patch series has implementation for "virtio pmem". "virtio pmem" is fake persistent memory (nvdimm) in the guest which allows bypassing the guest page
2019 Jun 12
8
[PATCH v13 0/7] virtio pmem driver
This patch series is ready to be merged via the nvdimm tree as discussed with Dan. We have ack/review on XFS, EXT4, device mapper & VIRTIO patches. This version has minor changes in patch 2. Keeping all the existing r-o-bs. Jakob, CCed, also tested the patch series and confirmed that it works as of v9. --- This patch series has implementation for "virtio pmem". "virtio pmem"
2019 Jun 11
9
[PATCH v12 0/7] virtio pmem driver
This patch series is ready to be merged via the nvdimm tree as discussed with Dan. We have ack/review on XFS, EXT4 & VIRTIO patches. The device mapper change is also reviewed. Mike, can you please provide an ack for the device mapper change, i.e. patch 4. This version has a changed implementation for patch 4 as suggested by Mike. Keeping all the existing r-o-bs. Jakob, CCed, also tested the
2009 Jul 31
1
[PATCH] dm-ioband-v1.12.3: I/O bandwidth controller
...->private; + struct request_queue *q = bdev_get_queue(gp->c_dev->bdev); + + if (!q->merge_bvec_fn) + return max_size; + + bvm->bi_bdev = gp->c_dev->bdev; + bvm->bi_sector -= ti->begin; + + return min(max_size, q->merge_bvec_fn(q, bvm, biovec)); +} + +static int ioband_iterate_devices(struct dm_target *ti, + iterate_devices_callout_fn fn, void *data) +{ + struct ioband_group *gp = ti->private; + + return fn(ti, gp->c_dev, 0, ti->len, data); +} + +static struct target_type ioband_target = { + .name = "ioband", + .module = THIS_MODULE, + .version...
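The ioband excerpt above shows the provider side of the same contract: each target type implements .iterate_devices, applying the supplied callout to every underlying device the target maps. A minimal stand-alone sketch of that shape follows; the types, fields, and the single-device ioband_group here are simplified stand-ins, not the real kernel definitions:

```c
#include <stddef.h>

struct dm_dev { const char *name; };
struct dm_target;

/* Callout signature in the spirit of iterate_devices_callout_fn:
 * invoked once per underlying device, with the mapped range. */
typedef int (*iterate_devices_callout_fn)(struct dm_target *ti,
                                          struct dm_dev *dev,
                                          long long start, long long len,
                                          void *data);

struct dm_target {
    void *private;        /* per-target state (here: struct ioband_group) */
    long long begin, len; /* region of the table this target maps */
};

struct ioband_group { struct dm_dev *c_dev; };

/* Mirrors ioband_iterate_devices(): this target maps exactly one
 * underlying device, so the callout runs once over [0, ti->len). */
static int sketch_iterate_devices(struct dm_target *ti,
                                  iterate_devices_callout_fn fn, void *data)
{
    struct ioband_group *gp = ti->private;
    return fn(ti, gp->c_dev, 0, ti->len, data);
}

/* Example callout: count the devices visited. */
static int count_device(struct dm_target *ti, struct dm_dev *dev,
                        long long start, long long len, void *data)
{
    (void)ti; (void)dev; (void)start; (void)len;
    (*(int *)data)++;
    return 1; /* nonzero: this device passes the check */
}
```

A multi-device target would instead loop over its devices and stop as soon as the callout returns zero, which is what lets table-level walkers like dm_table_supports_dax() short-circuit on the first failing device.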
2009 Jul 30
1
[PATCH] dm-ioband-v1.12.2: I/O bandwidth controller
...->private; + struct request_queue *q = bdev_get_queue(gp->c_dev->bdev); + + if (!q->merge_bvec_fn) + return max_size; + + bvm->bi_bdev = gp->c_dev->bdev; + bvm->bi_sector -= ti->begin; + + return min(max_size, q->merge_bvec_fn(q, bvm, biovec)); +} + +static int ioband_iterate_devices(struct dm_target *ti, + iterate_devices_callout_fn fn, void *data) +{ + struct ioband_group *gp = ti->private; + + return fn(ti, gp->c_dev, 0, ti->len, data); +} + +static struct target_type ioband_target = { + .name = "ioband", + .module = THIS_MODULE, + .version...