Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 00/11] linux: towards virtio-1 guest support
This patchset is a step towards implementing both virtio-1 compliant and transitional virtio drivers in Linux.

Branch available at:

git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux virtio-1

This is based on some old patches by Rusty to handle extended feature bits and endianness conversions. Thomas implemented the new virtio-ccw transport revision command, and I hacked up some further endianness handling and virtio-ccw enablement. A lot is probably still missing, but I can run a virtio-ccw guest that enables virtio-1 accesses if the host supports it (via the qemu host patchset) - virtio-net and virtio-blk only so far.

I consider this patchset a starting point for further discussions.

Cornelia Huck (5):
  virtio: endianness conversion helpers
  virtio: allow transports to get avail/used addresses
  virtio_blk: use virtio v1.0 endian
  KVM: s390: virtio-ccw revision 1 SET_VQ
  KVM: s390: enable virtio-ccw revision 1

Rusty Russell (5):
  virtio: use u32, not bitmap for struct virtio_device's features
  virtio: add support for 64 bit features.
  virtio_ring: implement endian reversal based on VERSION_1 feature.
  virtio_config: endian conversion for v1.0.
  virtio_net: use v1.0 endian.

Thomas Huth (1):
  KVM: s390: Set virtio-ccw transport revision

 drivers/block/virtio_blk.c             |   4 +
 drivers/char/virtio_console.c          |   2 +-
 drivers/lguest/lguest_device.c         |  16 +--
 drivers/net/virtio_net.c               |  31 +++--
 drivers/remoteproc/remoteproc_virtio.c |   7 +-
 drivers/s390/kvm/kvm_virtio.c          |  10 +-
 drivers/s390/kvm/virtio_ccw.c          | 165 ++++++++++++++++++++-----
 drivers/virtio/virtio.c                |  22 ++--
 drivers/virtio/virtio_mmio.c           |  20 +--
 drivers/virtio/virtio_pci.c            |   8 +-
 drivers/virtio/virtio_ring.c           | 213 +++++++++++++++++++++---------
 include/linux/virtio.h                 |  46 ++++++-
 include/linux/virtio_config.h          |  17 +--
 include/uapi/linux/virtio_config.h     |   3 +
 tools/virtio/linux/virtio.h            |  22 +---
 tools/virtio/linux/virtio_config.h     |   2 +-
 tools/virtio/virtio_test.c             |   5 +-
 tools/virtio/vringh_test.c             |  16 +--
 18 files changed, 428 insertions(+), 181 deletions(-)

--
1.7.9.5
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 01/11] virtio: use u32, not bitmap for struct virtio_device's features
From: Rusty Russell <rusty at rustcorp.com.au> It seemed like a good idea, but it's actually a pain when we get more than 32 feature bits. Just change it to a u32 for now. Cc: Brian Swetland <swetland at google.com> Cc: Christian Borntraeger <borntraeger at de.ibm.com> Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> Acked-by: Pawel Moll <pawel.moll at arm.com> Acked-by: Ohad Ben-Cohen <ohad at wizery.com> --- drivers/char/virtio_console.c | 2 +- drivers/lguest/lguest_device.c | 8 ++++---- drivers/remoteproc/remoteproc_virtio.c | 2 +- drivers/s390/kvm/kvm_virtio.c | 2 +- drivers/s390/kvm/virtio_ccw.c | 23 +++++++++-------------- drivers/virtio/virtio.c | 10 +++++----- drivers/virtio/virtio_mmio.c | 8 ++------ drivers/virtio/virtio_pci.c | 3 +-- drivers/virtio/virtio_ring.c | 2 +- include/linux/virtio.h | 3 +-- include/linux/virtio_config.h | 2 +- tools/virtio/linux/virtio.h | 22 +--------------------- tools/virtio/linux/virtio_config.h | 2 +- tools/virtio/virtio_test.c | 5 ++--- tools/virtio/vringh_test.c | 16 ++++++++-------- 15 files changed, 39 insertions(+), 71 deletions(-) diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c index b585b47..c4a437e 100644 --- a/drivers/char/virtio_console.c +++ b/drivers/char/virtio_console.c @@ -355,7 +355,7 @@ static inline bool use_multiport(struct ports_device *portdev) */ if (!portdev->vdev) return 0; - return portdev->vdev->features[0] & (1 << VIRTIO_CONSOLE_F_MULTIPORT); + return portdev->vdev->features & (1 << VIRTIO_CONSOLE_F_MULTIPORT); } static DEFINE_SPINLOCK(dma_bufs_lock); diff --git a/drivers/lguest/lguest_device.c b/drivers/lguest/lguest_device.c index d0a1d8a..c831c47 100644 --- a/drivers/lguest/lguest_device.c +++ b/drivers/lguest/lguest_device.c @@ -137,14 +137,14 @@ static void lg_finalize_features(struct virtio_device *vdev) vring_transport_features(vdev); /* - * The vdev->feature array is a Linux bitmask: this isn't the same as a - * the simple array of bits used by lguest devices for features. So we - * do this slow, manual conversion which is completely general. + * Since lguest is currently x86-only, we're little-endian. That + * means we could just memcpy. But it's not time critical, and in + * case someone copies this code, we do it the slow, obvious way. */ memset(out_features, 0, desc->feature_len); bits = min_t(unsigned, desc->feature_len, sizeof(vdev->features)) * 8; for (i = 0; i < bits; i++) { - if (test_bit(i, vdev->features)) + if (vdev->features & (1 << i)) out_features[i / 8] |= (1 << (i % 8)); } diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c index a34b506..dafaf38 100644 --- a/drivers/remoteproc/remoteproc_virtio.c +++ b/drivers/remoteproc/remoteproc_virtio.c @@ -231,7 +231,7 @@ static void rproc_virtio_finalize_features(struct virtio_device *vdev) * Remember the finalized features of our vdev, and provide it * to the remote processor once it is powered on. 
*/ - rsc->gfeatures = vdev->features[0]; + rsc->gfeatures = vdev->features; } static void rproc_virtio_get(struct virtio_device *vdev, unsigned offset, diff --git a/drivers/s390/kvm/kvm_virtio.c b/drivers/s390/kvm/kvm_virtio.c index a134965..d747ca4 100644 --- a/drivers/s390/kvm/kvm_virtio.c +++ b/drivers/s390/kvm/kvm_virtio.c @@ -106,7 +106,7 @@ static void kvm_finalize_features(struct virtio_device *vdev) memset(out_features, 0, desc->feature_len); bits = min_t(unsigned, desc->feature_len, sizeof(vdev->features)) * 8; for (i = 0; i < bits; i++) { - if (test_bit(i, vdev->features)) + if (vdev->features & (1 << i)) out_features[i / 8] |= (1 << (i % 8)); } } diff --git a/drivers/s390/kvm/virtio_ccw.c b/drivers/s390/kvm/virtio_ccw.c index d2c0b44..c5acd19 100644 --- a/drivers/s390/kvm/virtio_ccw.c +++ b/drivers/s390/kvm/virtio_ccw.c @@ -701,7 +701,6 @@ static void virtio_ccw_finalize_features(struct virtio_device *vdev) { struct virtio_ccw_device *vcdev = to_vc_device(vdev); struct virtio_feature_desc *features; - int i; struct ccw1 *ccw; ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL); @@ -715,19 +714,15 @@ static void virtio_ccw_finalize_features(struct virtio_device *vdev) /* Give virtio_ring a chance to accept features. */ vring_transport_features(vdev); - for (i = 0; i < sizeof(*vdev->features) / sizeof(features->features); - i++) { - int highbits = i % 2 ? 32 : 0; - features->index = i; - features->features = cpu_to_le32(vdev->features[i / 2] - >> highbits); - /* Write the feature bits to the host. */ - ccw->cmd_code = CCW_CMD_WRITE_FEAT; - ccw->flags = 0; - ccw->count = sizeof(*features); - ccw->cda = (__u32)(unsigned long)features; - ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_FEAT); - } + features->index = 0; + features->features = cpu_to_le32(vdev->features); + /* Write the feature bits to the host. */ + ccw->cmd_code = CCW_CMD_WRITE_FEAT; + ccw->flags = 0; + ccw->count = sizeof(*features); + ccw->cda = (__u32)(unsigned long)features; + ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_FEAT); + out_free: kfree(features); kfree(ccw); diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c index fed0ce1..601efc3 100644 --- a/drivers/virtio/virtio.c +++ b/drivers/virtio/virtio.c @@ -49,9 +49,9 @@ static ssize_t features_show(struct device *_d, /* We actually represent this as a bitstring, as it could be * arbitrary length in future. */ - for (i = 0; i < ARRAY_SIZE(dev->features)*BITS_PER_LONG; i++) + for (i = 0; i < sizeof(dev->features)*8; i++) len += sprintf(buf+len, "%c", - test_bit(i, dev->features) ? '1' : '0'); + dev->features & (1ULL << i) ? '1' : '0'); len += sprintf(buf+len, "\n"); return len; } @@ -131,18 +131,18 @@ static int virtio_dev_probe(struct device *_d) device_features = dev->config->get_features(dev); /* Features supported by both device and driver into dev->features. */ - memset(dev->features, 0, sizeof(dev->features)); + dev->features = 0; for (i = 0; i < drv->feature_table_size; i++) { unsigned int f = drv->feature_table[i]; BUG_ON(f >= 32); if (device_features & (1 << f)) - set_bit(f, dev->features); + dev->features |= (1 << f); } /* Transport features always preserved to pass to finalize_features. 
*/ for (i = VIRTIO_TRANSPORT_F_START; i < VIRTIO_TRANSPORT_F_END; i++) if (device_features & (1 << i)) - set_bit(i, dev->features); + dev->features |= (1 << i); dev->config->finalize_features(dev); diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index c600ccf..de47734 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -155,16 +155,12 @@ static u32 vm_get_features(struct virtio_device *vdev) static void vm_finalize_features(struct virtio_device *vdev) { struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); - int i; /* Give virtio_ring a chance to accept features. */ vring_transport_features(vdev); - for (i = 0; i < ARRAY_SIZE(vdev->features); i++) { - writel(i, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL); - writel(vdev->features[i], - vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); - } + writel(0, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL); + writel(vdev->features, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); } static void vm_get(struct virtio_device *vdev, unsigned offset, diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index 3d1463c..bb8d0ab 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -123,8 +123,7 @@ static void vp_finalize_features(struct virtio_device *vdev) vring_transport_features(vdev); /* We only support 32 feature bits. */ - BUILD_BUG_ON(ARRAY_SIZE(vdev->features) != 1); - iowrite32(vdev->features[0], vp_dev->ioaddr+VIRTIO_PCI_GUEST_FEATURES); + iowrite32(vdev->features, vp_dev->ioaddr+VIRTIO_PCI_GUEST_FEATURES); } /* virtio config->get() implementation */ diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 4d08f45a..94eb463 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -835,7 +835,7 @@ void vring_transport_features(struct virtio_device *vdev) break; default: /* We don't understand this bit. */ - clear_bit(i, vdev->features); + vdev->features &= ~(1 << i); } } } diff --git a/include/linux/virtio.h b/include/linux/virtio.h index b46671e..4b8380a 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -93,8 +93,7 @@ struct virtio_device { const struct virtio_config_ops *config; const struct vringh_config_ops *vringh_config; struct list_head vqs; - /* Note that this is a Linux set_bit-style bitmap. */ - unsigned long features[1]; + u32 features; void *priv; }; diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h index e8f8f71..d300c02 100644 --- a/include/linux/virtio_config.h +++ b/include/linux/virtio_config.h @@ -93,7 +93,7 @@ static inline bool virtio_has_feature(const struct virtio_device *vdev, if (fbit < VIRTIO_TRANSPORT_F_START) virtio_check_driver_offered_feature(vdev, fbit); - return test_bit(fbit, vdev->features); + return vdev->features & (1 << fbit); } static inline diff --git a/tools/virtio/linux/virtio.h b/tools/virtio/linux/virtio.h index 5a2d1f0..72bff70 100644 --- a/tools/virtio/linux/virtio.h +++ b/tools/virtio/linux/virtio.h @@ -6,31 +6,11 @@ /* TODO: empty stubs for now. Broken but enough for virtio_ring.c */ #define list_add_tail(a, b) do {} while (0) #define list_del(a) do {} while (0) - -#define BIT_WORD(nr) ((nr) / BITS_PER_LONG) -#define BITS_PER_BYTE 8 -#define BITS_PER_LONG (sizeof(long) * BITS_PER_BYTE) -#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG)) - -/* TODO: Not atomic as it should be: - * we don't use this for anything important. 
*/ -static inline void clear_bit(int nr, volatile unsigned long *addr) -{ - unsigned long mask = BIT_MASK(nr); - unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr); - - *p &= ~mask; -} - -static inline int test_bit(int nr, const volatile unsigned long *addr) -{ - return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1))); -} /* end of stubs */ struct virtio_device { void *dev; - unsigned long features[1]; + u32 features; }; struct virtqueue { diff --git a/tools/virtio/linux/virtio_config.h b/tools/virtio/linux/virtio_config.h index 5049967..1f1636b 100644 --- a/tools/virtio/linux/virtio_config.h +++ b/tools/virtio/linux/virtio_config.h @@ -2,5 +2,5 @@ #define VIRTIO_TRANSPORT_F_END 32 #define virtio_has_feature(dev, feature) \ - test_bit((feature), (dev)->features) + ((dev)->features & (1 << feature)) diff --git a/tools/virtio/virtio_test.c b/tools/virtio/virtio_test.c index 00ea679..db3437c 100644 --- a/tools/virtio/virtio_test.c +++ b/tools/virtio/virtio_test.c @@ -60,7 +60,7 @@ void vhost_vq_setup(struct vdev_info *dev, struct vq_info *info) { struct vhost_vring_state state = { .index = info->idx }; struct vhost_vring_file file = { .index = info->idx }; - unsigned long long features = dev->vdev.features[0]; + unsigned long long features = dev->vdev.features; struct vhost_vring_addr addr = { .index = info->idx, .desc_user_addr = (uint64_t)(unsigned long)info->vring.desc, @@ -113,8 +113,7 @@ static void vdev_info_init(struct vdev_info* dev, unsigned long long features) { int r; memset(dev, 0, sizeof *dev); - dev->vdev.features[0] = features; - dev->vdev.features[1] = features >> 32; + dev->vdev.features = features; dev->buf_size = 1024; dev->buf = malloc(dev->buf_size); assert(dev->buf); diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c index 14a4f4c..b6c9ee3 100644 --- a/tools/virtio/vringh_test.c +++ b/tools/virtio/vringh_test.c @@ -304,7 +304,7 @@ static int parallel_test(unsigned long features, close(to_guest[1]); close(to_host[0]); - gvdev.vdev.features[0] = features; + gvdev.vdev.features = features; gvdev.to_host_fd = to_host[1]; gvdev.notifies = 0; @@ -449,13 +449,13 @@ int main(int argc, char *argv[]) bool fast_vringh = false, parallel = false; getrange = getrange_iov; - vdev.features[0] = 0; + vdev.features = 0; while (argv[1]) { if (strcmp(argv[1], "--indirect") == 0) - vdev.features[0] |= (1 << VIRTIO_RING_F_INDIRECT_DESC); + vdev.features |= (1 << VIRTIO_RING_F_INDIRECT_DESC); else if (strcmp(argv[1], "--eventidx") == 0) - vdev.features[0] |= (1 << VIRTIO_RING_F_EVENT_IDX); + vdev.features |= (1 << VIRTIO_RING_F_EVENT_IDX); else if (strcmp(argv[1], "--slow-range") == 0) getrange = getrange_slow; else if (strcmp(argv[1], "--fast-vringh") == 0) @@ -468,7 +468,7 @@ int main(int argc, char *argv[]) } if (parallel) - return parallel_test(vdev.features[0], getrange, fast_vringh); + return parallel_test(vdev.features, getrange, fast_vringh); if (posix_memalign(&__user_addr_min, PAGE_SIZE, USER_MEM) != 0) abort(); @@ -483,7 +483,7 @@ int main(int argc, char *argv[]) /* Set up host side. */ vring_init(&vrh.vring, RINGSIZE, __user_addr_min, ALIGN); - vringh_init_user(&vrh, vdev.features[0], RINGSIZE, true, + vringh_init_user(&vrh, vdev.features, RINGSIZE, true, vrh.vring.desc, vrh.vring.avail, vrh.vring.used); /* No descriptor to get yet... */ @@ -652,13 +652,13 @@ int main(int argc, char *argv[]) } /* Test weird (but legal!) indirect. 
*/ - if (vdev.features[0] & (1 << VIRTIO_RING_F_INDIRECT_DESC)) { + if (vdev.features & (1 << VIRTIO_RING_F_INDIRECT_DESC)) { char *data = __user_addr_max - USER_MEM/4; struct vring_desc *d = __user_addr_max - USER_MEM/2; struct vring vring; /* Force creation of direct, which we modify. */ - vdev.features[0] &= ~(1 << VIRTIO_RING_F_INDIRECT_DESC); + vdev.features &= ~(1 << VIRTIO_RING_F_INDIRECT_DESC); vq = vring_new_virtqueue(0, RINGSIZE, ALIGN, &vdev, true, __user_addr_min, never_notify_host, -- 1.7.9.5
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 02/11] virtio: add support for 64 bit features.
From: Rusty Russell <rusty at rustcorp.com.au> Change the u32 to a u64, and make sure to use 1ULL everywhere! Cc: Brian Swetland <swetland at google.com> Cc: Christian Borntraeger <borntraeger at de.ibm.com> [Thomas Huth: fix up virtio-ccw get_features] Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> Acked-by: Pawel Moll <pawel.moll at arm.com> Acked-by: Ohad Ben-Cohen <ohad at wizery.com> --- drivers/char/virtio_console.c | 2 +- drivers/lguest/lguest_device.c | 10 +++++----- drivers/remoteproc/remoteproc_virtio.c | 5 ++++- drivers/s390/kvm/kvm_virtio.c | 10 +++++----- drivers/s390/kvm/virtio_ccw.c | 29 ++++++++++++++++++++++++----- drivers/virtio/virtio.c | 12 ++++++------ drivers/virtio/virtio_mmio.c | 14 +++++++++----- drivers/virtio/virtio_pci.c | 5 ++--- drivers/virtio/virtio_ring.c | 2 +- include/linux/virtio.h | 2 +- include/linux/virtio_config.h | 8 ++++---- tools/virtio/linux/virtio.h | 2 +- tools/virtio/linux/virtio_config.h | 2 +- 13 files changed, 64 insertions(+), 39 deletions(-) diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c index c4a437e..f9c6288 100644 --- a/drivers/char/virtio_console.c +++ b/drivers/char/virtio_console.c @@ -355,7 +355,7 @@ static inline bool use_multiport(struct ports_device *portdev) */ if (!portdev->vdev) return 0; - return portdev->vdev->features & (1 << VIRTIO_CONSOLE_F_MULTIPORT); + return portdev->vdev->features & (1ULL << VIRTIO_CONSOLE_F_MULTIPORT); } static DEFINE_SPINLOCK(dma_bufs_lock); diff --git a/drivers/lguest/lguest_device.c b/drivers/lguest/lguest_device.c index c831c47..4d29bcd 100644 --- a/drivers/lguest/lguest_device.c +++ b/drivers/lguest/lguest_device.c @@ -94,17 +94,17 @@ static unsigned desc_size(const struct lguest_device_desc *desc) } /* This gets the device's feature bits. */ -static u32 lg_get_features(struct virtio_device *vdev) +static u64 lg_get_features(struct virtio_device *vdev) { unsigned int i; - u32 features = 0; + u64 features = 0; struct lguest_device_desc *desc = to_lgdev(vdev)->desc; u8 *in_features = lg_features(desc); /* We do this the slow but generic way. */ - for (i = 0; i < min(desc->feature_len * 8, 32); i++) + for (i = 0; i < min(desc->feature_len * 8, 64); i++) if (in_features[i / 8] & (1 << (i % 8))) - features |= (1 << i); + features |= (1ULL << i); return features; } @@ -144,7 +144,7 @@ static void lg_finalize_features(struct virtio_device *vdev) memset(out_features, 0, desc->feature_len); bits = min_t(unsigned, desc->feature_len, sizeof(vdev->features)) * 8; for (i = 0; i < bits; i++) { - if (vdev->features & (1 << i)) + if (vdev->features & (1ULL << i)) out_features[i / 8] |= (1 << (i % 8)); } diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c index dafaf38..627737e 100644 --- a/drivers/remoteproc/remoteproc_virtio.c +++ b/drivers/remoteproc/remoteproc_virtio.c @@ -207,7 +207,7 @@ static void rproc_virtio_reset(struct virtio_device *vdev) } /* provide the vdev features as retrieved from the firmware */ -static u32 rproc_virtio_get_features(struct virtio_device *vdev) +static u64 rproc_virtio_get_features(struct virtio_device *vdev) { struct rproc_vdev *rvdev = vdev_to_rvdev(vdev); struct fw_rsc_vdev *rsc; @@ -227,6 +227,9 @@ static void rproc_virtio_finalize_features(struct virtio_device *vdev) /* Give virtio_ring a chance to accept features */ vring_transport_features(vdev); + /* Make sure we don't have any features > 32 bits! 
*/ + BUG_ON((u32)vdev->features != vdev->features); + /* * Remember the finalized features of our vdev, and provide it * to the remote processor once it is powered on. diff --git a/drivers/s390/kvm/kvm_virtio.c b/drivers/s390/kvm/kvm_virtio.c index d747ca4..6d4cbea 100644 --- a/drivers/s390/kvm/kvm_virtio.c +++ b/drivers/s390/kvm/kvm_virtio.c @@ -80,16 +80,16 @@ static unsigned desc_size(const struct kvm_device_desc *desc) } /* This gets the device's feature bits. */ -static u32 kvm_get_features(struct virtio_device *vdev) +static u64 kvm_get_features(struct virtio_device *vdev) { unsigned int i; - u32 features = 0; + u64 features = 0; struct kvm_device_desc *desc = to_kvmdev(vdev)->desc; u8 *in_features = kvm_vq_features(desc); - for (i = 0; i < min(desc->feature_len * 8, 32); i++) + for (i = 0; i < min(desc->feature_len * 8, 64); i++) if (in_features[i / 8] & (1 << (i % 8))) - features |= (1 << i); + features |= (1ULL << i); return features; } @@ -106,7 +106,7 @@ static void kvm_finalize_features(struct virtio_device *vdev) memset(out_features, 0, desc->feature_len); bits = min_t(unsigned, desc->feature_len, sizeof(vdev->features)) * 8; for (i = 0; i < bits; i++) { - if (vdev->features & (1 << i)) + if (vdev->features & (1ULL << i)) out_features[i / 8] |= (1 << (i % 8)); } } diff --git a/drivers/s390/kvm/virtio_ccw.c b/drivers/s390/kvm/virtio_ccw.c index c5acd19..4173b59 100644 --- a/drivers/s390/kvm/virtio_ccw.c +++ b/drivers/s390/kvm/virtio_ccw.c @@ -660,11 +660,12 @@ static void virtio_ccw_reset(struct virtio_device *vdev) kfree(ccw); } -static u32 virtio_ccw_get_features(struct virtio_device *vdev) +static u64 virtio_ccw_get_features(struct virtio_device *vdev) { struct virtio_ccw_device *vcdev = to_vc_device(vdev); struct virtio_feature_desc *features; - int ret, rc; + int ret; + u64 rc; struct ccw1 *ccw; ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL); @@ -677,7 +678,6 @@ static u32 virtio_ccw_get_features(struct virtio_device *vdev) goto out_free; } /* Read the feature bits from the host. */ - /* TODO: Features > 32 bits */ features->index = 0; ccw->cmd_code = CCW_CMD_READ_FEAT; ccw->flags = 0; @@ -691,6 +691,16 @@ static u32 virtio_ccw_get_features(struct virtio_device *vdev) rc = le32_to_cpu(features->features); + /* Read second half feature bits from the host. */ + features->index = 1; + ccw->cmd_code = CCW_CMD_READ_FEAT; + ccw->flags = 0; + ccw->count = sizeof(*features); + ccw->cda = (__u32)(unsigned long)features; + ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_FEAT); + if (ret == 0) + rc |= (u64)le32_to_cpu(features->features) << 32; + out_free: kfree(features); kfree(ccw); @@ -715,8 +725,17 @@ static void virtio_ccw_finalize_features(struct virtio_device *vdev) vring_transport_features(vdev); features->index = 0; - features->features = cpu_to_le32(vdev->features); - /* Write the feature bits to the host. */ + features->features = cpu_to_le32((u32)vdev->features); + /* Write the first half of the feature bits to the host. */ + ccw->cmd_code = CCW_CMD_WRITE_FEAT; + ccw->flags = 0; + ccw->count = sizeof(*features); + ccw->cda = (__u32)(unsigned long)features; + ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_FEAT); + + features->index = 1; + features->features = cpu_to_le32(vdev->features >> 32); + /* Write the second half of the feature bits to the host. 
*/ ccw->cmd_code = CCW_CMD_WRITE_FEAT; ccw->flags = 0; ccw->count = sizeof(*features); diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c index 601efc3..cfd5d00 100644 --- a/drivers/virtio/virtio.c +++ b/drivers/virtio/virtio.c @@ -122,7 +122,7 @@ static int virtio_dev_probe(struct device *_d) int err, i; struct virtio_device *dev = dev_to_virtio(_d); struct virtio_driver *drv = drv_to_virtio(dev->dev.driver); - u32 device_features; + u64 device_features; /* We have a driver! */ add_status(dev, VIRTIO_CONFIG_S_DRIVER); @@ -134,15 +134,15 @@ static int virtio_dev_probe(struct device *_d) dev->features = 0; for (i = 0; i < drv->feature_table_size; i++) { unsigned int f = drv->feature_table[i]; - BUG_ON(f >= 32); - if (device_features & (1 << f)) - dev->features |= (1 << f); + BUG_ON(f >= 64); + if (device_features & (1ULL << f)) + dev->features |= (1ULL << f); } /* Transport features always preserved to pass to finalize_features. */ for (i = VIRTIO_TRANSPORT_F_START; i < VIRTIO_TRANSPORT_F_END; i++) - if (device_features & (1 << i)) - dev->features |= (1 << i); + if (device_features & (1ULL << i)) + dev->features |= (1ULL << i); dev->config->finalize_features(dev); diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index de47734..17f8367 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -142,14 +142,16 @@ struct virtio_mmio_vq_info { /* Configuration interface */ -static u32 vm_get_features(struct virtio_device *vdev) +static u64 vm_get_features(struct virtio_device *vdev) { struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); + u64 features; - /* TODO: Features > 32 bits */ writel(0, vm_dev->base + VIRTIO_MMIO_HOST_FEATURES_SEL); - - return readl(vm_dev->base + VIRTIO_MMIO_HOST_FEATURES); + features = readl(vm_dev->base + VIRTIO_MMIO_HOST_FEATURES); + writel(1, vm_dev->base + VIRTIO_MMIO_HOST_FEATURES_SEL); + features |= ((u64)readl(vm_dev->base + VIRTIO_MMIO_HOST_FEATURES) << 32); + return features; } static void vm_finalize_features(struct virtio_device *vdev) @@ -160,7 +162,9 @@ static void vm_finalize_features(struct virtio_device *vdev) vring_transport_features(vdev); writel(0, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL); - writel(vdev->features, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); + writel((u32)vdev->features, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); + writel(1, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL); + writel(vdev->features >> 32, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); } static void vm_get(struct virtio_device *vdev, unsigned offset, diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index bb8d0ab..3b272d9 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -105,12 +105,11 @@ static struct virtio_pci_device *to_vp_device(struct virtio_device *vdev) } /* virtio config->get_features() implementation */ -static u32 vp_get_features(struct virtio_device *vdev) +static u64 vp_get_features(struct virtio_device *vdev) { struct virtio_pci_device *vp_dev = to_vp_device(vdev); - /* When someone needs more than 32 feature bits, we'll need to - * steal a bit to indicate that the rest are somewhere else. */ + /* We only support 32 feature bits. 
*/ return ioread32(vp_dev->ioaddr + VIRTIO_PCI_HOST_FEATURES); } diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 94eb463..1cfb5ba 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -835,7 +835,7 @@ void vring_transport_features(struct virtio_device *vdev) break; default: /* We don't understand this bit. */ - vdev->features &= ~(1 << i); + vdev->features &= ~(1ULL << i); } } } diff --git a/include/linux/virtio.h b/include/linux/virtio.h index 4b8380a..a24b41f 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -93,7 +93,7 @@ struct virtio_device { const struct virtio_config_ops *config; const struct vringh_config_ops *vringh_config; struct list_head vqs; - u32 features; + u64 features; void *priv; }; diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h index d300c02..a0e16d8 100644 --- a/include/linux/virtio_config.h +++ b/include/linux/virtio_config.h @@ -66,7 +66,7 @@ struct virtio_config_ops { vq_callback_t *callbacks[], const char *names[]); void (*del_vqs)(struct virtio_device *); - u32 (*get_features)(struct virtio_device *vdev); + u64 (*get_features)(struct virtio_device *vdev); void (*finalize_features)(struct virtio_device *vdev); const char *(*bus_name)(struct virtio_device *vdev); int (*set_vq_affinity)(struct virtqueue *vq, int cpu); @@ -86,14 +86,14 @@ static inline bool virtio_has_feature(const struct virtio_device *vdev, { /* Did you forget to fix assumptions on max features? */ if (__builtin_constant_p(fbit)) - BUILD_BUG_ON(fbit >= 32); + BUILD_BUG_ON(fbit >= 64); else - BUG_ON(fbit >= 32); + BUG_ON(fbit >= 64); if (fbit < VIRTIO_TRANSPORT_F_START) virtio_check_driver_offered_feature(vdev, fbit); - return vdev->features & (1 << fbit); + return vdev->features & (1ULL << fbit); } static inline diff --git a/tools/virtio/linux/virtio.h b/tools/virtio/linux/virtio.h index 72bff70..8eb6421 100644 --- a/tools/virtio/linux/virtio.h +++ b/tools/virtio/linux/virtio.h @@ -10,7 +10,7 @@ struct virtio_device { void *dev; - u32 features; + u64 features; }; struct virtqueue { diff --git a/tools/virtio/linux/virtio_config.h b/tools/virtio/linux/virtio_config.h index 1f1636b..a254c2b 100644 --- a/tools/virtio/linux/virtio_config.h +++ b/tools/virtio/linux/virtio_config.h @@ -2,5 +2,5 @@ #define VIRTIO_TRANSPORT_F_END 32 #define virtio_has_feature(dev, feature) \ - ((dev)->features & (1 << feature)) + ((dev)->features & (1ULL << feature)) -- 1.7.9.5
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 03/11] virtio: endianness conversion helpers
Provide helper functions that convert from/to LE for virtio devices that are not operating in legacy mode. We check for the VERSION_1 feature bit to determine that.

Based on original patches by Rusty Russell and Thomas Huth.

Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com>
---
 drivers/virtio/virtio.c            |  4 ++++
 include/linux/virtio.h             | 40 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_config.h |  3 +++
 3 files changed, 47 insertions(+)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index cfd5d00..8f74cd6 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -144,6 +144,10 @@ static int virtio_dev_probe(struct device *_d)
 		if (device_features & (1ULL << i))
 			dev->features |= (1ULL << i);
 
+	/* Version 1.0 compliant devices set the VIRTIO_F_VERSION_1 bit */
+	if (device_features & (1ULL << VIRTIO_F_VERSION_1))
+		dev->features |= (1ULL << VIRTIO_F_VERSION_1);
+
 	dev->config->finalize_features(dev);
 
 	err = drv->probe(dev);
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index a24b41f..68cadd4 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -9,6 +9,7 @@
 #include <linux/mod_devicetable.h>
 #include <linux/gfp.h>
 #include <linux/vringh.h>
+#include <uapi/linux/virtio_config.h>
 
 /**
  * virtqueue - a queue to register buffers for sending or receiving.
@@ -102,6 +103,11 @@ static inline struct virtio_device *dev_to_virtio(struct device *_dev)
 	return container_of(_dev, struct virtio_device, dev);
 }
 
+static inline bool virtio_device_legacy(const struct virtio_device *dev)
+{
+	return !(dev->features & (1ULL << VIRTIO_F_VERSION_1));
+}
+
 int register_virtio_device(struct virtio_device *dev);
 void unregister_virtio_device(struct virtio_device *dev);
 
@@ -149,4 +155,38 @@ void unregister_virtio_driver(struct virtio_driver *drv);
 #define module_virtio_driver(__virtio_driver) \
 	module_driver(__virtio_driver, register_virtio_driver, \
 			unregister_virtio_driver)
+
+/*
+ * v1.0 specifies LE headers, legacy was native endian. Therefore, we must
+ * convert from/to LE if and only if vdev is not legacy.
+ */
+static inline u16 virtio_to_cpu_u16(const struct virtio_device *vdev, u16 v)
+{
+	return virtio_device_legacy(vdev) ? v : le16_to_cpu(v);
+}
+
+static inline u32 virtio_to_cpu_u32(const struct virtio_device *vdev, u32 v)
+{
+	return virtio_device_legacy(vdev) ? v : le32_to_cpu(v);
+}
+
+static inline u64 virtio_to_cpu_u64(const struct virtio_device *vdev, u64 v)
+{
+	return virtio_device_legacy(vdev) ? v : le64_to_cpu(v);
+}
+
+static inline u16 cpu_to_virtio_u16(const struct virtio_device *vdev, u16 v)
+{
+	return virtio_device_legacy(vdev) ? v : cpu_to_le16(v);
+}
+
+static inline u32 cpu_to_virtio_u32(const struct virtio_device *vdev, u32 v)
+{
+	return virtio_device_legacy(vdev) ? v : cpu_to_le32(v);
+}
+
+static inline u64 cpu_to_virtio_u64(const struct virtio_device *vdev, u64 v)
+{
+	return virtio_device_legacy(vdev) ? v : cpu_to_le64(v);
+}
 #endif /* _LINUX_VIRTIO_H */
diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
index 3ce768c..80e7381 100644
--- a/include/uapi/linux/virtio_config.h
+++ b/include/uapi/linux/virtio_config.h
@@ -54,4 +54,7 @@
 /* Can the device handle any descriptor layout? */
 #define VIRTIO_F_ANY_LAYOUT		27
 
+/* v1.0 compliant. */
+#define VIRTIO_F_VERSION_1		32
+
 #endif /* _UAPI_LINUX_VIRTIO_CONFIG_H */

--
1.7.9.5
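To make the intended use of these helpers concrete, here is a minimal driver-side sketch (not part of the patch; the function names and the shared 16-bit field are made up for illustration). The same code works for legacy and VERSION_1 devices, with the conversion collapsing to a no-op in the legacy case:

/* Hypothetical illustration only: store/load a 16-bit value in a structure
 * that the device also reads, e.g. a header field in a vring buffer. */
static void example_store_u16(struct virtio_device *vdev, __u16 *field, u16 val)
{
	/* native endian on a legacy device, little endian on a v1.0 device */
	*field = cpu_to_virtio_u16(vdev, val);
}

static u16 example_load_u16(struct virtio_device *vdev, const __u16 *field)
{
	return virtio_to_cpu_u16(vdev, *field);
}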
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 04/11] virtio_ring: implement endian reversal based on VERSION_1 feature.
From: Rusty Russell <rusty at rustcorp.com.au> [Cornelia Huck: we don't need the vq->vring.num -> vq->ring_mask change] Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> --- drivers/virtio/virtio_ring.c | 195 ++++++++++++++++++++++++++++++------------ 1 file changed, 138 insertions(+), 57 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 1cfb5ba..350c39b 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -145,42 +145,54 @@ static inline int vring_add_indirect(struct vring_virtqueue *vq, i = 0; for (n = 0; n < out_sgs; n++) { for (sg = sgs[n]; sg; sg = next(sg, &total_out)) { - desc[i].flags = VRING_DESC_F_NEXT; - desc[i].addr = sg_phys(sg); - desc[i].len = sg->length; - desc[i].next = i+1; + desc[i].flags = cpu_to_virtio_u16(vq->vq.vdev, + VRING_DESC_F_NEXT); + desc[i].addr = cpu_to_virtio_u64(vq->vq.vdev, + sg_phys(sg)); + desc[i].len = cpu_to_virtio_u32(vq->vq.vdev, + sg->length); + desc[i].next = cpu_to_virtio_u16(vq->vq.vdev, + i+1); i++; } } for (; n < (out_sgs + in_sgs); n++) { for (sg = sgs[n]; sg; sg = next(sg, &total_in)) { - desc[i].flags = VRING_DESC_F_NEXT|VRING_DESC_F_WRITE; - desc[i].addr = sg_phys(sg); - desc[i].len = sg->length; - desc[i].next = i+1; + desc[i].flags = cpu_to_virtio_u16(vq->vq.vdev, + VRING_DESC_F_NEXT| + VRING_DESC_F_WRITE); + desc[i].addr = cpu_to_virtio_u64(vq->vq.vdev, + sg_phys(sg)); + desc[i].len = cpu_to_virtio_u32(vq->vq.vdev, + sg->length); + desc[i].next = cpu_to_virtio_u16(vq->vq.vdev, i+1); i++; } } - BUG_ON(i != total_sg); /* Last one doesn't continue. */ - desc[i-1].flags &= ~VRING_DESC_F_NEXT; + desc[i-1].flags &= ~cpu_to_virtio_u16(vq->vq.vdev, VRING_DESC_F_NEXT); desc[i-1].next = 0; - /* We're about to use a buffer */ - vq->vq.num_free--; - /* Use a single buffer which doesn't continue */ head = vq->free_head; - vq->vring.desc[head].flags = VRING_DESC_F_INDIRECT; - vq->vring.desc[head].addr = virt_to_phys(desc); + vq->vring.desc[head].flags + cpu_to_virtio_u16(vq->vq.vdev, VRING_DESC_F_INDIRECT); + vq->vring.desc[head].addr + cpu_to_virtio_u64(vq->vq.vdev, virt_to_phys(desc)); /* kmemleak gives a false positive, as it's hidden by virt_to_phys */ kmemleak_ignore(desc); - vq->vring.desc[head].len = i * sizeof(struct vring_desc); + vq->vring.desc[head].len + cpu_to_virtio_u32(vq->vq.vdev, i * sizeof(struct vring_desc)); - /* Update free pointer */ + BUG_ON(i != total_sg); + + /* Update free pointer (we store this in native endian) */ vq->free_head = vq->vring.desc[head].next; + /* We've just used a buffer */ + vq->vq.num_free--; + return head; } @@ -199,6 +211,7 @@ static inline int virtqueue_add(struct virtqueue *_vq, struct scatterlist *sg; unsigned int i, n, avail, uninitialized_var(prev), total_sg; int head; + u16 nexti; START_USE(vq); @@ -253,26 +266,46 @@ static inline int virtqueue_add(struct virtqueue *_vq, vq->vq.num_free -= total_sg; head = i = vq->free_head; + for (n = 0; n < out_sgs; n++) { for (sg = sgs[n]; sg; sg = next(sg, &total_out)) { - vq->vring.desc[i].flags = VRING_DESC_F_NEXT; - vq->vring.desc[i].addr = sg_phys(sg); - vq->vring.desc[i].len = sg->length; + vq->vring.desc[i].flags + cpu_to_virtio_u16(vq->vq.vdev, + VRING_DESC_F_NEXT); + vq->vring.desc[i].addr + cpu_to_virtio_u64(vq->vq.vdev, sg_phys(sg)); + vq->vring.desc[i].len + cpu_to_virtio_u32(vq->vq.vdev, sg->length); + + /* We chained .next in native: fix endian. 
*/ + nexti = vq->vring.desc[i].next; + vq->vring.desc[i].next + = virtio_to_cpu_u16(vq->vq.vdev, nexti); prev = i; - i = vq->vring.desc[i].next; + i = nexti; } } for (; n < (out_sgs + in_sgs); n++) { for (sg = sgs[n]; sg; sg = next(sg, &total_in)) { - vq->vring.desc[i].flags = VRING_DESC_F_NEXT|VRING_DESC_F_WRITE; - vq->vring.desc[i].addr = sg_phys(sg); - vq->vring.desc[i].len = sg->length; + vq->vring.desc[i].flags + cpu_to_virtio_u16(vq->vq.vdev, + VRING_DESC_F_NEXT| + VRING_DESC_F_WRITE); + vq->vring.desc[i].addr + cpu_to_virtio_u64(vq->vq.vdev, sg_phys(sg)); + vq->vring.desc[i].len + cpu_to_virtio_u32(vq->vq.vdev, sg->length); + /* We chained .next in native: fix endian. */ + nexti = vq->vring.desc[i].next; + vq->vring.desc[i].next + virtio_to_cpu_u16(vq->vq.vdev, nexti); prev = i; - i = vq->vring.desc[i].next; + i = nexti; } } /* Last one doesn't continue. */ - vq->vring.desc[prev].flags &= ~VRING_DESC_F_NEXT; + vq->vring.desc[prev].flags &+ ~cpu_to_virtio_u16(vq->vq.vdev, VRING_DESC_F_NEXT); /* Update free pointer */ vq->free_head = i; @@ -283,15 +316,16 @@ add_head: /* Put entry in available array (but don't update avail->idx until they * do sync). */ - avail = (vq->vring.avail->idx & (vq->vring.num-1)); - vq->vring.avail->ring[avail] = head; + avail = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx); + vq->vring.avail->ring[avail & (vq->vring.num - 1)] + cpu_to_virtio_u16(vq->vq.vdev, head); - /* Descriptors and available array need to be set before we expose the - * new available array entries. */ + /* Descriptors and available array need to be set + * before we expose the new available array entries. */ virtio_wmb(vq->weak_barriers); - vq->vring.avail->idx++; - vq->num_added++; + vq->vring.avail->idx = cpu_to_virtio_u16(vq->vq.vdev, avail + 1); + vq->num_added++; /* This is very unlikely, but theoretically possible. Kick * just in case. */ if (unlikely(vq->num_added == (1 << 16) - 1)) @@ -408,8 +442,9 @@ bool virtqueue_kick_prepare(struct virtqueue *_vq) * event. */ virtio_mb(vq->weak_barriers); - old = vq->vring.avail->idx - vq->num_added; - new = vq->vring.avail->idx; + new = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx); + + old = new - vq->num_added; vq->num_added = 0; #ifdef DEBUG @@ -421,10 +456,17 @@ bool virtqueue_kick_prepare(struct virtqueue *_vq) #endif if (vq->event) { - needs_kick = vring_need_event(vring_avail_event(&vq->vring), - new, old); + u16 avail; + + avail = virtio_to_cpu_u16(vq->vq.vdev, + vring_avail_event(&vq->vring)); + + needs_kick = vring_need_event(avail, new, old); } else { - needs_kick = !(vq->vring.used->flags & VRING_USED_F_NO_NOTIFY); + u16 flags; + + flags = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->flags); + needs_kick = !(flags & VRING_USED_F_NO_NOTIFY); } END_USE(vq); return needs_kick; @@ -486,11 +528,20 @@ static void detach_buf(struct vring_virtqueue *vq, unsigned int head) i = head; /* Free the indirect table */ - if (vq->vring.desc[i].flags & VRING_DESC_F_INDIRECT) - kfree(phys_to_virt(vq->vring.desc[i].addr)); + if (vq->vring.desc[i].flags & + cpu_to_virtio_u16(vq->vq.vdev, VRING_DESC_F_INDIRECT)) { + kfree(phys_to_virt(virtio_to_cpu_u64(vq->vq.vdev, + vq->vring.desc[i].addr))); + } + + while (vq->vring.desc[i].flags & + cpu_to_virtio_u16(vq->vq.vdev, VRING_DESC_F_NEXT)) { + u16 next; - while (vq->vring.desc[i].flags & VRING_DESC_F_NEXT) { - i = vq->vring.desc[i].next; + /* Convert endian of next back to native. 
*/ + next = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.desc[i].next); + vq->vring.desc[i].next = next; + i = next; vq->vq.num_free++; } @@ -502,7 +553,8 @@ static void detach_buf(struct vring_virtqueue *vq, unsigned int head) static inline bool more_used(const struct vring_virtqueue *vq) { - return vq->last_used_idx != vq->vring.used->idx; + return vq->last_used_idx + != virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->idx); } /** @@ -527,6 +579,8 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) void *ret; unsigned int i; u16 last_used; + const int no_intr + cpu_to_virtio_u16(vq->vq.vdev, VRING_AVAIL_F_NO_INTERRUPT); START_USE(vq); @@ -545,8 +599,9 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) virtio_rmb(vq->weak_barriers); last_used = (vq->last_used_idx & (vq->vring.num - 1)); - i = vq->vring.used->ring[last_used].id; - *len = vq->vring.used->ring[last_used].len; + i = virtio_to_cpu_u32(vq->vq.vdev, vq->vring.used->ring[last_used].id); + *len = virtio_to_cpu_u32(vq->vq.vdev, + vq->vring.used->ring[last_used].len); if (unlikely(i >= vq->vring.num)) { BAD_RING(vq, "id %u out of range\n", i); @@ -561,10 +616,11 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) ret = vq->data[i]; detach_buf(vq, i); vq->last_used_idx++; + /* If we expect an interrupt for the next entry, tell host * by writing event index and flush out the write before * the read in the next get_buf call. */ - if (!(vq->vring.avail->flags & VRING_AVAIL_F_NO_INTERRUPT)) { + if (!(vq->vring.avail->flags & no_intr)) { vring_used_event(&vq->vring) = vq->last_used_idx; virtio_mb(vq->weak_barriers); } @@ -591,7 +647,8 @@ void virtqueue_disable_cb(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - vq->vring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; + vq->vring.avail->flags |+ cpu_to_virtio_u16(vq->vq.vdev, VRING_AVAIL_F_NO_INTERRUPT); } EXPORT_SYMBOL_GPL(virtqueue_disable_cb); @@ -619,8 +676,12 @@ unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq) /* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to * either clear the flags bit or point the event index at the next * entry. Always do both to keep code simple. */ - vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT; - vring_used_event(&vq->vring) = last_used_idx = vq->last_used_idx; + vq->vring.avail->flags &+ cpu_to_virtio_u16(vq->vq.vdev, ~VRING_AVAIL_F_NO_INTERRUPT); + last_used_idx = vq->last_used_idx; + vring_used_event(&vq->vring) = cpu_to_virtio_u16(vq->vq.vdev, + last_used_idx); + END_USE(vq); return last_used_idx; } @@ -640,7 +701,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx) struct vring_virtqueue *vq = to_vvq(_vq); virtio_mb(vq->weak_barriers); - return (u16)last_used_idx != vq->vring.used->idx; + + return (u16)last_used_idx !+ virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->idx); } EXPORT_SYMBOL_GPL(virtqueue_poll); @@ -678,7 +741,7 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb); bool virtqueue_enable_cb_delayed(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - u16 bufs; + u16 bufs, used_idx; START_USE(vq); @@ -687,12 +750,17 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq) /* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to * either clear the flags bit or point the event index at the next * entry. Always do both to keep code simple. 
*/ - vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT; + vq->vring.avail->flags &+ cpu_to_virtio_u16(vq->vq.vdev, ~VRING_AVAIL_F_NO_INTERRUPT); /* TODO: tune this threshold */ - bufs = (u16)(vq->vring.avail->idx - vq->last_used_idx) * 3 / 4; - vring_used_event(&vq->vring) = vq->last_used_idx + bufs; + bufs = (u16)(virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx) + - vq->last_used_idx) * 3 / 4; + vring_used_event(&vq->vring) + cpu_to_virtio_u16(vq->vq.vdev, vq->last_used_idx + bufs); virtio_mb(vq->weak_barriers); - if (unlikely((u16)(vq->vring.used->idx - vq->last_used_idx) > bufs)) { + used_idx = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->idx); + + if (unlikely((u16)(used_idx - vq->last_used_idx) > bufs)) { END_USE(vq); return false; } @@ -719,12 +787,19 @@ void *virtqueue_detach_unused_buf(struct virtqueue *_vq) START_USE(vq); for (i = 0; i < vq->vring.num; i++) { + u16 avail; + if (!vq->data[i]) continue; /* detach_buf clears data, so grab it now. */ buf = vq->data[i]; detach_buf(vq, i); - vq->vring.avail->idx--; + + /* AKA "vq->vring.avail->idx++" */ + avail = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx); + vq->vring.avail->idx = cpu_to_virtio_u16(vq->vq.vdev, + avail - 1); + END_USE(vq); return buf; } @@ -800,12 +875,18 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX); /* No callback? Tell other side not to bother us. */ - if (!callback) - vq->vring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; + if (!callback) { + u16 flag; + + flag = cpu_to_virtio_u16(vq->vq.vdev, + VRING_AVAIL_F_NO_INTERRUPT); + vq->vring.avail->flags |= flag; + } /* Put everything in free lists. */ vq->free_head = 0; for (i = 0; i < num-1; i++) { + /* This is for our use, so always our endian. */ vq->vring.desc[i].next = i+1; vq->data[i] = NULL; } -- 1.7.9.5
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 05/11] virtio_config: endian conversion for v1.0.
From: Rusty Russell <rusty at rustcorp.com.au>

Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty at rustcorp.com.au>
Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com>
---
 include/linux/virtio_config.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index a0e16d8..ca22e3a 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -222,12 +222,13 @@ static inline u16 virtio_cread16(struct virtio_device *vdev,
 {
 	u16 ret;
 	vdev->config->get(vdev, offset, &ret, sizeof(ret));
-	return ret;
+	return virtio_to_cpu_u16(vdev, ret);
 }
 
 static inline void virtio_cwrite16(struct virtio_device *vdev,
 				   unsigned int offset, u16 val)
 {
+	val = cpu_to_virtio_u16(vdev, val);
 	vdev->config->set(vdev, offset, &val, sizeof(val));
 }
 
@@ -236,12 +237,13 @@ static inline u32 virtio_cread32(struct virtio_device *vdev,
 {
 	u32 ret;
 	vdev->config->get(vdev, offset, &ret, sizeof(ret));
-	return ret;
+	return virtio_to_cpu_u32(vdev, ret);
 }
 
 static inline void virtio_cwrite32(struct virtio_device *vdev,
 				   unsigned int offset, u32 val)
 {
+	val = cpu_to_virtio_u32(vdev, val);
 	vdev->config->set(vdev, offset, &val, sizeof(val));
 }
 
@@ -250,12 +252,13 @@ static inline u64 virtio_cread64(struct virtio_device *vdev,
 {
 	u64 ret;
 	vdev->config->get(vdev, offset, &ret, sizeof(ret));
-	return ret;
+	return virtio_to_cpu_u64(vdev, ret);
 }
 
 static inline void virtio_cwrite64(struct virtio_device *vdev,
 				   unsigned int offset, u64 val)
 {
+	val = cpu_to_virtio_u64(vdev, val);
 	vdev->config->set(vdev, offset, &val, sizeof(val));
 }
 

--
1.7.9.5
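The effect for drivers is that the config space accessors now hand back host-native values no matter which mode the device is in. A small hypothetical snippet (the offset and the read-modify-write are made up, not taken from any driver):

/* Hypothetical: read a 16-bit config field at byte offset 8, bump it, and
 * write it back.  virtio_cread16()/virtio_cwrite16() now do the legacy vs.
 * v1.0 endianness handling internally. */
static void example_bump_config16(struct virtio_device *vdev)
{
	u16 val = virtio_cread16(vdev, 8);

	virtio_cwrite16(vdev, 8, val + 1);
}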
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 06/11] virtio: allow transports to get avail/used addresses
For virtio-1, we can theoretically have a more complex virtqueue layout with avail and used buffers not on a contiguous memory area with the descriptor table.

For now, it's fine for a transport driver to stay with the old layout: it does, however, need a way to access the locations of the avail/used rings so it can register them with the host.

Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com>
---
 drivers/virtio/virtio_ring.c | 16 ++++++++++++++++
 include/linux/virtio.h       |  3 +++
 2 files changed, 19 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 350c39b..dd0d4ec 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -961,4 +961,20 @@ void virtio_break_device(struct virtio_device *dev)
 }
 EXPORT_SYMBOL_GPL(virtio_break_device);
 
+void *virtqueue_get_avail(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	return vq->vring.avail;
+}
+EXPORT_SYMBOL_GPL(virtqueue_get_avail);
+
+void *virtqueue_get_used(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	return vq->vring.used;
+}
+EXPORT_SYMBOL_GPL(virtqueue_get_used);
+
 MODULE_LICENSE("GPL");
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 68cadd4..f10e6e7 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -76,6 +76,9 @@ unsigned int virtqueue_get_vring_size(struct virtqueue *vq);
 
 bool virtqueue_is_broken(struct virtqueue *vq);
 
+void *virtqueue_get_avail(struct virtqueue *vq);
+void *virtqueue_get_used(struct virtqueue *vq);
+
 /**
  * virtio_device - representation of a device using virtio
  * @index: unique position on the virtio bus

--
1.7.9.5
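A transport that keeps the contiguous legacy layout but has to report the ring component addresses individually (which is what the virtio-ccw SET_VQ patch later in this series does) would use the new accessors roughly as below; the registration structure is a made-up stand-in, not a real host interface:

/* Hypothetical registration block, loosely modelled on the revision 1
 * vq_info_block that virtio-ccw fills in later in this series. */
struct example_vq_regs {
	__u64 desc;
	__u64 avail;
	__u64 used;
	__u16 num;
};

static void example_register_vq(struct virtqueue *vq, void *queue_mem,
				u16 num, struct example_vq_regs *regs)
{
	/* The transport allocated queue_mem itself, so it already knows where
	 * the descriptor table lives; avail/used come from the new accessors. */
	regs->desc  = (__u64)queue_mem;
	regs->avail = (__u64)virtqueue_get_avail(vq);
	regs->used  = (__u64)virtqueue_get_used(vq);
	regs->num   = num;
}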
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 07/11] virtio_net: use v1.0 endian.
From: Rusty Russell <rusty at rustcorp.com.au> [Cornelia Huck: converted some missed fields] Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> --- drivers/net/virtio_net.c | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 59caa06..cd18946 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -353,13 +353,14 @@ err: } static struct sk_buff *receive_mergeable(struct net_device *dev, + struct virtnet_info *vi, struct receive_queue *rq, unsigned long ctx, unsigned int len) { void *buf = mergeable_ctx_to_buf_address(ctx); struct skb_vnet_hdr *hdr = buf; - int num_buf = hdr->mhdr.num_buffers; + u16 num_buf = virtio_to_cpu_u16(rq->vq->vdev, hdr->mhdr.num_buffers); struct page *page = virt_to_head_page(buf); int offset = buf - page_address(page); unsigned int truesize = max(len, mergeable_ctx_to_buf_truesize(ctx)); @@ -375,7 +376,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, ctx = (unsigned long)virtqueue_get_buf(rq->vq, &len); if (unlikely(!ctx)) { pr_debug("%s: rx error: %d buffers out of %d missing\n", - dev->name, num_buf, hdr->mhdr.num_buffers); + dev->name, num_buf, + virtio_to_cpu_u16(rq->vq->vdev, + hdr->mhdr.num_buffers)); dev->stats.rx_length_errors++; goto err_buf; } @@ -460,7 +463,7 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len) } if (vi->mergeable_rx_bufs) - skb = receive_mergeable(dev, rq, (unsigned long)buf, len); + skb = receive_mergeable(dev, vi, rq, (unsigned long)buf, len); else if (vi->big_packets) skb = receive_big(dev, rq, buf, len); else @@ -479,8 +482,8 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len) if (hdr->hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) { pr_debug("Needs csum!\n"); if (!skb_partial_csum_set(skb, - hdr->hdr.csum_start, - hdr->hdr.csum_offset)) + virtio_to_cpu_u16(vi->vdev, hdr->hdr.csum_start), + virtio_to_cpu_u16(vi->vdev, hdr->hdr.csum_offset))) goto frame_err; } else if (hdr->hdr.flags & VIRTIO_NET_HDR_F_DATA_VALID) { skb->ip_summed = CHECKSUM_UNNECESSARY; @@ -511,7 +514,8 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len) if (hdr->hdr.gso_type & VIRTIO_NET_HDR_GSO_ECN) skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN; - skb_shinfo(skb)->gso_size = hdr->hdr.gso_size; + skb_shinfo(skb)->gso_size = virtio_to_cpu_u16(vi->vdev, + hdr->hdr.gso_size); if (skb_shinfo(skb)->gso_size == 0) { net_warn_ratelimited("%s: zero gso size.\n", dev->name); goto frame_err; @@ -871,16 +875,19 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) if (skb->ip_summed == CHECKSUM_PARTIAL) { hdr->hdr.flags = VIRTIO_NET_HDR_F_NEEDS_CSUM; - hdr->hdr.csum_start = skb_checksum_start_offset(skb); - hdr->hdr.csum_offset = skb->csum_offset; + hdr->hdr.csum_start = cpu_to_virtio_u16(vi->vdev, + skb_checksum_start_offset(skb)); + hdr->hdr.csum_offset = cpu_to_virtio_u16(vi->vdev, + skb->csum_offset); } else { hdr->hdr.flags = 0; hdr->hdr.csum_offset = hdr->hdr.csum_start = 0; } if (skb_is_gso(skb)) { - hdr->hdr.hdr_len = skb_headlen(skb); - hdr->hdr.gso_size = skb_shinfo(skb)->gso_size; + hdr->hdr.hdr_len = cpu_to_virtio_u16(vi->vdev, skb_headlen(skb)); + hdr->hdr.gso_size = cpu_to_virtio_u16(vi->vdev, + skb_shinfo(skb)->gso_size); if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4; else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) 
@@ -1181,7 +1188,7 @@ static void virtnet_set_rx_mode(struct net_device *dev) sg_init_table(sg, 2); /* Store the unicast list and count in the front of the buffer */ - mac_data->entries = uc_count; + mac_data->entries = cpu_to_virtio_u32(vi->vdev, uc_count); i = 0; netdev_for_each_uc_addr(ha, dev) memcpy(&mac_data->macs[i++][0], ha->addr, ETH_ALEN); @@ -1192,7 +1199,7 @@ static void virtnet_set_rx_mode(struct net_device *dev) /* multicast list and count fill the end */ mac_data = (void *)&mac_data->macs[uc_count][0]; - mac_data->entries = mc_count; + mac_data->entries = cpu_to_virtio_u32(vi->vdev, mc_count); i = 0; netdev_for_each_mc_addr(ha, dev) memcpy(&mac_data->macs[i++][0], ha->addr, ETH_ALEN); -- 1.7.9.5
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 08/11] virtio_blk: use virtio v1.0 endian
Note that we care only about the fields still in use for virtio v1.0.

Reviewed-by: Thomas Huth <thuth at linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com>
---
 drivers/block/virtio_blk.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 0a58140..08a8012 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -119,6 +119,10 @@ static int __virtblk_add_req(struct virtqueue *vq,
 	sg_init_one(&status, &vbr->status, sizeof(vbr->status));
 	sgs[num_out + num_in++] = &status;
 
+	/* we only care about fields valid for virtio-1 */
+	vbr->out_hdr.type = cpu_to_virtio_u32(vq->vdev, vbr->out_hdr.type);
+	vbr->out_hdr.sector = cpu_to_virtio_u64(vq->vdev, vbr->out_hdr.sector);
+
 	return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
 }
 

--
1.7.9.5
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 09/11] KVM: s390: Set virtio-ccw transport revision
From: Thomas Huth <thuth at linux.vnet.ibm.com> With the new SET-VIRTIO-REVISION command of the virtio 1.0 standard, we can now negotiate the virtio-ccw revision after setting a channel online. Note that we don't negotiate version 1 yet. [Cornelia Huck: reworked revision loop a bit] Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com> Signed-off-by: Thomas Huth <thuth at linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> --- drivers/s390/kvm/virtio_ccw.c | 63 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 63 insertions(+) diff --git a/drivers/s390/kvm/virtio_ccw.c b/drivers/s390/kvm/virtio_ccw.c index 4173b59..cbe2ba8 100644 --- a/drivers/s390/kvm/virtio_ccw.c +++ b/drivers/s390/kvm/virtio_ccw.c @@ -55,6 +55,7 @@ struct virtio_ccw_device { struct ccw_device *cdev; __u32 curr_io; int err; + unsigned int revision; /* Transport revision */ wait_queue_head_t wait_q; spinlock_t lock; struct list_head virtqueues; @@ -86,6 +87,15 @@ struct virtio_thinint_area { u8 isc; } __packed; +struct virtio_rev_info { + __u16 revision; + __u16 length; + __u8 data[]; +}; + +/* the highest virtio-ccw revision we support */ +#define VIRTIO_CCW_REV_MAX 0 + struct virtio_ccw_vq_info { struct virtqueue *vq; int num; @@ -122,6 +132,7 @@ static struct airq_info *airq_areas[MAX_AIRQ_AREAS]; #define CCW_CMD_WRITE_STATUS 0x31 #define CCW_CMD_READ_VQ_CONF 0x32 #define CCW_CMD_SET_IND_ADAPTER 0x73 +#define CCW_CMD_SET_VIRTIO_REV 0x83 #define VIRTIO_CCW_DOING_SET_VQ 0x00010000 #define VIRTIO_CCW_DOING_RESET 0x00040000 @@ -134,6 +145,7 @@ static struct airq_info *airq_areas[MAX_AIRQ_AREAS]; #define VIRTIO_CCW_DOING_READ_VQ_CONF 0x02000000 #define VIRTIO_CCW_DOING_SET_CONF_IND 0x04000000 #define VIRTIO_CCW_DOING_SET_IND_ADAPTER 0x08000000 +#define VIRTIO_CCW_DOING_SET_VIRTIO_REV 0x10000000 #define VIRTIO_CCW_INTPARM_MASK 0xffff0000 static struct virtio_ccw_device *to_vc_device(struct virtio_device *vdev) @@ -934,6 +946,7 @@ static void virtio_ccw_int_handler(struct ccw_device *cdev, case VIRTIO_CCW_DOING_RESET: case VIRTIO_CCW_DOING_READ_VQ_CONF: case VIRTIO_CCW_DOING_SET_IND_ADAPTER: + case VIRTIO_CCW_DOING_SET_VIRTIO_REV: vcdev->curr_io &= ~activity; wake_up(&vcdev->wait_q); break; @@ -1053,6 +1066,51 @@ static int virtio_ccw_offline(struct ccw_device *cdev) return 0; } +static int virtio_ccw_set_transport_rev(struct virtio_ccw_device *vcdev) +{ + struct virtio_rev_info *rev; + struct ccw1 *ccw; + int ret; + + ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL); + if (!ccw) + return -ENOMEM; + rev = kzalloc(sizeof(*rev), GFP_DMA | GFP_KERNEL); + if (!rev) { + kfree(ccw); + return -ENOMEM; + } + + /* Set transport revision */ + ccw->cmd_code = CCW_CMD_SET_VIRTIO_REV; + ccw->flags = 0; + ccw->count = sizeof(*rev); + ccw->cda = (__u32)(unsigned long)rev; + + vcdev->revision = VIRTIO_CCW_REV_MAX; + do { + rev->revision = vcdev->revision; + /* none of our supported revisions carry payload */ + rev->length = 0; + ret = ccw_io_helper(vcdev, ccw, + VIRTIO_CCW_DOING_SET_VIRTIO_REV); + if (ret == -EOPNOTSUPP) { + if (vcdev->revision == 0) + /* + * The host device does not support setting + * the revision: let's operate it in legacy + * mode. 
+ */ + ret = 0; + else + vcdev->revision--; + } + } while (ret == -EOPNOTSUPP); + + kfree(ccw); + kfree(rev); + return ret; +} static int virtio_ccw_online(struct ccw_device *cdev) { @@ -1093,6 +1151,11 @@ static int virtio_ccw_online(struct ccw_device *cdev) spin_unlock_irqrestore(get_ccwdev_lock(cdev), flags); vcdev->vdev.id.vendor = cdev->id.cu_type; vcdev->vdev.id.device = cdev->id.cu_model; + + ret = virtio_ccw_set_transport_rev(vcdev); + if (ret) + goto out_free; + ret = register_virtio_device(&vcdev->vdev); if (ret) { dev_warn(&cdev->dev, "Failed to register virtio device: %d\n", -- 1.7.9.5
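The revision handshake is easier to follow when distilled out of the CCW plumbing. A simplified sketch of the loop above (ccw_set_rev() is a made-up stand-in for issuing CCW_CMD_SET_VIRTIO_REV; only the -EOPNOTSUPP handling is shown):

/* Hypothetical helper standing in for the SET-VIRTIO-REVISION channel
 * command; returns 0 on success or -EOPNOTSUPP if the host rejects the
 * requested revision. */
int ccw_set_rev(unsigned int revision);

/* Start at the highest revision we support and step down until the host
 * accepts one.  If even revision 0 is rejected, the host predates the
 * command and the device is simply operated in legacy mode. */
static int example_negotiate_revision(unsigned int *revision)
{
	int ret;

	*revision = VIRTIO_CCW_REV_MAX;
	do {
		ret = ccw_set_rev(*revision);
		if (ret == -EOPNOTSUPP) {
			if (*revision == 0)
				ret = 0;	/* legacy host: not an error */
			else
				(*revision)--;
		}
	} while (ret == -EOPNOTSUPP);

	return ret;
}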
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 10/11] KVM: s390: virtio-ccw revision 1 SET_VQ
The CCW_CMD_SET_VQ command has a different format for revision 1+ devices, allowing to specify a more complex virtqueue layout. For now, we stay however with the old layout and simply use the new command format for virtio-1 devices. Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> --- drivers/s390/kvm/virtio_ccw.c | 54 ++++++++++++++++++++++++++++++++--------- 1 file changed, 42 insertions(+), 12 deletions(-) diff --git a/drivers/s390/kvm/virtio_ccw.c b/drivers/s390/kvm/virtio_ccw.c index cbe2ba8..f97d3fb 100644 --- a/drivers/s390/kvm/virtio_ccw.c +++ b/drivers/s390/kvm/virtio_ccw.c @@ -68,13 +68,22 @@ struct virtio_ccw_device { void *airq_info; }; -struct vq_info_block { +struct vq_info_block_legacy { __u64 queue; __u32 align; __u16 index; __u16 num; } __packed; +struct vq_info_block { + __u64 desc; + __u32 res0; + __u16 index; + __u16 num; + __u64 avail; + __u64 used; +} __packed; + struct virtio_feature_desc { __u32 features; __u8 index; @@ -100,7 +109,10 @@ struct virtio_ccw_vq_info { struct virtqueue *vq; int num; void *queue; - struct vq_info_block *info_block; + union { + struct vq_info_block s; + struct vq_info_block_legacy l; + } *info_block; int bit_nr; struct list_head node; long cookie; @@ -411,13 +423,22 @@ static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw) spin_unlock_irqrestore(&vcdev->lock, flags); /* Release from host. */ - info->info_block->queue = 0; - info->info_block->align = 0; - info->info_block->index = index; - info->info_block->num = 0; + if (vcdev->revision == 0) { + info->info_block->l.queue = 0; + info->info_block->l.align = 0; + info->info_block->l.index = index; + info->info_block->l.num = 0; + ccw->count = sizeof(info->info_block->l); + } else { + info->info_block->s.desc = 0; + info->info_block->s.index = index; + info->info_block->s.num = 0; + info->info_block->s.avail = 0; + info->info_block->s.used = 0; + ccw->count = sizeof(info->info_block->s); + } ccw->cmd_code = CCW_CMD_SET_VQ; ccw->flags = 0; - ccw->count = sizeof(*info->info_block); ccw->cda = (__u32)(unsigned long)(info->info_block); ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_VQ | index); @@ -500,13 +521,22 @@ static struct virtqueue *virtio_ccw_setup_vq(struct virtio_device *vdev, } /* Register it with the host. */ - info->info_block->queue = (__u64)info->queue; - info->info_block->align = KVM_VIRTIO_CCW_RING_ALIGN; - info->info_block->index = i; - info->info_block->num = info->num; + if (vcdev->revision == 0) { + info->info_block->l.queue = (__u64)info->queue; + info->info_block->l.align = KVM_VIRTIO_CCW_RING_ALIGN; + info->info_block->l.index = i; + info->info_block->l.num = info->num; + ccw->count = sizeof(info->info_block->l); + } else { + info->info_block->s.desc = (__u64)info->queue; + info->info_block->s.index = i; + info->info_block->s.num = info->num; + info->info_block->s.avail = (__u64)virtqueue_get_avail(vq); + info->info_block->s.used = (__u64)virtqueue_get_used(vq); + ccw->count = sizeof(info->info_block->s); + } ccw->cmd_code = CCW_CMD_SET_VQ; ccw->flags = 0; - ccw->count = sizeof(*info->info_block); ccw->cda = (__u32)(unsigned long)(info->info_block); err = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_VQ | i); if (err) { -- 1.7.9.5
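As a side note, for the plain split-ring layout the series keeps using, the three addresses carried by the revision 1 format are fully determined by the queue base, size and alignment. A minimal sketch with the standard helper from <uapi/linux/virtio_ring.h> (not part of the patch):

	struct vring vr;

	vring_init(&vr, info->num, info->queue, KVM_VIRTIO_CCW_RING_ALIGN);
	/*
	 * vr.desc  == info->queue
	 * vr.avail == info->queue + info->num * sizeof(struct vring_desc)
	 * vr.used  == end of the avail ring, rounded up to the ring alignment
	 */

virtqueue_get_avail() and virtqueue_get_used(), introduced earlier in this series, report the corresponding addresses, which is what ends up in the s.avail and s.used fields above.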
Cornelia Huck
2014-Oct-07 14:39 UTC
[PATCH RFC 11/11] KVM: s390: enable virtio-ccw revision 1
Now that virtio-ccw has everything needed to support virtio 1.0 in place, try to enable it if the host supports it. Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> --- drivers/s390/kvm/virtio_ccw.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/s390/kvm/virtio_ccw.c b/drivers/s390/kvm/virtio_ccw.c index f97d3fb..a2e0c33 100644 --- a/drivers/s390/kvm/virtio_ccw.c +++ b/drivers/s390/kvm/virtio_ccw.c @@ -103,7 +103,7 @@ struct virtio_rev_info { }; /* the highest virtio-ccw revision we support */ -#define VIRTIO_CCW_REV_MAX 0 +#define VIRTIO_CCW_REV_MAX 1 struct virtio_ccw_vq_info { struct virtqueue *vq; -- 1.7.9.5
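For completeness, a driver-side way to observe the outcome, using the helper added earlier in this series (illustrative only, not part of this patch):

	if (virtio_device_legacy(vdev))
		dev_info(&vdev->dev, "operating in legacy (pre-1.0) mode\n");
	else
		dev_info(&vdev->dev, "operating as a virtio 1.0 device\n");

A rev-1-capable host leaves vcdev->revision at 1, and once it also offers VIRTIO_F_VERSION_1 the endian helpers from patch 03 switch to little-endian accesses.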
Cornelia Huck <cornelia.huck at de.ibm.com> writes:

> Note that we care only about the fields still in use for virtio v1.0.
>
> Reviewed-by: Thomas Huth <thuth at linux.vnet.ibm.com>
> Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com>
> Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com>

Hi Cornelia,

These patches all look good; I'm a bit nervous about our testing missing some conversion, so we'll need qemu patches for PCI so we can test on other platforms too.

Thanks,
Rusty.
Michael S. Tsirkin
2014-Oct-22 09:04 UTC
[Qemu-devel] [PATCH RFC 03/11] virtio: endianess conversion helpers
On Tue, Oct 07, 2014 at 04:39:44PM +0200, Cornelia Huck wrote:> Provide helper functions that convert from/to LE for virtio devices that > are not operating in legacy mode. We check for the VERSION_1 feature bit > to determine that. > > Based on original patches by Rusty Russell and Thomas Huth. > > Reviewed-by: David Hildenbrand <dahi at linux.vnet.ibm.com> > Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com>I'm worried that this might miss some conversion. Let's add new typedefs __virtio16/__virtio32/__virtio64 instead. This way we can use static checkers to catch bugs. This is what my patch does, let me try to split it up so parts are reusable for you. Also if we do this, then virtio32_to_cpu is a better API since it's closer to the type name.> --- > drivers/virtio/virtio.c | 4 ++++ > include/linux/virtio.h | 40 ++++++++++++++++++++++++++++++++++++ > include/uapi/linux/virtio_config.h | 3 +++ > 3 files changed, 47 insertions(+) > > diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c > index cfd5d00..8f74cd6 100644 > --- a/drivers/virtio/virtio.c > +++ b/drivers/virtio/virtio.c > @@ -144,6 +144,10 @@ static int virtio_dev_probe(struct device *_d) > if (device_features & (1ULL << i)) > dev->features |= (1ULL << i); > > + /* Version 1.0 compliant devices set the VIRTIO_F_VERSION_1 bit */ > + if (device_features & (1ULL << VIRTIO_F_VERSION_1)) > + dev->features |= (1ULL << VIRTIO_F_VERSION_1); > + > dev->config->finalize_features(dev); > > err = drv->probe(dev); > diff --git a/include/linux/virtio.h b/include/linux/virtio.h > index a24b41f..68cadd4 100644 > --- a/include/linux/virtio.h > +++ b/include/linux/virtio.h > @@ -9,6 +9,7 @@ > #include <linux/mod_devicetable.h> > #include <linux/gfp.h> > #include <linux/vringh.h> > +#include <uapi/linux/virtio_config.h> > > /** > * virtqueue - a queue to register buffers for sending or receiving. > @@ -102,6 +103,11 @@ static inline struct virtio_device *dev_to_virtio(struct device *_dev) > return container_of(_dev, struct virtio_device, dev); > } > > +static inline bool virtio_device_legacy(const struct virtio_device *dev) > +{ > + return !(dev->features & (1ULL << VIRTIO_F_VERSION_1)); > +} > + > int register_virtio_device(struct virtio_device *dev); > void unregister_virtio_device(struct virtio_device *dev); > > @@ -149,4 +155,38 @@ void unregister_virtio_driver(struct virtio_driver *drv); > #define module_virtio_driver(__virtio_driver) \ > module_driver(__virtio_driver, register_virtio_driver, \ > unregister_virtio_driver) > + > +/* > + * v1.0 specifies LE headers, legacy was native endian. Therefore, we must > + * convert from/to LE if and only if vdev is not legacy. > + */ > +static inline u16 virtio_to_cpu_u16(const struct virtio_device *vdev, u16 v) > +{ > + return virtio_device_legacy(vdev) ? v : le16_to_cpu(v); > +} > + > +static inline u32 virtio_to_cpu_u32(const struct virtio_device *vdev, u32 v) > +{ > + return virtio_device_legacy(vdev) ? v : le32_to_cpu(v); > +} > + > +static inline u64 virtio_to_cpu_u64(const struct virtio_device *vdev, u64 v) > +{ > + return virtio_device_legacy(vdev) ? v : le64_to_cpu(v); > +} > + > +static inline u16 cpu_to_virtio_u16(const struct virtio_device *vdev, u16 v) > +{ > + return virtio_device_legacy(vdev) ? v : cpu_to_le16(v); > +} > + > +static inline u32 cpu_to_virtio_u32(const struct virtio_device *vdev, u32 v) > +{ > + return virtio_device_legacy(vdev) ? 
v : cpu_to_le32(v); > +} > + > +static inline u64 cpu_to_virtio_u64(const struct virtio_device *vdev, u64 v) > +{ > + return virtio_device_legacy(vdev) ? v : cpu_to_le64(v); > +} > #endif /* _LINUX_VIRTIO_H */Would be nicer to allow callers to pass in the legacy flag I think? This way they can keep it on stack to avoid re-reading features all the time ...> diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h > index 3ce768c..80e7381 100644 > --- a/include/uapi/linux/virtio_config.h > +++ b/include/uapi/linux/virtio_config.h > @@ -54,4 +54,7 @@ > /* Can the device handle any descriptor layout? */ > #define VIRTIO_F_ANY_LAYOUT 27 > > +/* v1.0 compliant. */ > +#define VIRTIO_F_VERSION_1 32 > + > #endif /* _UAPI_LINUX_VIRTIO_CONFIG_H */ > -- > 1.7.9.5 >
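A minimal sketch of the typedef idea suggested above, reusing virtio_device_legacy() from the patch; the type and helper names here are illustrative, not taken from this series:

	typedef __u16 __bitwise __virtio16;

	static inline u16 virtio16_to_cpu(const struct virtio_device *vdev,
					  __virtio16 val)
	{
		return virtio_device_legacy(vdev) ?
			(__force u16)val : le16_to_cpu((__force __le16)val);
	}

	static inline __virtio16 cpu_to_virtio16(const struct virtio_device *vdev,
						 u16 val)
	{
		return virtio_device_legacy(vdev) ?
			(__force __virtio16)val :
			(__force __virtio16)cpu_to_le16(val);
	}

With the __bitwise annotation, sparse flags any assignment between a plain u16 and a __virtio16 that does not go through one of the helpers, which is exactly the kind of missed conversion the concern above is about.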
Michael S. Tsirkin
2014-Oct-22 14:02 UTC
[PATCH RFC 04/11] virtio_ring: implement endian reversal based on VERSION_1 feature.
On Tue, Oct 07, 2014 at 04:39:45PM +0200, Cornelia Huck wrote:> From: Rusty Russell <rusty at rustcorp.com.au> > > [Cornelia Huck: we don't need the vq->vring.num -> vq->ring_mask change] > Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> > Signed-off-by: Cornelia Huck <cornelia.huck at de.ibm.com> > --- > drivers/virtio/virtio_ring.c | 195 ++++++++++++++++++++++++++++++------------ > 1 file changed, 138 insertions(+), 57 deletions(-) > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c > index 1cfb5ba..350c39b 100644 > --- a/drivers/virtio/virtio_ring.c > +++ b/drivers/virtio/virtio_ring.c > @@ -145,42 +145,54 @@ static inline int vring_add_indirect(struct vring_virtqueue *vq, > i = 0; > for (n = 0; n < out_sgs; n++) { > for (sg = sgs[n]; sg; sg = next(sg, &total_out)) { > - desc[i].flags = VRING_DESC_F_NEXT; > - desc[i].addr = sg_phys(sg); > - desc[i].len = sg->length; > - desc[i].next = i+1; > + desc[i].flags = cpu_to_virtio16(vq->vq.vdev, > + VRING_DESC_F_NEXT); > + desc[i].addr = cpu_to_virtio64(vq->vq.vdev, > + sg_phys(sg)); > + desc[i].len = cpu_to_virtio32(vq->vq.vdev, > + sg->length); > + desc[i].next = cpu_to_virtio16(vq->vq.vdev, > + i+1); > i++; > } > } > for (; n < (out_sgs + in_sgs); n++) { > for (sg = sgs[n]; sg; sg = next(sg, &total_in)) { > - desc[i].flags = VRING_DESC_F_NEXT|VRING_DESC_F_WRITE; > - desc[i].addr = sg_phys(sg); > - desc[i].len = sg->length; > - desc[i].next = i+1; > + desc[i].flags = cpu_to_virtio16(vq->vq.vdev, > + VRING_DESC_F_NEXT| > + VRING_DESC_F_WRITE); > + desc[i].addr = cpu_to_virtio64(vq->vq.vdev, > + sg_phys(sg)); > + desc[i].len = cpu_to_virtio32(vq->vq.vdev, > + sg->length); > + desc[i].next = cpu_to_virtio16(vq->vq.vdev, i+1); > i++; > } > } > - BUG_ON(i != total_sg); > > /* Last one doesn't continue. */ > - desc[i-1].flags &= ~VRING_DESC_F_NEXT; > + desc[i-1].flags &= ~cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT); > desc[i-1].next = 0; > > - /* We're about to use a buffer */ > - vq->vq.num_free--; > - > /* Use a single buffer which doesn't continue */ > head = vq->free_head; > - vq->vring.desc[head].flags = VRING_DESC_F_INDIRECT; > - vq->vring.desc[head].addr = virt_to_phys(desc); > + vq->vring.desc[head].flags > + cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT); > + vq->vring.desc[head].addr > + cpu_to_virtio64(vq->vq.vdev, virt_to_phys(desc)); > /* kmemleak gives a false positive, as it's hidden by virt_to_phys */ > kmemleak_ignore(desc); > - vq->vring.desc[head].len = i * sizeof(struct vring_desc); > + vq->vring.desc[head].len > + cpu_to_virtio32(vq->vq.vdev, i * sizeof(struct vring_desc)); > > - /* Update free pointer */ > + BUG_ON(i != total_sg); > +Why move the BUG_ON here? 
I think I'll move it back ...> + /* Update free pointer (we store this in native endian) */ > vq->free_head = vq->vring.desc[head].next; > > + /* We've just used a buffer */ > + vq->vq.num_free--; > + > return head; > } > > @@ -199,6 +211,7 @@ static inline int virtqueue_add(struct virtqueue *_vq, > struct scatterlist *sg; > unsigned int i, n, avail, uninitialized_var(prev), total_sg; > int head; > + u16 nexti; > > START_USE(vq); > > @@ -253,26 +266,46 @@ static inline int virtqueue_add(struct virtqueue *_vq, > vq->vq.num_free -= total_sg; > > head = i = vq->free_head; > + > for (n = 0; n < out_sgs; n++) { > for (sg = sgs[n]; sg; sg = next(sg, &total_out)) { > - vq->vring.desc[i].flags = VRING_DESC_F_NEXT; > - vq->vring.desc[i].addr = sg_phys(sg); > - vq->vring.desc[i].len = sg->length; > + vq->vring.desc[i].flags > + cpu_to_virtio16(vq->vq.vdev, > + VRING_DESC_F_NEXT); > + vq->vring.desc[i].addr > + cpu_to_virtio64(vq->vq.vdev, sg_phys(sg)); > + vq->vring.desc[i].len > + cpu_to_virtio32(vq->vq.vdev, sg->length); > + > + /* We chained .next in native: fix endian. */ > + nexti = vq->vring.desc[i].next; > + vq->vring.desc[i].next > + = virtio_to_cpu_u16(vq->vq.vdev, nexti); > prev = i; > - i = vq->vring.desc[i].next; > + i = nexti; > } > } > for (; n < (out_sgs + in_sgs); n++) { > for (sg = sgs[n]; sg; sg = next(sg, &total_in)) { > - vq->vring.desc[i].flags = VRING_DESC_F_NEXT|VRING_DESC_F_WRITE; > - vq->vring.desc[i].addr = sg_phys(sg); > - vq->vring.desc[i].len = sg->length; > + vq->vring.desc[i].flags > + cpu_to_virtio16(vq->vq.vdev, > + VRING_DESC_F_NEXT| > + VRING_DESC_F_WRITE); > + vq->vring.desc[i].addr > + cpu_to_virtio64(vq->vq.vdev, sg_phys(sg)); > + vq->vring.desc[i].len > + cpu_to_virtio32(vq->vq.vdev, sg->length); > + /* We chained .next in native: fix endian. */ > + nexti = vq->vring.desc[i].next; > + vq->vring.desc[i].next > + virtio_to_cpu_u16(vq->vq.vdev, nexti); > prev = i; > - i = vq->vring.desc[i].next; > + i = nexti; > } > } > /* Last one doesn't continue. */ > - vq->vring.desc[prev].flags &= ~VRING_DESC_F_NEXT; > + vq->vring.desc[prev].flags &> + ~cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT); > > /* Update free pointer */ > vq->free_head = i; > @@ -283,15 +316,16 @@ add_head: > > /* Put entry in available array (but don't update avail->idx until they > * do sync). */ > - avail = (vq->vring.avail->idx & (vq->vring.num-1)); > - vq->vring.avail->ring[avail] = head; > + avail = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx); > + vq->vring.avail->ring[avail & (vq->vring.num - 1)] > + cpu_to_virtio16(vq->vq.vdev, head); > > - /* Descriptors and available array need to be set before we expose the > - * new available array entries. */ > + /* Descriptors and available array need to be set > + * before we expose the new available array entries. */ > virtio_wmb(vq->weak_barriers); > - vq->vring.avail->idx++; > - vq->num_added++; > + vq->vring.avail->idx = cpu_to_virtio16(vq->vq.vdev, avail + 1); > > + vq->num_added++; > /* This is very unlikely, but theoretically possible. Kick > * just in case. */ > if (unlikely(vq->num_added == (1 << 16) - 1)) > @@ -408,8 +442,9 @@ bool virtqueue_kick_prepare(struct virtqueue *_vq) > * event. 
*/ > virtio_mb(vq->weak_barriers); > > - old = vq->vring.avail->idx - vq->num_added; > - new = vq->vring.avail->idx; > + new = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx); > + > + old = new - vq->num_added; > vq->num_added = 0; > > #ifdef DEBUG > @@ -421,10 +456,17 @@ bool virtqueue_kick_prepare(struct virtqueue *_vq) > #endif > > if (vq->event) { > - needs_kick = vring_need_event(vring_avail_event(&vq->vring), > - new, old); > + u16 avail; > + > + avail = virtio_to_cpu_u16(vq->vq.vdev, > + vring_avail_event(&vq->vring)); > + > + needs_kick = vring_need_event(avail, new, old); > } else { > - needs_kick = !(vq->vring.used->flags & VRING_USED_F_NO_NOTIFY); > + u16 flags; > + > + flags = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->flags); > + needs_kick = !(flags & VRING_USED_F_NO_NOTIFY); > } > END_USE(vq); > return needs_kick; > @@ -486,11 +528,20 @@ static void detach_buf(struct vring_virtqueue *vq, unsigned int head) > i = head; > > /* Free the indirect table */ > - if (vq->vring.desc[i].flags & VRING_DESC_F_INDIRECT) > - kfree(phys_to_virt(vq->vring.desc[i].addr)); > + if (vq->vring.desc[i].flags & > + cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)) { > + kfree(phys_to_virt(virtio_to_cpu_u64(vq->vq.vdev, > + vq->vring.desc[i].addr))); > + } > + > + while (vq->vring.desc[i].flags & > + cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT)) { > + u16 next; > > - while (vq->vring.desc[i].flags & VRING_DESC_F_NEXT) { > - i = vq->vring.desc[i].next; > + /* Convert endian of next back to native. */ > + next = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.desc[i].next); > + vq->vring.desc[i].next = next; > + i = next; > vq->vq.num_free++; > } > > @@ -502,7 +553,8 @@ static void detach_buf(struct vring_virtqueue *vq, unsigned int head) > > static inline bool more_used(const struct vring_virtqueue *vq) > { > - return vq->last_used_idx != vq->vring.used->idx; > + return vq->last_used_idx > + != virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->idx); > } > > /** > @@ -527,6 +579,8 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) > void *ret; > unsigned int i; > u16 last_used; > + const int no_intr > + cpu_to_virtio16(vq->vq.vdev, VRING_AVAIL_F_NO_INTERRUPT); > > START_USE(vq); > > @@ -545,8 +599,9 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) > virtio_rmb(vq->weak_barriers); > > last_used = (vq->last_used_idx & (vq->vring.num - 1)); > - i = vq->vring.used->ring[last_used].id; > - *len = vq->vring.used->ring[last_used].len; > + i = virtio_to_cpu_u32(vq->vq.vdev, vq->vring.used->ring[last_used].id); > + *len = virtio_to_cpu_u32(vq->vq.vdev, > + vq->vring.used->ring[last_used].len); > > if (unlikely(i >= vq->vring.num)) { > BAD_RING(vq, "id %u out of range\n", i); > @@ -561,10 +616,11 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) > ret = vq->data[i]; > detach_buf(vq, i); > vq->last_used_idx++; > + > /* If we expect an interrupt for the next entry, tell host > * by writing event index and flush out the write before > * the read in the next get_buf call. 
*/ > - if (!(vq->vring.avail->flags & VRING_AVAIL_F_NO_INTERRUPT)) { > + if (!(vq->vring.avail->flags & no_intr)) { > vring_used_event(&vq->vring) = vq->last_used_idx; > virtio_mb(vq->weak_barriers); > } > @@ -591,7 +647,8 @@ void virtqueue_disable_cb(struct virtqueue *_vq) > { > struct vring_virtqueue *vq = to_vvq(_vq); > > - vq->vring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; > + vq->vring.avail->flags |> + cpu_to_virtio16(vq->vq.vdev, VRING_AVAIL_F_NO_INTERRUPT); > } > EXPORT_SYMBOL_GPL(virtqueue_disable_cb); > > @@ -619,8 +676,12 @@ unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq) > /* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to > * either clear the flags bit or point the event index at the next > * entry. Always do both to keep code simple. */ > - vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT; > - vring_used_event(&vq->vring) = last_used_idx = vq->last_used_idx; > + vq->vring.avail->flags &> + cpu_to_virtio16(vq->vq.vdev, ~VRING_AVAIL_F_NO_INTERRUPT); > + last_used_idx = vq->last_used_idx; > + vring_used_event(&vq->vring) = cpu_to_virtio16(vq->vq.vdev, > + last_used_idx); > + > END_USE(vq); > return last_used_idx; > } > @@ -640,7 +701,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx) > struct vring_virtqueue *vq = to_vvq(_vq); > > virtio_mb(vq->weak_barriers); > - return (u16)last_used_idx != vq->vring.used->idx; > + > + return (u16)last_used_idx !> + virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->idx); > } > EXPORT_SYMBOL_GPL(virtqueue_poll); > > @@ -678,7 +741,7 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb); > bool virtqueue_enable_cb_delayed(struct virtqueue *_vq) > { > struct vring_virtqueue *vq = to_vvq(_vq); > - u16 bufs; > + u16 bufs, used_idx; > > START_USE(vq); > > @@ -687,12 +750,17 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq) > /* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to > * either clear the flags bit or point the event index at the next > * entry. Always do both to keep code simple. */ > - vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT; > + vq->vring.avail->flags &> + cpu_to_virtio16(vq->vq.vdev, ~VRING_AVAIL_F_NO_INTERRUPT); > /* TODO: tune this threshold */ > - bufs = (u16)(vq->vring.avail->idx - vq->last_used_idx) * 3 / 4; > - vring_used_event(&vq->vring) = vq->last_used_idx + bufs; > + bufs = (u16)(virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx) > + - vq->last_used_idx) * 3 / 4; > + vring_used_event(&vq->vring) > + cpu_to_virtio16(vq->vq.vdev, vq->last_used_idx + bufs); > virtio_mb(vq->weak_barriers); > - if (unlikely((u16)(vq->vring.used->idx - vq->last_used_idx) > bufs)) { > + used_idx = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.used->idx); > + > + if (unlikely((u16)(used_idx - vq->last_used_idx) > bufs)) { > END_USE(vq); > return false; > } > @@ -719,12 +787,19 @@ void *virtqueue_detach_unused_buf(struct virtqueue *_vq) > START_USE(vq); > > for (i = 0; i < vq->vring.num; i++) { > + u16 avail; > + > if (!vq->data[i]) > continue; > /* detach_buf clears data, so grab it now. */ > buf = vq->data[i]; > detach_buf(vq, i); > - vq->vring.avail->idx--; > + > + /* AKA "vq->vring.avail->idx++" */ > + avail = virtio_to_cpu_u16(vq->vq.vdev, vq->vring.avail->idx); > + vq->vring.avail->idx = cpu_to_virtio16(vq->vq.vdev, > + avail - 1); > + > END_USE(vq); > return buf; > } > @@ -800,12 +875,18 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, > vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX); > > /* No callback? Tell other side not to bother us. 
*/ > - if (!callback) > - vq->vring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT; > + if (!callback) { > + u16 flag; > + > + flag = cpu_to_virtio16(vq->vq.vdev, > + VRING_AVAIL_F_NO_INTERRUPT); > + vq->vring.avail->flags |= flag; > + } > > /* Put everything in free lists. */ > vq->free_head = 0; > for (i = 0; i < num-1; i++) { > + /* This is for our use, so always our endian. */ > vq->vring.desc[i].next = i+1; > vq->data[i] = NULL; > } > -- > 1.7.9.5 > > _______________________________________________ > Virtualization mailing list > Virtualization at lists.linux-foundation.org > https://lists.linuxfoundation.org/mailman/listinfo/virtualization
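Condensing the pattern the patch applies throughout (an illustrative rewrite using the helper names from the hunks above, not code from the series): descriptor fields live in guest memory in virtio endianness, while the free-list chaining in .next stays native until an entry is exposed.

	static u16 expose_one_desc(struct vring_virtqueue *vq, unsigned int i,
				   u64 addr, u32 len, u16 flags)
	{
		struct virtio_device *vdev = vq->vq.vdev;
		/* the free list keeps .next in native endian ... */
		u16 next = vq->vring.desc[i].next;

		vq->vring.desc[i].addr  = cpu_to_virtio64(vdev, addr);
		vq->vring.desc[i].len   = cpu_to_virtio32(vdev, len);
		vq->vring.desc[i].flags = cpu_to_virtio16(vdev, flags);
		/* ... and is only byte-swapped when the entry is handed over */
		vq->vring.desc[i].next  = cpu_to_virtio16(vdev, next);

		return next;	/* the caller keeps walking in native endian */
	}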