Displaying 9 results from an estimated 9 matches for "blk_get_queue".
2010 Apr 05
2
Kernel BUG
2010 Apr 05
0
Kernel BUG
...Not tainted)
Apr 5 09:03:07 zebra kernel:
Apr 5 09:03:07 zebra kernel: Call Trace:
Apr 5 09:03:07 zebra kernel: [<ffffffff8023824d>] kref_get+0x38/0x3d
Apr 5 09:03:07 zebra kernel: [<ffffffff8025a440>] kobject_get+0x12/0x17
Apr 5 09:03:07 zebra kernel: [<ffffffff8024b57f>] blk_get_queue+0x1f/0x26
Apr 5 09:03:07 zebra kernel: [<ffffffff887198f7>]
:blkbk:dispatch_rw_block_io+0x4db/0x5a2
Apr 5 09:03:07 zebra kernel: [<ffffffff80340368>] __next_cpu+0x19/0x28
Apr 5 09:03:07 zebra kernel: [<ffffffff8026082b>] error_exit+0x0/0x6e
Apr 5 09:03:07 zebra kernel: [&l...
2008 Aug 09
4
Upgrade 3.0.3 to 3.2.1
Hi,
I'm preparing to upgrade my servers from Xen 3.0.3 32-bit to 3.2.1 64-bit.
The old system:
Debian 4.0 i386 with the included hypervisor 3.0.3 (PAE) and dom0 kernel.
The new system:
Debian lenny amd64 with the included hypervisor 3.2.1 and the dom0 kernel from
Debian 4.0 amd64.
My domUs have a self-compiled kernel built from the dom0 kernel of the old system
(mainly the dom0 kernel but
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all,
These two patches add virtio-nvme to the kernel and QEMU,
largely adapted from the virtio-blk and NVMe code.
As the title says, this is a request for your comments.
Play it in Qemu with:
-drive file=disk.img,format=raw,if=none,id=D22 \
-device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
The goal is to have a full NVMe stack from VM guest(virtio-nvme)
to host(vhost_nvme) to LIO NVMe-over-fabrics
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...-	uninit_q = blk_alloc_queue_node(GFP_KERNEL, node_id);
+	uninit_q = blk_alloc_queue_node(GFP_KERNEL, node_id, nr_queues);
 	if (!uninit_q)
 		return NULL;
 	q = blk_init_allocated_queue(uninit_q, rfn, lock);
 	if (!q)
 		blk_cleanup_queue(uninit_q);
 	return q;
 }
@@ -631,122 +663,94 @@ bool blk_get_queue(struct request_queue *q)
 	if (likely(!blk_queue_dead(q))) {
 		__blk_get_queue(q);
 		return true;
 	}
 	return false;
 }
 EXPORT_SYMBOL(blk_get_queue);
-static inline void blk_free_request(struct request_queue *q, struct request *rq)
+static inline void blk_free_request(struct blk_queue_ctx *...
2016 Aug 17
20
[PATCH 00/15] Fix issue with KOBJ_ADD uevent versus disk attributes
This is an attempt to fix the issue that some disks' sysfs attributes are not
ready at the time the KOBJ_ADD event is sent.
The symptom is that during device hotplug, udev may fail to find certain attributes,
such as the serial or wwn of the disk. As a result, the /dev/disk/by-id entries are
not created.
The cause is that device_add_disk emits the uevent before returning, and the callers
have to create