Displaying 20 results from an estimated 36 matches for "scsi_don".
2009 Apr 17
0
problem with 5.3 upgrade or just bad timing?
...ome
of it is very slightly different too)
BUG: warning at drivers/ata/libata-core.c:4923/ata_qc_issue() (Tainted: G )
Call Trace:
[<ffffffff880b6625>] :libata:ata_qc_issue+0x61/0x4a9
[<ffffffff880bacf3>] :libata:ata_scsi_rw_xlat+0x119/0x188
[<ffffffff880735a6>] :scsi_mod:scsi_done+0x0/0x18
[<ffffffff880babda>] :libata:ata_scsi_rw_xlat+0x0/0x188
[<ffffffff880baea2>] :libata:ata_scsi_translate+0x140/0x16d
[<ffffffff880735a6>] :scsi_mod:scsi_done+0x0/0x18
[<ffffffff80299dd4>] keventd_create_kthread+0x0/0xc4
[<ffffffff880bda72>] :libata:ata_sc...
2009 Apr 18
2
libata-core kernel errors
...r kernel: Call Trace:
Apr 18 01:10:00 xenmaster kernel: <IRQ> [<ffffffff880b6625>]
:libata:ata_qc_issue+0x61/0x4a9
Apr 18 01:10:00 xenmaster kernel: [<ffffffff880bacf3>]
:libata:ata_scsi_rw_xlat+0x119/0x188
Apr 18 01:10:00 xenmaster kernel: [<ffffffff880735a6>]
:scsi_mod:scsi_done+0x0/0x18
Apr 18 01:10:00 xenmaster kernel: [<ffffffff880babda>]
:libata:ata_scsi_rw_xlat+0x0/0x188
Apr 18 01:10:00 xenmaster kernel: [<ffffffff880baea2>]
:libata:ata_scsi_translate+0x140/0x16d
Apr 18 01:10:00 xenmaster kernel: [<ffffffff880735a6>]
:scsi_mod:scsi_done+0x0/0x18...
2006 Jan 06
2
3ware disk failure -> hang
...:1
Jan 6 01:04:10 $SERVER kernel: [<c011fbe9>] __might_sleep+0x7d/0x88
Jan 6 01:04:10 $SERVER kernel: [<f885f056>] tw_ioctl+0x478/0xb07 [3w_xxxx]
Jan 6 01:04:10 $SERVER kernel: [<c011fec9>] autoremove_wake_function+0x0/0x2d
Jan 6 01:04:10 $SERVER kernel: [<f883f905>] scsi_done+0x0/0x16 [scsi_mod]
Jan 6 01:04:10 $SERVER kernel: [<f8860529>] tw_scsi_queue+0x163/0x1f1 [3w_xxxx]
Jan 6 01:04:10 $SERVER kernel: [<f883f748>] scsi_dispatch_cmd+0x1e9/0x24f [scsi_mod]
Jan 6 01:04:10 $SERVER kernel: [<f884417e>] scsi_request_fn+0x297/0x30d [scsi_mod]
Jan 6...
2020 Jul 09
1
[PATCH 12/24] scsi: virtio_scsi: Demote seemingly unintentional kerneldoc header
....c
index 0e0910c5b9424..56875467e4984 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -100,7 +100,7 @@ static void virtscsi_compute_resid(struct scsi_cmnd *sc, u32 resid)
scsi_set_resid(sc, resid);
}
-/**
+/*
* virtscsi_complete_cmd - finish a scsi_cmd and invoke scsi_done
*
* Called with vq_lock held.
--
2.25.1
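The change above only swaps the comment opener from /** to /*. As a hedged illustration (not taken from the patch itself), the snippet below contrasts the two styles: scripts/kernel-doc only parses comments that open with /**, and W=1 builds warn when such a header does not fully document the function, which is why a seemingly unintentional kerneldoc header gets demoted to a plain comment. Function names and the parameter are hypothetical.
/* Plain comment (what the header is demoted to): kernel-doc ignores it,
 * so an incomplete description no longer produces W=1 warnings.
 */
static void example_plain(void)
{
}
/**
 * example_kdoc() - what a genuine kerneldoc header would need instead
 * @arg: hypothetical parameter, documented as kerneldoc requires
 *
 * Comments opening with two asterisks are parsed by scripts/kernel-doc,
 * and W=1 builds warn if parameters or return values are left undocumented.
 */
static void example_kdoc(int arg)
{
	(void)arg;
}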
2020 Jul 13
2
[PATCH v2 12/24] scsi: virtio_scsi: Demote seemingly unintentional kerneldoc header
....c
index 0e0910c5b9424..56875467e4984 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -100,7 +100,7 @@ static void virtscsi_compute_resid(struct scsi_cmnd *sc, u32 resid)
scsi_set_resid(sc, resid);
}
-/**
+/*
* virtscsi_complete_cmd - finish a scsi_cmd and invoke scsi_done
*
* Called with vq_lock held.
--
2.25.1
2010 Apr 29
2
Hardware error or ocfs2 error?
...000000f8a0 ffff88014baebfd8 00000000000155c0
Apr 29 11:01:18 node06 kernel: [2569440.616161] 00000000000155c0 ffff88014ca38e20 ffff88014ca39118 00000001a0187b86
Apr 29 11:01:18 node06 kernel: [2569440.616192] Call Trace:
Apr 29 11:01:18 node06 kernel: [2569440.616223] [<ffffffffa01878a5>] ? scsi_done+0x0/0xc [scsi_mod]
Apr 29 11:01:18 node06 kernel: [2569440.616245] [<ffffffffa020f0fc>] ? qla2xxx_queuecommand+0x171/0x1de [qla2xxx]
Apr 29 11:01:18 node06 kernel: [2569440.616273] [<ffffffffa018d290>] ? scsi_request_fn+0x429/0x506 [scsi_mod]
Apr 29 11:01:18 node06 kernel: [2569440.6...
2020 Jul 13
0
[PATCH v2 12/24] scsi: virtio_scsi: Demote seemingly unintentional kerneldoc header
...> --- a/drivers/scsi/virtio_scsi.c
> +++ b/drivers/scsi/virtio_scsi.c
> @@ -100,7 +100,7 @@ static void virtscsi_compute_resid(struct scsi_cmnd *sc, u32 resid)
> scsi_set_resid(sc, resid);
> }
>
> -/**
> +/*
> * virtscsi_complete_cmd - finish a scsi_cmd and invoke scsi_done
> *
> * Called with vq_lock held.
> --
> 2.25.1
2005 Jul 06
2
Badness in local_bh_enable at kernel/softirq.c:140
...1 kernel: [<c01205a4>] local_bh_enable+0x68/0x83
Jul 6 15:20:32 iscsi-test1 kernel: [<c8973742>] iscsi_queuecommand+0x173/0x1e3
[iscsi_sfnet]
Jul 6 15:20:32 iscsi-test1 kernel: [<c02c9e7f>] scsi_dispatch_cmd+0x149/0x264
Jul 6 15:20:32 iscsi-test1 kernel: [<c02ca0ee>] scsi_done+0x0/0x26
Jul 6 15:20:32 iscsi-test1 kernel: [<c02cc8d0>] scsi_times_out+0x0/0xa0
Jul 6 15:20:32 iscsi-test1 kernel: [<c02cfa40>] scsi_request_fn+0x1fa/0x4ff
--
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensour...
2013 Feb 12
6
[PATCH v3 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends). The patches build on top of the new virtio APIs
at http://permalink.gmane.org/gmane.linux.kernel.virtualization/18431;
the new APIs simplify the locking of the virtio-scsi driver nicely,
so it makes sense to require them as a prerequisite.
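A minimal sketch of the queue-steering idea (illustrative only; the struct and helper below are assumptions, not code from the series): commands for a target keep using one request virtqueue while that target has I/O in flight, so per-target completion ordering is preserved, and an idle target can migrate to the virtqueue associated with the submitting CPU, which is where the parallelism comes from.
struct example_target {
	unsigned int reqs_in_flight;	/* commands submitted but not yet completed */
	unsigned int cur_vq;		/* index of the request virtqueue in use    */
};
/* Hypothetical helper: choose a request virtqueue for a new command. */
static unsigned int example_pick_vq(struct example_target *tgt,
				    unsigned int this_cpu_vq)
{
	if (tgt->reqs_in_flight == 0)
		tgt->cur_vq = this_cpu_vq;	/* idle target: follow the CPU    */
	tgt->reqs_in_flight++;
	return tgt->cur_vq;			/* busy target: stay on one queue */
}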
2013 Mar 19
6
[PATCH V5 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V5: improving the grammar of 1/5 (Paolo)
move the dropping of sg_elems to 'virtio-scsi: use
2013 Mar 11
7
[PATCH V4 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V4: rebase on virtio ring rework patches (rusty's pending-rebases branch)
V3 can be found
2009 Sep 10
24
[Bug 23847] New: kernel BUG when using nouveau
...x43/0x72
[ 288.525438] [<c044d4b0>] cdrom_media_changed+0x28/0x2e
[ 288.525446] [<c0433e4f>] sr_block_media_changed+0x11/0x13
[ 288.525456] [<c02b1e7d>] check_disk_change+0x19/0x42
[ 288.525464] [<c044f23f>] cdrom_open+0x794/0x7f7
[ 288.525473] [<c0423ba6>] ? scsi_done+0x0/0xd
[ 288.525480] [<c043f4d1>] ? atapi_xlat+0x0/0x15f
[ 288.525491] [<c05bfe39>] ? schedule_timeout+0x17/0xbd
[ 288.525499] [<c0423ba6>] ? scsi_done+0x0/0xd
[ 288.525508] [<c03b0907>] ? kobject_put+0x37/0x3c
[ 288.525518] [<c041bf8a>] ? put_device+0xf/0x...
2013 Mar 20
7
[PATCH V6 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches, which
have already gone into virtio-next today.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V6: rework "redo allocation of target data"
2013 Mar 23
10
[PATCH V7 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches, which
have already gone into virtio-next today.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V7: respin to fix the patch apply error
V6: rework