Displaying 20 results from an estimated 2000 matches similar to: "[PATCH v2] scsi: virtio_scsi: unplug LUNs when events missed"
2019 Sep 04
0
[PATCH] scsi: virtio_scsi: unplug LUNs when events missed
On Tue, Sep 03, 2019 at 05:04:20PM +0000, Matt Lupfer wrote:
> The event handler calls scsi_scan_host() when events are missed, which
> will hotplug new LUNs. However, this function won't remove any
> unplugged LUNs. The result is that hotunplug doesn't work properly when
> the number of unplugged LUNs exceeds the event queue size (currently 8).
>
> Scan existing LUNs
2019 Sep 11
0
[PATCH v2] scsi: virtio_scsi: unplug LUNs when events missed
Matt,
> The event handler calls scsi_scan_host() when events are missed, which
> will hotplug new LUNs. However, this function won't remove any
> unplugged LUNs. The result is that hotunplug doesn't work properly
> when the number of unplugged LUNs exceeds the event queue size
> (currently 8).
>
> Scan existing LUNs when events are missed to check if they are still
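For context, the fix being discussed boils down to walking every LUN the midlayer still has registered whenever the "events missed" flag is seen, and removing the ones that no longer answer. A condensed sketch of that idea (kernel context; the helper name, the vdev->priv assumption and the scsi_execute_req() call are illustrative approximations from memory, not the literal patch):

static void virtscsi_rescan_hotunplug(struct virtio_scsi *vscsi)
{
        struct Scsi_Host *shost = vscsi->vdev->priv;   /* assumed stored by probe */
        unsigned char cmd[MAX_COMMAND_SIZE] = { INQUIRY, 0, 0, 0, 36, 0 };
        unsigned char *buf = kmalloc(36, GFP_KERNEL);  /* DMA-able, not on-stack */
        struct scsi_device *sdev;
        int result;

        if (!buf)
                return;

        shost_for_each_device(sdev, shost) {
                /* Plain INQUIRY to each LUN the kernel still knows about. */
                result = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE,
                                          buf, 36, NULL, 10 * HZ, 3, NULL);

                /* Peripheral qualifier 011b in byte 0 of the INQUIRY data
                 * means nothing is attached at this LUN any more. */
                if (result == 0 && (buf[0] >> 5) == 3)
                        scsi_remove_device(sdev);
        }
        kfree(buf);
}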
2020 Jul 29
3
[PATCH 0/1] virtio-scsi: fix missing unplug events when all LUNs are unplugged at the same time
virtio-scsi currently has a limit of 8 outstanding notifications, so when more than
8 LUNs are unplugged at once, some events are missed.
Commit 5ff843721467 ("scsi: virtio_scsi: unplug LUNs when events missed")
fixed this by checking the 'event overflow' bit and manually scanning the bus
to see which LUNs are still there.
However, there is a corner case when all LUNs are unplugged.
In this case
2020 Jul 29
0
[PATCH 1/1] scsi: virtio-scsi: handle correctly case when all LUNs were unplugged
Commit 5ff843721467 ("scsi: virtio_scsi: unplug LUNs when events missed")
almost fixed the case of mass unplugging of LUNs, but it missed a
corner case in which all the LUNs are unplugged at the same time.
In this case INQUIRY ends with DID_BAD_TARGET.
Detect this and unplug the LUN.
Signed-off-by: Maxim Levitsky <mlevitsk at redhat.com>
---
drivers/scsi/virtio_scsi.c | 10
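Concretely, the extra branch described here slots into the rescan loop sketched earlier: once every LUN is gone, the INQUIRY no longer completes with a "peripheral not present" qualifier but is failed at the transport level, so the loop needs one more check (an approximation of the change, not a quote of the diff):

                if (result == 0 && (buf[0] >> 5) == 3) {
                        /* LUN still answers, but reports "not present". */
                        scsi_remove_device(sdev);
                } else if (result && host_byte(result) == DID_BAD_TARGET) {
                        /* All LUNs unplugged at once: the INQUIRY itself is
                         * rejected with DID_BAD_TARGET, so drop this LUN too. */
                        scsi_remove_device(sdev);
                }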
2007 Nov 07
9
How To change server recovery timeout
Hi,
Our lustre environment is:
2.6.9-55.0.9.EL_lustre.1.6.3smp
I would like to change the recovery timeout from the default value of 250s to
something longer.
I tried the example from the manual:
set_timeout <secs> Sets the timeout (obd_timeout) for a server
to wait before failing recovery.
We performed that experiment on our test lustre installation with one
OST.
storage02 is our OSS
[root at
2012 Jul 06
2
[PATCH] virtio-scsi: Add vdrv->scan for post VIRTIO_CONFIG_S_DRIVER_OK LUN scanning
From: Nicholas Bellinger <nab at linux-iscsi.org>
This patch changes virtio-scsi to use a new virtio_driver->scan() callback
so that scsi_scan_host() can be properly invoked once virtio_dev_probe() has
set add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK) to signal active virtio-ring
operation, instead of from within virtscsi_probe().
This fixes a bug where SCSI LUN scanning for both
2012 Jul 11
2
[PATCH-v2] virtio-scsi: Add vdrv->scan for post VIRTIO_CONFIG_S_DRIVER_OK LUN scanning
From: Nicholas Bellinger <nab at linux-iscsi.org>
This patch changes virtio-scsi to use a new virtio_driver->scan() callback
so that scsi_scan_host() can be properly invoked once virtio_dev_probe() has
set add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK) to signal active virtio-ring
operation, instead of from within virtscsi_probe().
This fixes a bug where SCSI LUN scanning for both
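The shape of the change is easiest to see in the driver-ops table; a rough sketch of the mechanism (names follow the upstream driver but are written from memory, so treat the details as approximate rather than the literal patch):

/* Called by virtio_dev_probe() only after VIRTIO_CONFIG_S_DRIVER_OK has been
 * added, so the request and event virtqueues are already live by the time the
 * initial LUN scan generates I/O. */
static void virtscsi_scan(struct virtio_device *vdev)
{
        struct Scsi_Host *shost = vdev->priv;  /* assumed stored by virtscsi_probe() */

        scsi_scan_host(shost);
}

static struct virtio_driver virtio_scsi_driver = {
        .driver.name    = "virtio_scsi",
        .id_table       = id_table,
        .probe          = virtscsi_probe,      /* no longer calls scsi_scan_host() itself */
        .scan           = virtscsi_scan,       /* the new callback this patch introduces */
        .remove         = virtscsi_remove,
};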
2010 Sep 02
2
blk_rq_check_limits errors
On Thursday, September 02, 2010, Frank Heckes wrote:
> Hi all,
>
> for some of our OSSes a massive number of errors like:
>
> Sep 2 20:28:15 jf61o02 kernel: blk_rq_check_limits: over max size
> limit.
>
> are appearing in /var/log/messages (and dmesg). Does anyone have a clue
> how to get at the root cause? Many thanks in advance.
linux/block/blk-core.c
int
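For reference, that message comes from blk_rq_check_limits() in block/blk-core.c, which runs when an already-built request is re-inserted into another queue (as dm-multipath does on top of the OST disks), so it usually points at a max_sectors mismatch between the stacked device and the underlying disk. A condensed paraphrase from memory of the 2.6.x code; the exact helpers vary between kernel versions:

int blk_rq_check_limits(struct request_queue *q, struct request *rq)
{
        /* A cloned request built against one queue's limits is being
         * resubmitted here; reject it if it exceeds this queue's limits. */
        if (blk_rq_sectors(rq) > queue_max_sectors(q) ||
            blk_rq_bytes(rq) > queue_max_hw_sectors(q) << 9) {
                printk(KERN_ERR "%s: over max size limit.\n", __func__);
                return -EIO;
        }

        /* The real function also re-counts the segments against this
         * queue and can fail with "over max segments limit". */
        return 0;
}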
2003 Jul 23
1
Qlogic Fibrechannel and detecting new LUNs on the fly
Hi there,
I've installed a Qlogic HBA in a 4.8-STABLE machine and I'm able to present
LUNs to this machine from an HDS array and use them. That all works
perfectly. However, the only time I pick up new LUNs is on a reboot.
Seems like I should be able to rescan the fabric and pick up new LUNs, but
there doesn't seem to be a way to do this. I tried camcontrol rescan, but
that
2008 Nov 30
1
pvscsi and report luns command
I'm updating the gplpv pvscsi driver to support the pvscsi driver in
3.3, and it's not working properly.
The first command Windows issues is a 'report luns' command, with a 16-byte
buffer. The response that Dom0 gives me says that this was executed
successfully. Windows then issues another 'report luns' command that is
also returned from Dom0
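The exchange itself is plain SPC REPORT LUNS regardless of the frontend: a 12-byte CDB (opcode 0xA2) carrying a 4-byte allocation length, and a response whose first 4 bytes give the LUN-list length, which is why Windows probes with a 16-byte buffer first (8-byte header plus one 8-byte LUN) and then reissues the command sized to the full list. A small user-space sketch of that two-step probe via Linux SG_IO (the device path and buffer sizes are illustrative only):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

static int report_luns(int fd, uint8_t *buf, uint32_t len)
{
        uint8_t cdb[12] = { 0xa2 };             /* REPORT LUNS */
        uint8_t sense[32];
        struct sg_io_hdr io;

        cdb[6] = len >> 24; cdb[7] = len >> 16; /* allocation length, */
        cdb[8] = len >> 8;  cdb[9] = len;       /* big endian          */

        memset(&io, 0, sizeof(io));
        io.interface_id = 'S';
        io.cmd_len = sizeof(cdb);
        io.cmdp = cdb;
        io.dxfer_direction = SG_DXFER_FROM_DEV;
        io.dxferp = buf;
        io.dxfer_len = len;
        io.sbp = sense;
        io.mx_sb_len = sizeof(sense);
        io.timeout = 5000;                      /* milliseconds */

        return ioctl(fd, SG_IO, &io);
}

int main(void)
{
        uint8_t hdr[16], full[4096];
        uint32_t list_len;
        int fd = open("/dev/sg0", O_RDWR);      /* illustrative device */

        if (fd < 0 || report_luns(fd, hdr, sizeof(hdr)) < 0)
                return 1;

        /* Bytes 0-3 of the response: LUN list length in bytes (8 per LUN). */
        list_len = (uint32_t)hdr[0] << 24 | hdr[1] << 16 | hdr[2] << 8 | hdr[3];
        printf("%u LUN(s) reported\n", list_len / 8);

        /* Second pass, now with room for the header plus the whole list. */
        if (report_luns(fd, full, sizeof(full)) == 0)
                printf("full LUN list retrieved\n");
        return 0;
}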
2010 Sep 10
11
Large directory performance
We have been struggling with our Lustre performance for some time now, especially with large directories. I recently did some informal benchmarking (on a live system, so I know the results are not scientifically valid) and noticed a huge drop in the performance of reads (stat operations) past 20k files in a single directory. I'm using bonnie++, disabling IO testing (-s 0) and just creating, reading,
2014 Oct 04
2
Mounting LUNs from a SAN array - LUN mappings to devices in /dev/ - are they static?
Hi All :)
I am currently involved in a project in which there is a SAN array (Sun
Storagetek 2540) which exports LUNs for some servers with Centos 5.2 x86. I
will be performing a migration to Centos 5.9 x86_64 in some time and am
gathering needed info now :)
I am trying to find the place in the OS where the information about LUN
mappings to /dev/ devices is kept.
For example on array level I
2007 Jul 05
0
Centos 4 and LUNs
Hi,
How many LUNs does CentOS 4 support?
We have Dells with 2 HBAs and multipath working; from what I read, I can only have 128 LUNs. So with multipath can I only have 64 devices?
2004 Aug 24
3
Bell Canada Caller-ID
Has anyone gotten CID from Bell Canada to work properly with *?
We have our * box down at our datacentre in St Louis, and whenever we
call it from a Bell Canada Telephone line, all we see is '' for the CID.
I did some digging on Google and the mailing lists but didn't find much
pertaining directly to Bell Canada and * CID. I did, however, find:
2011 Feb 01
1
Setting up persistent LUNs
Hello everyone,
I am trying to set up persistent LUNs and am having problems.
I've been following instructions I found on the web, and they refer to editing the /etc/scsi_id.config file and adding an options=-g line there. After doing so, I should be able to run scsi_id -g -s /dev/sd* and get proper results.
I've modified the /etc/scsi_id.config file appropriately:
[root at psrwjmsafs1 etc]# grep
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understand, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get RAID0 striping where each data block is split across all "n" LUNs.
If that's
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For ongoing maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
2018 Sep 21
2
[PATCH] vhost/scsi: truncate T10 PI iov_iter to prot_bytes
On Wed, Aug 22, 2018 at 01:21:53PM -0600, Greg Edwards wrote:
> Commands with protection information included were not truncating the
> protection iov_iter to the number of protection bytes in the command.
> This resulted in vhost_scsi mis-calculating the size of the protection
> SGL in vhost_scsi_calc_sgls(), and including both the protection and
> data SG entries in the protection
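The fix described here amounts to a one-line cap on the protection iterator before the SGL is sized; roughly (kernel context; prot_iter and prot_bytes are the names used in the commit text, and the exact placement inside vhost_scsi_handle_vq() is written from memory):

        /* Limit the protection iterator to the PI bytes this command actually
         * carries, so vhost_scsi_calc_sgls() stops counting the data part of
         * the iovec as protection SG entries. */
        iov_iter_truncate(&prot_iter, prot_bytes);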