Eugene Vilensky wrote:
> Greetings,
>
> I've hit this exact 'bug':
>
> https://bugzilla.redhat.com/show_bug.cgi?id=491311
>
> I need to remove the mappings manually. I assume this is done via
> 'multipath -F' followed by a 'multipath -v2'? Has anyone experienced
> doing this on a production system? We can do it during hours of low
> activity, but we would prefer to keep the databases on this host
> online at the time. The LUNs themselves are completely removed from
> the host and are not visible on the HBA bus.
Just wondering what sort of impact this has on your system? If the
paths are gone they won't be used, so what does it matter?
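For what it's worth, the manual flush/rescan you mention would be
roughly this (a sketch; 'multipath -F' only flushes maps that are
not in use, and 'multipath -f <mapname>' flushes a single map):

  # flush all unused multipath maps
  /sbin/multipath -F
  # recreate maps from the currently visible paths (verbosity 2)
  /sbin/multipath -v2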
I have a couple of batch processes that run nightly:
- One takes a consistent snapshot of a standby mysql database and
exports the read-write snapshot to a QA host
- The other takes another consistent snapshot of a standby mysql
database and exports the read-write snapshot to a backup server
In both cases the process involves removing the previous snapshots
from the QA and backup servers respectively, before re-creating new
snapshots and presenting them back to the original servers on the
same LUN IDs. As part of the process I delete *all* device mappings
for the snapshotted LUNs on the destination servers with these
commands:
# First pass: remove the partition mappings (the "p1" maps) so the
# parent devices are no longer held open by them.
for i in `/sbin/dmsetup ls | grep p1 | awk '{print $1}'`; do
    dmsetup remove $i
done
# Second pass: remove the remaining device maps; anything still in
# use (e.g. mounted LVM volumes) will simply refuse to go (see below).
for i in `/sbin/dmsetup ls | awk '{print $1}'`; do
    dmsetup remove $i
done
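If you want to be a little more careful, an untested variant of the
same loop could check each map's open count first and skip anything
device mapper reports as busy:

  for i in `/sbin/dmsetup ls | awk '{print $1}'`; do
      # "open" is the map's open reference count
      if [ "`/sbin/dmsetup info -c --noheadings -o open $i`" = "0" ]; then
          dmsetup remove $i
      fi
  done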
Then I just restart the multipathing service after I present the
new LUNs to the system. Both systems have been doing this daily
for about two months now and it works great.
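"Restart the multipathing service" here is nothing fancy; from
memory it is roughly this (treat it as a sketch, not gospel):

  service multipathd restart
  # rebuild the maps for the newly presented LUNs and verify
  /sbin/multipath -v2
  /sbin/multipath -ll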
In these two cases I am currently using VMware raw device mapping
on the remote systems, so while I'm using multipath there is only
one path (visible to the VM; the MPIO is handled by the host). Prior
to that I used software iSCSI on CentOS 5.2 (no 5.3 yet) and did
the same thing, because I found restarting software iSCSI on CentOS
5.2 to be unreliable (more than one kernel panic during testing).
The reason I use MPIO even with only one path is so that I can
maintain a consistent configuration across systems: I don't need to
worry about which host has one path, two, or four, I treat them all
the same, since multipathing is automatic.
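The other nice thing is the same minimal /etc/multipath.conf can be
pushed to every host; mine is roughly like this (a from-memory
sketch, the blacklist in particular depends on your hardware):

  defaults {
          user_friendly_names no
  }
  blacklist {
          # don't try to multipath local/virtual devices
          devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
  }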
On CentOS 4.x with software iSCSI I didn't remove the paths, I just
let them go stale. I restarted software iSCSI and multipath as part
of the snapshot process (software iSCSI is more solid as far as
restarting goes under 4.x; we had two panics in six months with
multiple systems restarting every day). Thankfully I use LVM,
because the device names changed all the time; at some point I was
up to something like /dev/sddg.
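That is also why I point LVM only at the multipath maps and not at
the raw /dev/sd* paths; something like this in /etc/lvm/lvm.conf
does it (a sketch, the exact filter depends on your layout):

  devices {
      # accept only the device-mapper names, reject everything else
      filter = [ "a|^/dev/mapper/.*|", "r|.*|" ]
  }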
But whether you're removing dead paths or even restarting multipath
on a system to detect new ones, I have not seen it have any
noticeable impact on the system, production or not.
I think device mapper will even prevent you from removing a device
that is still in use:
[root@dc1-backup01:/var/log-ng]# dmsetup ls
350002ac005ce0714 (253, 0)
350002ac005d00714 (253, 2)
350002ac005d00714p1 (253, 10)
350002ac005cf0714 (253, 1)
350002ac005d10714 (253, 4)
350002ac005ce0714p1 (253, 7)
san--p--mysql002b--db-san--p--mysql002b--db (253, 17)
350002ac005d10714p1 (253, 9)
350002ac005d20714 (253, 3)
san--p--mysql002b--log-san--p--mysql002b--log (253, 13)
san--p--pd1mysql001b--log-san--p--pd1mysql001b--log (253, 14)
san--p--mysql001b--log-san--p--mysql001b--log (253, 16)
350002ac005d30714 (253, 5)
350002ac005cf0714p1 (253, 8)
350002ac005d20714p1 (253, 6)
san--p--pd1mysql001b--db-san--p--pd1mysql001b--db (253, 12)
san--p--mysql001b--db-san--p--mysql001b--db (253, 15)
350002ac005d30714p1 (253, 11)
[root@dc1-backup01:/var/log-ng]# dmsetup remove 350002ac005d20714p1
device-mapper: remove ioctl failed: Device or resource busy
Command failed
nate