Hi Phil,

When I executed these commands, I had already unmounted the client successfully. There are no remaining client connections; only one client was ever mounted. I will send you the information from /var/log/messages. Thank you very much.

I think maybe I should wait a while after unmounting the client before stopping the MDS and OST :-)
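For reference, the ordering I plan to try can be sketched as below. The node names, giga.xml, and /tmp backing files come from the logs earlier in this thread; the /mnt/lustre mount point is from my first message, and the 30-second wait is my own guess:

```shell
# Sketch of the shutdown order (assumption: 30 seconds is enough
# for the servers to notice the client has disconnected).

# 1. On the client: unmount Lustre first.
umount /mnt/lustre

# 2. Wait before cleaning up the MDS/OST.
sleep 30

# 3. On the servers: clean up services (add -f only if clients remain).
lconf -d -v --node node7 giga.xml    # node7: MDS + OST
lconf -d -v --node node8 giga.xml    # node8: OST

# If "losetup -d" still reports "Device or resource busy", check what
# is holding the loop device before forcing anything:
losetup /dev/loop0                   # show the backing file, if any
fuser -v /tmp/ost /tmp/mds           # list processes using the files
```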
>From: Phil Schwan <phil@clusterfs.com>
>To: Tad Lake <tad_lake@hotmail.com>
>CC: lustre-discuss@lists.clusterfs.com
>Subject: Re: [Lustre-discuss] problem on umount
>Date: Fri, 28 May 2004 09:15:51 -0400
>
>Hi Tad--
>
>You didn't tell me if there are any Lustre messages on the console of
>these nodes (or in /var/log/messages). But this might be enough, I'm
>not completely sure.
>
>Which node is your client node? How are you cleaning it up?
>
>This lconf output hints that your MDS and OSSs have clients which are
>still connected (the console output would perhaps confirm this). If
>clients are still connected, you have to shut down the services with -f.
>
>Did you reboot some clients without unmounting them, or unmount them
>with -f?
>
>When you unmount clients or the MDS with -f, it will not attempt to
>clean up its connections gracefully. Some day we will probably fix this
>to try just once, but it's not all that serious an issue.
>
>-Phil
>
>On Fri, 2004-05-28 at 04:54, Tad Lake wrote:
> > Hi, this is my example for umount.
> > node7 is the MDS and an OST, while node8 is an OST. I use -v in lconf.
> > Unmounting the client is OK.
> > Could you give me some advice?
> > The sequence is :
> > node7 : lconf -d -v --node node7 giga.xml
> > lconf -d -v -f --node node7 giga.xml
> > node8 : lconf -d -v --node node8 giga.xml
> > lconf -d -v -f --node node8 giga.xml
> >
> >
> > node7:
> > lconf -d -v --node node7 giga.xml
> > configuring for host: ['node7']
> > add_local NET_node7_tcp_UUID
> > find_local_routes: []
> > Service: mdsdev MDD_mds1_node7 MDD_mds1_node7_UUID
> > MDSDEV: mds1 mds1_UUID
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $mds1
> > cleanup
> > detach
> > quit
> > + losetup /dev/loop0
> > + losetup /dev/loop1
> > + losetup -d /dev/loop1
> > unable to clean loop device: /dev/loop1 for file: /tmp/mds
> > ioctl: LOOP_CLR_FD: Device or resource busy
> > Service: osd OSD_OST_node7_node7 OSD_OST_node7_node7_UUID
> > OSD: OST_node7 OST_node7_UUID
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $OST_node7
> > cleanup
> > detach
> > quit
> > + losetup /dev/loop0
> > + losetup -d /dev/loop0
> > unable to clean loop device: /dev/loop0 for file: /tmp/ost
> > ioctl: LOOP_CLR_FD: Device or resource busy
> > Service: ldlm ldlm ldlm_UUID
> > Service: network NET_node7_tcp NET_node7_tcp_UUID
> > Service: mdsdev MDD_mds1_node7 MDD_mds1_node7_UUID
> > Service: osd OSD_OST_node7_node7 OSD_OST_node7_node7_UUID
> > Service: ldlm ldlm ldlm_UUID
> > unloading module: ptlrpc
> > + /sbin/rmmod ptlrpc
> > ! unable to unload module: ptlrpc
> > ptlrpc: Device or resource busy
> > unloading module: obdclass
> > + /sbin/rmmod obdclass
> > ! unable to unload module: obdclass
> > obdclass: Device or resource busy
> > unloading module: lvfs
> > + /sbin/rmmod lvfs
> > ! unable to unload module: lvfs
> > lvfs: Device or resource busy
> > Service: network NET_node7_tcp NET_node7_tcp_UUID
> >
> >
> > lconf -d -v -f --node node7 giga.xml
> > configuring for host: ['node7']
> > add_local NET_node7_tcp_UUID
> > find_local_routes: []
> > + /usr/sbin/lctl
> > set_timeout 5
> > quit
> > Service: mdsdev MDD_mds1_node7 MDD_mds1_node7_UUID
> > MDSDEV: mds1 mds1_UUID
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $mds1
> > cleanup force
> > detach
> > quit
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $MDT
> > cleanup force
> > detach
> > quit
> > + losetup /dev/loop0
> > + losetup /dev/loop1
> > + losetup -d /dev/loop1
> > Service: osd OSD_OST_node7_node7 OSD_OST_node7_node7_UUID
> > OSD: OST_node7 OST_node7_UUID
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $OST_node7
> > cleanup force
> > detach
> > quit
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $OSS
> > cleanup force
> > detach
> > quit
> > + losetup /dev/loop0
> > + losetup -d /dev/loop0
> > Service: ldlm ldlm ldlm_UUID
> > Service: network NET_node7_tcp NET_node7_tcp_UUID
> > NETWORK: NET_node7_tcp NET_node7_tcp_UUID tcp node7 988
> > killing process 2941
> > unable to kill acceptor
> > Service: mdsdev MDD_mds1_node7 MDD_mds1_node7_UUID
> > unloading module: fsfilt_ext3
> > + /sbin/rmmod fsfilt_ext3
> > unloading module: mds
> > + /sbin/rmmod mds
> > unloading module: lov
> > + /sbin/rmmod lov
> > unloading module: osc
> > + /sbin/rmmod osc
> > unloading module: mdc
> > + /sbin/rmmod mdc
> > Service: osd OSD_OST_node7_node7 OSD_OST_node7_node7_UUID
> > unloading module: obdfilter
> > + /sbin/rmmod obdfilter
> > unloading module: ost
> > + /sbin/rmmod ost
> > Service: ldlm ldlm ldlm_UUID
> > unloading module: ptlrpc
> > + /sbin/rmmod ptlrpc
> > unloading module: obdclass
> > + /sbin/rmmod obdclass
> > unloading module: lvfs
> > + /sbin/rmmod lvfs
> > Service: network NET_node7_tcp NET_node7_tcp_UUID
> > unloading module: ksocknal
> > + /sbin/rmmod ksocknal
> > unloading module: portals
> > + /sbin/rmmod portals
> >
> >
> >
> > node8
> > lconf -d -v --node node8 giga.xml
> > configuring for host: ['node8']
> > add_local NET_node8_tcp_UUID
> > find_local_routes: []
> > Service: osd OSD_OST_node8_node8 OSD_OST_node8_node8_UUID
> > OSD: OST_node8 OST_node8_UUID
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $OST_node8
> > cleanup
> > detach
> > quit
> > + losetup /dev/loop0
> > + losetup -d /dev/loop0
> > unable to clean loop device: /dev/loop0 for file: /tmp/ost
> > ioctl: LOOP_CLR_FD: Device or resource busy
> > Service: ldlm ldlm ldlm_UUID
> > Service: network NET_node8_tcp NET_node8_tcp_UUID
> > Service: osd OSD_OST_node8_node8 OSD_OST_node8_node8_UUID
> > Service: ldlm ldlm ldlm_UUID
> > unloading module: ptlrpc
> > + /sbin/rmmod ptlrpc
> > ! unable to unload module: ptlrpc
> > ptlrpc: Device or resource busy
> > unloading module: obdclass
> > + /sbin/rmmod obdclass
> > ! unable to unload module: obdclass
> > obdclass: Device or resource busy
> > unloading module: lvfs
> > + /sbin/rmmod lvfs
> > ! unable to unload module: lvfs
> > lvfs: Device or resource busy
> > Service: network NET_node8_tcp NET_node8_tcp_UUID
> >
> >
> >
> > lconf -d -v -f --node node8 giga.xml
> > configuring for host: ['node8']
> > add_local NET_node8_tcp_UUID
> > find_local_routes: []
> > + /usr/sbin/lctl
> > set_timeout 5
> > quit
> > Service: osd OSD_OST_node8_node8 OSD_OST_node8_node8_UUID
> > OSD: OST_node8 OST_node8_UUID
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $OST_node8
> > cleanup force
> > detach
> > quit
> > + /usr/sbin/lctl
> > ignore_errors
> > cfg_device $OSS
> > cleanup force
> > detach
> > quit
> > + losetup /dev/loop0
> > + losetup -d /dev/loop0
> > Service: ldlm ldlm ldlm_UUID
> > Service: network NET_node8_tcp NET_node8_tcp_UUID
> > NETWORK: NET_node8_tcp NET_node8_tcp_UUID tcp node8 988
> > killing process 2699
> > unable to kill acceptor
> > Service: osd OSD_OST_node8_node8 OSD_OST_node8_node8_UUID
> > unloading module: obdfilter
> > + /sbin/rmmod obdfilter
> > unloading module: fsfilt_ext3
> > + /sbin/rmmod fsfilt_ext3
> > unloading module: ost
> > + /sbin/rmmod ost
> > Service: ldlm ldlm ldlm_UUID
> > unloading module: ptlrpc
> > + /sbin/rmmod ptlrpc
> > unloading module: obdclass
> > + /sbin/rmmod obdclass
> > unloading module: lvfs
> > + /sbin/rmmod lvfs
> > Service: network NET_node8_tcp NET_node8_tcp_UUID
> > unloading module: ksocknal
> > + /sbin/rmmod ksocknal
> > unloading module: portals
> > + /sbin/rmmod portals
> >
> >
> >
> > >From: Phil Schwan <phil@clusterfs.com>
> > >To: Tad Lake <tad_lake@hotmail.com>
> > >CC: lustre-discuss@lists.clusterfs.com
> > >Subject: Re: [Lustre-discuss] problem on umount
> > >Date: Mon, 24 May 2004 15:35:30 -0400
> > >
> > >On Tue, 2004-05-18 at 09:44, Tad Lake wrote:
> > > > I have installed Lustre 1.0.4 on an ia64 machine. In my config
> > > > file I have 1 OST, 1 MDS, and 1 client.
> > > > But when I use
> > > > lconf -d --node client xxx.xml
> > > > it tells me the device is busy.
> > > > Why does this happen?
> > > > I am sure there is no thread using /mnt/lustre, and it is empty.
> > >
> > >It's hard to say without more information; what is the exact output
> > >of lconf? Add "-v" for more detail. Which messages are on your
> > >console, or in /var/log/messages?
> > >
> > >-Phil
> > >
> > >_______________________________________________
> > >Lustre-discuss mailing list
> > >Lustre-discuss@lists.clusterfs.com
> > >https://lists.clusterfs.com/mailman/listinfo/lustre-discuss
> >
> >