Hi all,

I accidentally added an OST using an fsname belonging to another filesystem than what was intended. I have deactivated it on both the MDS and the clients. Now I would like to erase all traces of it and reformat the OST. As far as I can see, the only option is to deactivate it, thus preventing further use of it. How can I completely delete it? I guess rm -rf /proc/fs/lustre/osc/fs-OST0004-osc/ is not an option.

Thanks,
ThomasJ
Hi,

On 2010-11-17, at 5:18 PM, Thomas Johansson wrote:

> Hi all,
>
> I accidentally added an ost using an fsname belonging to another fs than what was intended.

I am not sure I understand - do you have multiple filesystems sharing the same MGS? What exactly were the steps by which you used an OST for another filesystem?

> I have deactivated it on both mds and clients. Now I would like to erase all tracks of it and
> reformat the ost.
> As far as I can see the only option is to deactivate it, thus preventing further use of it.
> How can I completely delete it ?
> I guess rm -rf /proc/fs/lustre/osc/fs-OST0004-osc/ is not an option.

To remove an OST, first you need to temporarily deactivate the corresponding osc device on your MDT, then move the files that have objects on that OST to a different directory, then permanently deactivate the OST using "lctl conf_param".

> Thanks,
> ThomasJ
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
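The procedure described above can be sketched roughly as follows. This is a non-authoritative sketch: the OST/osc name fs-OST0004 is taken from the original post, while the MDS device number and the client mount point /mnt/fs are assumptions for illustration.

```shell
# On the MDS: find the osc device for the OST and temporarily deactivate it,
# so no new objects are allocated there.
lctl dl | grep fs-OST0004-osc        # note the device number, e.g. 11
lctl --device 11 deactivate

# On a client: locate files with objects on that OST and rewrite them so
# they re-stripe onto the remaining active OSTs (copy-then-rename).
lfs find --obd fs-OST0004_UUID /mnt/fs | while read f; do
    cp "$f" "$f.tmp" && mv "$f.tmp" "$f"
done

# Finally, permanently deactivate the OST in the configuration log
# (run on the MGS node).
lctl conf_param fs-OST0004.osc.active=0
```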
Thanks Wang,

> Hi,
>
> On 2010-11-17, at 5:18 PM, Thomas Johansson wrote:
>
>> I accidentally added an ost using an fsname belonging to another fs than
>> what was intended.
>
> I am not sure I understand - Do you have multiple filesystems sharing the
> same MGS?

Yes, 5 filesystems on 4 OSSes and 2 MDSes in active/passive failover. Some 100 TB of space in total.

> What exactly are the steps that you use an OST for another filesystem?

The steps? We need to keep them separate, for various reasons.

> To remove an OST, first you need to deactivate (temporarily) the
> corresponding osc device on your MDT, then move out files that have
> objects on that OST to a different dir, then permanently deactivate the
> OST using "lctl conf_param".

The deactivation part has already been done, on all parts. As this was a mistake, I simply want the presence of the ost/osc gone, never to show up again. Right now it shows up as INACTIVE as part of the filesystem I mistakenly added it to. As it seems, this can't be done. Or can it?

BR
ThomasJ
Hello,

On 2010-11-19, at 3:21 PM, Thomas Johansson wrote:

>>> I accidentally added an ost using an fsname belonging to another fs than
>>> what was intended.
>>
>> I am not sure I understand - Do you have multiple filesystems sharing the
>> same MGS?
>
> Yes 5 filesystems on 4 OSS:s and 2 MDS in active/passive failover.
> Some 100 TB of space in total.

Probably you misunderstood me. You seem to be using one filesystem (2 MDS / 4 OSS) with 5 clients. Making 5 Lustre filesystems out of 4 OSSes and 2 MDSes would be mission impossible.

> The deactivation part has already been done, on all parts.
> As this was a mistake, I simply want the presence of the ost/osc out,
> not ever showing up again. Now it's showing up as INACTIVE as a part
> of the filesystem I mistakenly added it to.
> As it seems, this can't be done. Or can it?

An OST can only be deactivated rather than removed, so that you can reactivate it later on. Currently there is no way to really remove an OST from the system, as that involves complicated steps that cannot be done in an 'atomic' way. If you are very uncomfortable with the INACTIVE message, you can shut down your filesystem, do a writeconf on all your server targets, and then restart them in the proper order to regenerate the configuration logs. Make sure all files/data striped over the inactive OST have been moved off before doing that. Note that writeconf is not designed for this purpose, so please use it with caution.
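The writeconf procedure mentioned above can be sketched as follows. Device paths and mount points here are hypothetical; the whole filesystem must be stopped (clients unmounted, servers down) before running this, and it carries the caveats noted above.

```shell
# With all clients unmounted and all server targets stopped, erase the
# old configuration logs so they are regenerated at next mount.
# On the MDS (device path is an assumption):
tunefs.lustre --writeconf /dev/mapper/mdt_dev

# On each OSS, for every OST belonging to this filesystem:
tunefs.lustre --writeconf /dev/mapper/ost_dev

# Restart in the proper order: MGS/MDT first, then the OSTs, then clients.
mount -t lustre /dev/mapper/mdt_dev /mnt/lustre/mdt
mount -t lustre /dev/mapper/ost_dev /mnt/lustre/ost0
```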
Wang Yibin wrote:

> Hello,
>
> On 2010-11-19, at 3:21 PM, Thomas Johansson wrote:
>
>>> I am not sure I understand - Do you have multiple filesystems
>>> sharing the same MGS?
>>
>> Yes 5 filesystems on 4 OSS:s and 2 MDS in active/passive failover.
>> Some 100 TB of space in total.
>
> Probably you misunderstood me. You seem to be using one
> filesystem (2 MDS / 4 OSS) with 5 clients.
> Making 5 lustre filesystems out of 4 OSS / 2 MDS is mission impossible.

No, while it is not often done, there is nothing to prevent 5 Lustre file systems from running on 4 OSS nodes and 2 MDS nodes. In addition to the MGS, each file system needs one MDT and 1 or more OSTs. An OSS can serve up OSTs for multiple file systems, and an MDS node can serve up MDTs for multiple file systems (and a node could even be both an MDS and OSS at the same time).

Now, if there were a separate MGS for each file system, then it would be a different story... each node can really only serve up OSTs or MDTs for a single MGS.

Kevin
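As a sketch of the layout Kevin describes, two filesystems can share one MGS while their MDTs and OSTs live on the same server nodes. The hostnames, device paths, and fsnames below are all hypothetical, chosen only to illustrate the formatting commands:

```shell
# One MGS for the whole site (on node mds1):
mkfs.lustre --mgs /dev/sda

# MDTs for two different filesystems, both served by the same MDS node
# and registered against the same MGS:
mkfs.lustre --fsname=fsa --mdt --mgsnode=mds1@tcp /dev/sdb
mkfs.lustre --fsname=fsb --mdt --mgsnode=mds1@tcp /dev/sdc

# A single OSS can likewise serve OSTs belonging to both filesystems:
mkfs.lustre --fsname=fsa --ost --index=0 --mgsnode=mds1@tcp /dev/sdd
mkfs.lustre --fsname=fsb --ost --index=0 --mgsnode=mds1@tcp /dev/sde
```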
We at our site have 3 filesystems sharing one MGS and, importantly, have one node serving all the MDTs without any problems.

2010/11/23 Kevin Van Maren <kevin.van.maren at oracle.com>:

> No, while it is not often done, there is nothing to prevent 5 Lustre
> file systems from running on 4 OSS nodes and 2 MDS nodes.
> In addition to the MGS, each file system needs one MDT and 1 or more
> OSTs. An OSS can serve up OSTs for multiple file systems, and an MDS
> node can serve up MDTs for multiple file systems (and a node could even
> be both an MDS and OSS at the same time).
>
> Now, if there were a separate MGS for each file system, then it would be
> a different story... each node can really only serve up OSTs or MDTs for
> a single MGS.
>
> Kevin

--
Regards
Rishi Pathak
National PARAM Supercomputing Facility
Center for Development of Advanced Computing (C-DAC)
Pune University Campus, Ganesh Khind Road
Pune, Maharashtra