Hi, jumping in here. I see folks saying that taking down a single OSS causes clients to lose access to the file system until the OSS is back up and has completed recovery. I agree, but only for the OSTs on that OSS. Other OSTs on different OSSes, if one set it up that way, will still be accessible.

Our real-world example, running 64-bit CentOS with the 2.6.18-53.1.13.el5_lustre.1.6.4.3smp kernel (I know it's old; a Lustre upgrade is in my plans), has an MGS serving three MDTs, one per OSS. That is usual and customary. Right now one OSS is down (a power supply failure over the weekend; those go out in groups, like car headlights). I could not unmount it properly because it cannot be powered up to respond to any requests. So I ignore it for now, and on the clients I remove it from the /etc/mtab file. That is all. The other OSSes, with their associated MDTs, are fine for the users:

[larkoc@crew01 ~]$ df -h /crew2
Filesystem            Size  Used Avail Use% Mounted on
ic-mds1@o2ib:/crew2    19T   14T  4.3T  77% /crew2
[larkoc@crew01 ~]$ df -h /crewdat
Filesystem            Size  Used Avail Use% Mounted on
ic-mds1@o2ib:/crew8    76T   17T   55T  24% /crewdat

We have added storage to an OSS by telling it the MDT and file system name when initializing the array, so expansion is possible. We just did not configure our Lustre as a single OSS/MDT set-up.

I have also successfully mounted an OSS minus a bad sub-array using the "Dilger Method" of deactivating the bad OST. That info was in an earlier post. The result was that part of the OSS was available for recovery/use, retrieving the files stored only on the remaining OSTs.

I have found the Lustre file system to be very scalable and relatively robust (my location has experienced some major hardware failures, some catastrophic, completely outside the range and scope of Lustre).

My $.02,
megan
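
P.S. For anyone who wants concrete commands: editing /etc/mtab only keeps df from hanging on the dead target. The cleaner way, as I understand it, is to deactivate the matching OSC device on each client so the client stops waiting on the missing OST. A rough sketch; the device name and number here are invented for illustration, so take the real ones from lctl dl on your own client:

    # find the client's OSC device that points at the dead OST
    lctl dl | grep osc
    #  7 UP osc crew2-OST0002-osc-ffff81012d3cf800 <client_UUID> 5
    # deactivate it by device number so I/O errors out instead of blocking
    lctl --device 7 deactivate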
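
On the expansion point, "telling it the MDT and name when initializing the array" is the mkfs.lustre step. A minimal sketch, assuming the new array shows up as /dev/sdc on the OSS and the MGS is our combined ic-mds1 node (the device and mount point are assumptions):

    # format the new array as an OST belonging to the crew2 file system
    mkfs.lustre --ost --fsname=crew2 --mgsnode=ic-mds1@o2ib /dev/sdc
    # mount it on the OSS; clients start striping onto it once it registers
    mkdir -p /mnt/crew2-ost5
    mount -t lustre /dev/sdc /mnt/crew2-ost5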
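
And the "Dilger Method" boils down to deactivating the bad OST on the MDS before bringing the rest of the OSS back up, so the MDS stops allocating new file objects there while the surviving OSTs can be mounted and read. Again a sketch with an invented device number; check lctl dl on your own MDS:

    # on the MDS, find the OSC device for the bad OST
    lctl dl | grep osc
    # 11 UP osc crew2-OST0003-osc crew2-mdtlov_UUID 5
    # deactivate it; files striped only over the other OSTs stay readable
    lctl --device 11 deactivate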