hi all,

Is it possible to duplicate some files, or the files in a directory, across nodes, or something like that?

So if I write a file it should be made available on node1 and node2, and if node1 dies I can still find it on node2.

Is this possible?

Thank you,

tamas
On Fri, 2008-07-04 at 16:34 +0200, Papp Tamás wrote:
> hi all,
>
> Is it possible to duplicate some files, or the files in a directory,
> across nodes, or something like that?
>
> So if I write a file it should be made available on node1 and node2,
> and if node1 dies I can still find it on node2.

Your question is not making much sense to me. Let me say that Lustre is
a distributed filesystem. All files are available to all (client)
nodes.

b.
Brian J. Murrell wrote:
> On Fri, 2008-07-04 at 16:34 +0200, Papp Tamás wrote:
>> Is it possible to duplicate some files, or the files in a directory,
>> across nodes, or something like that?
>>
>> So if I write a file it should be made available on node1 and node2,
>> and if node1 dies I can still find it on node2.
>
> Your question is not making much sense to me. Let me say that Lustre is
> a distributed filesystem. All files are available to all (client)
> nodes.

I'd like something like this, from
http://osdir.com/ml/file-systems.lustre.user/2007-12/msg00010.html:

>> Is it possible to configure Lustre to write objects to more than 1 node
>> simultaneously such that I am guaranteed that if one node goes down
>> all files are still accessible?
>
> That's called RAID, and right now, no. It's on the roadmap though.

Is it already available? Can I check the roadmap somewhere? On
clusterfs.com it was easy to find, but on sun.com I can't find it.

Thank you,

tamas
On Fri, 2008-07-04 at 17:27 +0200, Papp Tamas wrote:
>> Is it possible to configure Lustre to write objects to more than 1 node
>> simultaneously such that I am guaranteed that if one node goes down
>> all files are still accessible?
>
> That's called RAID, and right now, no. It's on the roadmap though.

It's called "SNS" (Server Network Striping).

> Is it already available?

No.

> Can I check the roadmap somewhere?

I'm not sure where a/the roadmap exists. Perhaps if one of our
sales/marketing people is on this list they can answer more
definitively.

b.
On Jul 04, 2008 17:27 +0200, Papp Tamas wrote:
> Brian J. Murrell wrote:
>> On Fri, 2008-07-04 at 16:34 +0200, Papp Tamás wrote:
>>> Is it possible to duplicate some files, or the files in a directory,
>>> across nodes, or something like that?
>>>
>>> So if I write a file it should be made available on node1 and node2,
>>> and if node1 dies I can still find it on node2.
>>
>> Your question is not making much sense to me. Let me say that Lustre is
>> a distributed filesystem. All files are available to all (client)
>> nodes.
>
> I'd like something like this, from
> http://osdir.com/ml/file-systems.lustre.user/2007-12/msg00010.html:
>
>>> Is it possible to configure Lustre to write objects to more than 1 node
>>> simultaneously such that I am guaranteed that if one node goes down
>>> all files are still accessible?
>>
>> That's called RAID, and right now, no. It's on the roadmap though.
>
> Is it already available? Can I check the roadmap somewhere? On
> clusterfs.com it was easy to find, but on sun.com I can't find it.

The Server Network Striping project has been put on hold, due to the
complexity of the design required to allow asynchronous writes to work
with RAID-5/6 type layouts (data + parity). There has been some
discussion about implementing only RAID-1 (mirroring), but whether that
becomes an implemented feature depends on how many customers are
interested in using RAID-1.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
Andreas Dilger wrote:
> The Server Network Striping project has been put on hold, due to the
> complexity of the design required to allow asynchronous writes to work
> with RAID-5/6 type layouts (data + parity). There has been some
> discussion about implementing only RAID-1 (mirroring), but whether that
> becomes an implemented feature depends on how many customers are
> interested in using RAID-1.

Thank you for the info.

I have another question. How can I disable an OSS in the cluster, so
that client nodes do not try to access it?

I mean the following:

The node goes down, for example because of a HW failure, and its RAID
array is lost. I don't care; I just want to disable or remove it from
the cluster and then restore the missing (0-sized) files. The following
day I repair the node, so I want to make it available again.

I know there is lctl deactivate, but that only affects writes; I need a
global setting.

Thank you,

tamas
On Jul 06, 2008 21:44 +0200, Papp Tamás wrote:
> How can I disable an OSS in the cluster, so that client nodes do not
> try to access it?
>
> I mean the following:
>
> The node goes down, for example because of a HW failure, and its RAID
> array is lost. I don't care; I just want to disable or remove it from
> the cluster and then restore the missing (0-sized) files. The following
> day I repair the node, so I want to make it available again.
>
> I know there is lctl deactivate, but that only affects writes; I need a
> global setting.

If you only do "lctl deactivate" on the MDS, then it will only stop the
MDS from allocating new objects on this OST. If you also do "lctl
deactivate" on the clients, then they will return -EIO when accessing
files on this OST instead of waiting for recovery.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
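For illustration, a rough sketch of how that might look from the shell,
assuming a filesystem named "lustre" and a failed OST "lustre-OST0000";
the device numbers below are invented, so check the output of "lctl dl"
on your own nodes:

    # On the MDS: find the OSC device that points at the failed OST,
    # then deactivate it so no new objects are allocated there.
    mds# lctl dl | grep OST0000        # note the device number, e.g. 7
    mds# lctl --device 7 deactivate

    # On each client: deactivate the same OSC so access to files on
    # that OST returns -EIO instead of blocking in recovery.
    client# lctl dl | grep OST0000     # note the device number, e.g. 9
    client# lctl --device 9 deactivate

    # When the OST has been repaired and brought back into service:
    mds# lctl --device 7 activate
    client# lctl --device 9 activate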
On Sun, 2008-07-06 at 21:52 -0600, Andreas Dilger wrote:
> If you only do "lctl deactivate" on the MDS, then it will only stop the
> MDS from allocating new objects on this OST. If you also do "lctl
> deactivate" on the clients, then they will return -EIO when accessing
> files on this OST instead of waiting for recovery.

What would probably be nice is being able to do this on the MGS, so
that it gets added to the configuration, propagates to the clients
immediately, and survives a client reboot.

b.
On Jul 07, 2008 10:33 -0400, Brian J. Murrell wrote:
> On Sun, 2008-07-06 at 21:52 -0600, Andreas Dilger wrote:
>> If you only do "lctl deactivate" on the MDS, then it will only stop the
>> MDS from allocating new objects on this OST. If you also do "lctl
>> deactivate" on the clients, then they will return -EIO when accessing
>> files on this OST instead of waiting for recovery.
>
> What would probably be nice is being able to do this on the MGS, so
> that it gets added to the configuration, propagates to the clients
> immediately, and survives a client reboot.

It is already possible to deactivate an OST permanently on the MGS. The
original email was asking what to do if the OST will be returned to
service within the next day or two.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
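For illustration, a sketch of that permanent form, again assuming a
filesystem named "lustre" and the OST "lustre-OST0000"; the exact
parameter syntax should be checked against the manual for your Lustre
version:

    # On the MGS: mark the OST inactive in the stored configuration.
    # This propagates to the MDS and clients and survives remounts.
    mgs# lctl conf_param lustre-OST0000.osc.active=0

    # Later, to return the repaired OST to service:
    mgs# lctl conf_param lustre-OST0000.osc.active=1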