On Tue, 2004-05-11 at 06:55, Bryan Bayerdorffer wrote:
> Is lustre suitable for pooling the local storage of nodes in a symmetrical
> cluster? I have an eight-node openMosix cluster and would like to use the
> 140GB on each node as part of a single (LVM-like) logical volume, accessible
> by all nodes.
>
> I saw the note that running an OST and client on the same node is prone to
> deadlock. Can one work around this by splitting the cluster into two sets and
> having one half serve their OSTs to the other half, and vice versa (e.g. odd
> nodes provide space for even nodes, and the other way round).

With the caveat about running clients and OSTs on the same node, yes,
that setup should work just fine.

If you run an MDS and a client on the same node, you should be aware that
if that node reboots, transparent recovery is not possible. Other
clients will receive -EIO for any in-progress operations, but if you can
live with that you'll be fine.

Thanks--

-Phil
I'm interested in running Lustre on a MOSIX cluster. Please let me know if
you are able to make it work.

bm

-----Original Message-----
From: lustre-discuss-admin@lists.clusterfs.com
[mailto:lustre-discuss-admin@lists.clusterfs.com] On Behalf Of Bryan Bayerdorffer
Sent: Monday, May 10, 2004 6:56 PM
To: lustre-discuss@lists.clusterfs.com
Subject: [Lustre-discuss] pooling local disk with lustre

Is lustre suitable for pooling the local storage of nodes in a symmetrical
cluster? I have an eight-node openMosix cluster and would like to use the
140GB on each node as part of a single (LVM-like) logical volume, accessible
by all nodes.

I saw the note that running an OST and client on the same node is prone to
deadlock. Can one work around this by splitting the cluster into two sets and
having one half serve their OSTs to the other half, and vice versa (e.g. odd
nodes provide space for even nodes, and the other way round).

--
.. ..-. ..- -.-. .- -. .-. . .- -.. - .... .. ... --. . - .- .-.. .. ..-. .
Bryan Bayerdorffer    bryan@meatspace.net    bryan@spd.analog.com
(Wit's End Computation Center)              (Analog Devices)
"This isn't right. This isn't even wrong." -- Hans Bethe

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@lists.clusterfs.com
https://lists.clusterfs.com/mailman/listinfo/lustre-discuss
On Thu, 20/05/2004 at 13:01, Phil Schwan wrote:
> With the caveat about running clients and OSTs on the same node, yes,
> that setup should work just fine.
>
> If you run an MDS and a client on the same node you should be aware that
> if that node reboots, transparent recovery is not possible. Other
> clients will receive -EIO for any in-progress operations, but if you can
> live with that you'll be fine.

I'm also planning (actually, I'm testing a prototype) to use Lustre in my
OpenMosix cluster as a distributed filesystem solution. I first tried
PVFS2, but the lack of a POSIX API made me look at Lustre.

Do you plan to improve this point (client and OST on the same node) in
the future? If so, Lustre might be a nice solution for building a Google
File System-like distributed fs :)

--
Arnaud Fontaine    arnaud@crao.net    ICQ: 82629567
Phil Schwan wrote:
> On Tue, 2004-05-11 at 06:55, Bryan Bayerdorffer wrote:
>
>> Is lustre suitable for pooling the local storage of nodes in a symmetrical
>> cluster? I have an eight-node openMosix cluster and would like to use the
>> 140GB on each node as part of a single (LVM-like) logical volume, accessible
>> by all nodes.
>>
>> I saw the note that running an OST and client on the same node is prone to
>> deadlock. Can one work around this by splitting the cluster into two sets and
>> having one half serve their OSTs to the other half, and vice versa (e.g. odd
>> nodes provide space for even nodes, and the other way round).
>
> With the caveat about running clients and OSTs on the same node, yes,
> that setup should work just fine.

I'm not sure this answers my question, given that I said "and vice versa".
Let me clarify: I was suggesting running both an OST and a client on every
node, but they'd be for separate filesystems. Let's say OSTs 1 and 3 comprise
filesystem 'odd' and OSTs 2 and 4 comprise 'even'. Then four nodes would be
running:

  Node   OST   (fs client)
   1      1     even
   2      2     odd
   3      3     even
   4      4     odd

So, nodes 1 and 3 provide OSTs for clients 2 and 4, and nodes 2 and 4 provide
OSTs for clients 1 and 3. Would this avoid the deadlock scenario, or is it
independent of how the data is partitioned?

> If you run an MDS and a client on the same node you should be aware that
> if that node reboots, transparent recovery is not possible. Other
> clients will receive -EIO for any in-progress operations, but if you can
> live with that you'll be fine.
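To make the cross-mapping above concrete, here is a small illustrative sketch (plain Python, not Lustre tooling — the function names are hypothetical, purely for illustration). It computes, for each node, which filesystem its local OST serves and which one it mounts, and checks the key property of the layout: no node is ever a client of the filesystem whose data it serves, which is what sidesteps the client-and-OST-on-the-same-node deadlock.

```python
# Sketch of the odd/even layout from the table above. Assumption: nodes are
# numbered from 1, and OST n lives on node n. Not Lustre code.

def role(node):
    """Return (fs served by this node's OST, fs mounted by this node)."""
    if node % 2 == 1:
        return ("odd", "even")   # odd node: OST belongs to 'odd', mounts 'even'
    return ("even", "odd")       # even node: OST belongs to 'even', mounts 'odd'

def no_self_service(nodes):
    """True if no node mounts the filesystem its own OST belongs to."""
    return all(role(n)[0] != role(n)[1] for n in nodes)

if __name__ == "__main__":
    for n in range(1, 5):
        serves, mounts = role(n)
        print(f"node {n}: OST in fs '{serves}', client of fs '{mounts}'")
    # The whole point of the split:
    assert no_self_service(range(1, 5))
```

Note the scheme generalizes to any even number of nodes; with an odd count, one filesystem simply ends up with one more OST than the other, and the no-self-service property still holds.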
On Thu, 2004-05-20 at 11:40, Arnaud Fontaine wrote:
>
> Do you plan to improve this point (client and OST on the same node) in
> the future?

Yes, we certainly will, but I can't promise when. Maybe in Lustre 1.6.x?

-Phil
On Thu, 2004-05-20 at 16:36, Bryan Bayerdorffer wrote:
> I'm not sure this answers my question, given that I said "and vice versa".
> Let me clarify: I was suggesting running both an OST and client on every node
> but they'd be for separate filesystems. Let's say OSTs 1 and 3 comprise
> filesystem 'odd' and OSTs 2 and 4 comprise 'even'. Then four nodes would be
> running
>
>   Node   OST   (fs client)
>    1      1     even
>    2      2     odd
>    3      3     even
>    4      4     odd
>
> So, nodes 1 and 3 provide OSTs for clients 2 and 4, and nodes 2 and 4 provide
> OSTs for clients 1 and 3. Would this avoid the deadlock scenario or is it
> independent of how the data is partitioned?

Sorry, my answer was also not clear: I believe that what you have proposed
will work without deadlock, although we have not tested this precise
configuration ourselves. Please keep us posted!

-Phil