Cedric Lemarchand
2017-Feb-01 12:41 UTC
[Gluster-users] Always writeable distributed volume
Short answer: I think you need to add an arbiter node; this way the cluster keeps being writable when at least 2 nodes are present (e.g. 1 data node is down). This solves the split-brain case where only 2 nodes are involved in the setup.

Cheers

--
Cédric Lemarchand

> On 1 Feb 2017, at 13:18, Jesper Led Lauridsen TS Infra server <JLY at dr.dk> wrote:
>
> Hi,
>
> I am wondering if it is possible to create an always writeable distributed volume.
>
> Reading the documentation I can't figure out how. So is it possible?
> If I understand the docs correctly, DHT determines, based on a hash of the filename, which brick the file is placed on. So if you have two bricks and lose one brick, I can't create files that DHT determines should be placed on the failed brick.
>
> I have tried creating a distributed volume on two bricks/nodes, and as feared I can't write files that DHT determines should be placed on a failed node. I am well aware, and can accept, that I can't access or re-create files already created on the failed node. But I would like to be able to write new files.
>
> Is there a setting/feature I can enable that allows me to create files on the available/online bricks even if DHT determines that the file should be placed on the unavailable/failed brick?
>
> My use case is: I want to use a Gluster volume for temporary storage that is always available, as in I can always mount and write to it. We have a lot of media files that are transcoded on user request, and we need temporary storage for this operation. All I need is temporary, fast and always accessible storage with no data security/replica.
>
> Regards
> Jesper
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
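For reference, an arbiter setup is created as a replica 3 volume whose third brick stores only metadata. A minimal sketch, assuming hypothetical hostnames node1-node3 and a brick path /data/brick (both illustrative, not taken from the thread):

    # replica 3 with the third brick acting as arbiter (metadata only)
    gluster volume create tempvol replica 3 arbiter 1 \
        node1:/data/brick node2:/data/brick node3:/data/brick
    gluster volume start tempvol

Note that this consumes replicated capacity on the two data bricks, which is the concern Jesper raises in the follow-up below.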
Jesper Led Lauridsen TS Infra server
2017-Feb-01 14:00 UTC
[Gluster-users] Always writeable distributed volume
Arbiter, isn't that only used where you want replica but the same storage space? I would like a distributed volume where I can write even if one of the bricks fails. No replication.

Thanks
Jesper

> -----Original Message-----
> From: Cedric Lemarchand [mailto:yipikai7 at gmail.com]
> Sent: 1 February 2017 13:41
> To: Jesper Led Lauridsen TS Infra server <JLY at dr.dk>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Always writeable distributed volume
>
> Short answer: I think you need to add an arbiter node; this way the cluster
> keeps being writable when at least 2 nodes are present (e.g. 1 data node
> is down). This solves the split-brain case where only 2 nodes are involved
> in the setup.
>
> Cheers
>
> --
> Cédric Lemarchand
>
> [original question quoted in full above]
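For context, the pure distributed layout Jesper describes is created without any replica count, so each file lives on exactly one brick chosen by DHT's filename hash. A minimal sketch, assuming hypothetical hostnames node1/node2, a brick path /data/brick, and a mount point /mnt/scratch:

    # pure distribute: DHT hashes each filename to exactly one brick,
    # so files hashing to a downed brick cannot be created while it is offline
    gluster volume create scratchvol node1:/data/brick node2:/data/brick
    gluster volume start scratchvol
    mount -t glusterfs node1:/scratchvol /mnt/scratch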