Olivier Lambert
2016-Nov-17 22:17 UTC
[Gluster-users] corruption using gluster and iSCSI with LIO
Sure:

# gluster volume info gv0

Volume Name: gv0
Type: Replicate
Volume ID: 2f8658ed-0d9d-4a6f-a00b-96e9d3470b53
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/bricks/brick1/gv0
Brick2: 10.0.0.2:/bricks/brick1/gv0
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
features.shard: on
features.shard-block-size: 16MB

# gluster volume status gv0

Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.0.1:/bricks/brick1/gv0           49153     0          Y       1246
Brick 10.0.0.2:/bricks/brick1/gv0           49154     0          Y       1866
Self-heal Daemon on localhost               N/A       N/A        Y       1241
Self-heal Daemon on 10.0.0.2                N/A       N/A        Y       2440

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks


On Thu, Nov 17, 2016 at 11:03 PM, Lindsay Mathieson
<lindsay.mathieson at gmail.com> wrote:
> On 18/11/2016 6:00 AM, Olivier Lambert wrote:
>>
>> First off, thanks for this great product :)
>>
>> I have a corruption issue when using GlusterFS with a LIO iSCSI target:
>
> Could you post the results of:
>
> gluster volume info <volname>
>
> gluster volume status <volname>
>
> thanks
>
> --
> Lindsay Mathieson
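For reference, the sharding options visible under "Options Reconfigured" above are set per volume with "gluster volume set" - a minimal sketch, assuming the gv0 volume from this output:

# gluster volume set gv0 features.shard on
# gluster volume set gv0 features.shard-block-size 16MB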
Lindsay Mathieson
2016-Nov-17 22:29 UTC
[Gluster-users] corruption using gluster and iSCSI with LIO
On 18/11/2016 8:17 AM, Olivier Lambert wrote:
> gluster volume info gv0
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: 2f8658ed-0d9d-4a6f-a00b-96e9d3470b53
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.0.1:/bricks/brick1/gv0
> Brick2: 10.0.0.2:/bricks/brick1/gv0
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> features.shard: on
> features.shard-block-size: 16MB

When hosting VMs, it's essential to set these options:

network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.data-self-heal: on

Also, with replica 2 and quorum on (required), your volume will become
read-only when one node goes down, to prevent the possibility of
split-brain - you *really* want to avoid that :)

I'd recommend a replica 3 volume; that way one node can go down and the
other two still form a quorum, so the volume remains r/w. If the extra
disks are not possible, an arbiter volume can be set up instead - the
third node holds basically dummy files (metadata only, no file data).

--
Lindsay Mathieson
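For reference, each of the options listed above is applied with "gluster volume set" - a minimal sketch, assuming the gv0 volume from this thread:

# gluster volume set gv0 network.remote-dio enable
# gluster volume set gv0 cluster.eager-lock enable
# gluster volume set gv0 performance.io-cache off
# gluster volume set gv0 performance.read-ahead off
# gluster volume set gv0 performance.quick-read off
# gluster volume set gv0 performance.stat-prefetch on
# gluster volume set gv0 performance.strict-write-ordering off
# gluster volume set gv0 cluster.server-quorum-type server
# gluster volume set gv0 cluster.quorum-type auto
# gluster volume set gv0 cluster.data-self-heal on

And a sketch of creating the replica 3 arbiter variant mentioned above (the third host 10.0.0.3 and its brick path are hypothetical, not from this thread):

# gluster volume create gv0 replica 3 arbiter 1 \
    10.0.0.1:/bricks/brick1/gv0 \
    10.0.0.2:/bricks/brick1/gv0 \
    10.0.0.3:/bricks/arbiter/gv0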