How did you do the upgrade?
On Thu, Sep 20, 2018 at 11:01 AM Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
>
> On Thu, Sep 20, 2018 at 1:29 AM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
>> Can you give the volume info? It looks like you are using a 2-way replica.
>>
>
> Yes indeed.
> gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0 gfs02:/glusterdata/brick2/gvol0
>
> +Pranith. +Ravi.
>
> Not sure whether 2-way replication has caused this. From what I understand,
> we need either 3-way replication or an arbiter for correct resolution of
> heals.
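>
> For reference, a minimal sketch of what an arbiter setup would look like
> (the third host "gfs03" and its brick path are hypothetical):
>
> # Replica 3 volume whose third brick holds only metadata (the arbiter):
> gluster volume create gvol0 replica 3 arbiter 1 \
>     gfs01:/glusterdata/brick1/gvol0 \
>     gfs02:/glusterdata/brick2/gvol0 \
>     gfs03:/glusterdata/brick3/gvol0
>
> An existing replica 2 volume can also be converted in place by adding an
> arbiter brick:
>
> gluster volume add-brick gvol0 replica 3 arbiter 1 \
>     gfs03:/glusterdata/brick3/gvol0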
>
>
>> On Wed, Sep 19, 2018 at 9:39 AM, Johan Karlsson <Johan.Karlsson at dgc.se> wrote:
>>
>>> I have two servers set up with GlusterFS in replica mode, with a single
>>> volume exposed via a mountpoint. The servers are running Ubuntu 16.04 LTS.
>>>
>>> After a package upgrade + reboot of both servers, it was discovered that
>>> the data was completely gone. New data written to the volume via the
>>> mountpoint is replicated correctly, and the gluster status/info commands
>>> state that everything is ok (no split-brain scenario, no healing needed,
>>> etc.). But the previous data is completely gone, not even present on
>>> either of the bricks.
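>>>
>>> The status/heal checks were along these lines (glusterfs 4.1 syntax),
>>> all of which reported a healthy volume:
>>>
>>> gluster volume status gvol0
>>> gluster volume heal gvol0 info
>>> gluster volume heal gvol0 info split-brain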
>>>
>>> The following upgrade was done:
>>>
>>> glusterfs-server:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
>>> glusterfs-client:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
>>> glusterfs-common:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
>>>
>>> The logs only show that the connection between the servers was lost,
>>> which is expected.
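>>>
>>> These are the standard log locations checked, assuming a default
>>> install:
>>>
>>> /var/log/glusterfs/glusterd.log      # management daemon
>>> /var/log/glusterfs/bricks/*.log      # brick processes
>>> /var/log/glusterfs/filestore.log     # FUSE client log for /filestore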
>>>
>>> I can't even determine whether it was the package upgrade or the reboot
>>> that caused this issue, but I've tried to recreate the issue without
>>> success.
>>>
>>> Any idea what could have gone wrong, or whether I have done something
>>> wrong during the setup? For reference, this is how I did the setup:
>>>
>>> ---
>>> Add a separate disk with a single partition on both servers (/dev/sdb1)
>>>
>>> Add gfs hostnames for direct communication without DNS, on both servers:
>>>
>>> /etc/hosts
>>>
>>> 192.168.4.45 gfs01
>>> 192.168.4.46 gfs02
>>>
>>> On gfs01, create a new LVM Volume Group:
>>> vgcreate gfs01-vg /dev/sdb1
>>>
>>> And on gfs02:
>>> vgcreate gfs02-vg /dev/sdb1
>>>
>>> Create logical volumes named "brick" on the servers:
>>>
>>> gfs01:
>>> lvcreate -l 100%VG -n brick1 gfs01-vg
>>> gfs02:
>>> lvcreate -l 100%VG -n brick2 gfs02-vg
>>>
>>> Format the volumes with ext4 filesystem:
>>>
>>> gfs01:
>>> mkfs.ext4 /dev/gfs01-vg/brick1
>>> gfs02:
>>> mkfs.ext4 /dev/gfs02-vg/brick2
>>>
>>> Create a mountpoint for the bricks on the servers:
>>>
>>> gfs01:
>>> mkdir -p /glusterdata/brick1
>>> gfs02:
>>> mkdir -p /glusterdata/brick2
>>>
>>> Make a permanent mount in /etc/fstab on the servers:
>>>
>>> gfs01:
>>> /dev/gfs01-vg/brick1 /glusterdata/brick1 ext4 defaults 0 0
>>> gfs02:
>>> /dev/gfs02-vg/brick2 /glusterdata/brick2 ext4 defaults 0 0
>>>
>>> Mount it:
>>> mount -a
>>>
>>> Create a directory for the gluster volume on each server's brick:
>>>
>>> gfs01:
>>> mkdir -p /glusterdata/brick1/gvol0
>>> gfs02:
>>> mkdir -p /glusterdata/brick2/gvol0
>>>
>>> From each server, peer probe the other one:
>>>
>>> gfs02:
>>> gluster peer probe gfs01
>>> peer probe: success
>>>
>>> gfs01:
>>> gluster peer probe gfs02
>>> peer probe: success
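>>>
>>> Peering can then be verified from either node with:
>>>
>>> gluster peer status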
>>>
>>> From any single server, create the gluster volume as a "replica" across
>>> the two nodes, gfs01 and gfs02:
>>>
>>> gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0 gfs02:/glusterdata/brick2/gvol0
>>>
>>> Start the volume:
>>>
>>> gluster volume start gvol0
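>>>
>>> That the volume came up can be confirmed with the following (the info
>>> output should show "Status: Started"):
>>>
>>> gluster volume info gvol0
>>> gluster volume status gvol0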
>>>
>>> On each server, mount the gluster filesystem on the /filestore mount
>>> point:
>>>
>>> gfs01:
>>> mount -t glusterfs gfs01:/gvol0 /filestore
>>> gfs02:
>>> mount -t glusterfs gfs02:/gvol0 /filestore
>>>
>>> Make the mount permanent on the servers:
>>>
>>> /etc/fstab
>>>
>>> gfs01:
>>> gfs01:/gvol0 /filestore glusterfs defaults,_netdev 0 0
>>> gfs02:
>>> gfs02:/gvol0 /filestore glusterfs defaults,_netdev 0 0
>>> ---
>>>
>>> Regards,
>>>
>>> Johan Karlsson
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>
--
Pranith