max.degraaf at kpn.com
2019-Jan-25 12:32 UTC
[Gluster-users] Brick stays offline after update from 4.1.6-1.el7 to 4.1.7-1.el7
We have 2 nodes running CentOS 7.3 that were running just fine with glusterfs 4.1.6-1.el7. This morning we updated both to 4.1.7-1.el7, and the brick on one of the nodes stays offline.

gluster peer status shows no problems on either node:

Number of Peers: 1

Hostname: 10.159.241.35
Uuid: 7453dbec-44fb-4e57-9471-6e653d287d3b
State: Peer in Cluster (Connected)

Number of Peers: 1

Hostname: 10.159.241.3
Uuid: 8f0e75bd-c782-4d21-aaf3-2d8a27e8a714
State: Peer in Cluster (Connected)

gluster volume status on both nodes shows the second brick offline:

Status of volume: gst
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick grpprdaalcgst01.cloudprod.local:/apps
/glusterfs-gst/gst                          49152     0          Y       8827
Brick grpprdapdcgst01.cloudprod.local:/apps
/glusterfs-gst/gst                          N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       8818
Self-heal Daemon on grpprdapdcgst01.cloudpr
od.local                                    N/A       N/A        Y       28111

Task Status of Volume gst
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gst
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick grpprdaalcgst01.cloudprod.local:/apps
/glusterfs-gst/gst                          49152     0          Y       8827
Brick grpprdapdcgst01.cloudprod.local:/apps
/glusterfs-gst/gst                          N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       28111
Self-heal Daemon on 10.159.241.3            N/A       N/A        Y       8818

Task Status of Volume gst
------------------------------------------------------------------------------
There are no active volume tasks

Any idea on a fix? If not, how can we revert to 4.1.6-1.el7?
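
(For reference, two things that might help narrow this down or roll back. This is a sketch only: the brick log file name is derived from the brick path and may differ on your nodes, and the exact set of gluster packages to downgrade depends on what "rpm -qa | grep gluster" reports.)

    # On the node whose brick is offline: check why the brick process did not start.
    # The log name follows the brick path, so it is likely something like:
    less /var/log/glusterfs/bricks/apps-glusterfs-gst-gst.log

    # Ask glusterd to start any brick processes that are not running,
    # without touching the brick that is already online:
    gluster volume start gst force

    # To revert to 4.1.6-1.el7 (assuming those packages are still available
    # in the configured yum repository), stop glusterd and downgrade:
    systemctl stop glusterd
    yum downgrade glusterfs-4.1.6-1.el7 glusterfs-server-4.1.6-1.el7 \
                  glusterfs-fuse-4.1.6-1.el7 glusterfs-libs-4.1.6-1.el7 \
                  glusterfs-api-4.1.6-1.el7 glusterfs-cli-4.1.6-1.el7 \
                  glusterfs-client-xlators-4.1.6-1.el7
    systemctl start glusterd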
max.degraaf at kpn.com
2019-Jan-25 15:09 UTC
[Gluster-users] Brick stays offline after update from 4.1.6-1.el7 to 4.1.7-1.el7
Found it. The filesystem on one of the nodes was corrupt. Removing that brick, fixing the filesystem, and adding the brick again solved the problem.
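
(For the archive, the repair sequence described above would look roughly like the following on a two-node replica 2 volume. This is a sketch only: the replica count, mount point, block device, and XFS filesystem are assumptions based on the status output; adjust everything to the actual layout before running it.)

    # On the healthy node: drop the corrupt brick from the replica pair.
    gluster volume remove-brick gst replica 1 \
        grpprdapdcgst01.cloudprod.local:/apps/glusterfs-gst/gst force

    # On the affected node: unmount and repair the brick filesystem
    # (XFS assumed; the device name below is hypothetical).
    umount /apps/glusterfs-gst
    xfs_repair /dev/vg_gluster/lv_gst
    mount /apps/glusterfs-gst

    # Re-create a clean brick directory so gluster accepts it as a new brick.
    mkdir -p /apps/glusterfs-gst/gst

    # Add the brick back and trigger a full self-heal to repopulate it.
    gluster volume add-brick gst replica 2 \
        grpprdapdcgst01.cloudprod.local:/apps/glusterfs-gst/gst force
    gluster volume heal gst full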