Mauro M.
2015-Oct-10 16:05 UTC
[Gluster-users] Significant issues after update to 3.7.5 (Centos 6.7)
Hello,

Today I received the update to 3.7.5 and since the update I have had serious issues. My cluster has two bricks with replication.

With both bricks up I could not start the volume, which had stopped soon after the update. By taking one of the nodes down I finally managed to start the volume, but with the following error:

[2015-10-10 09:40:59.600974] E [MSGID: 106123] [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of operation 'Volume Start' failed on localhost

At that point clients could mount the filesystem, however:

# gluster volume status

still showed the volume as stopped.

If I stopped and started the volume again I hit the same problem, but if I then issued a second "volume start myvolume" it would show as started!

With both bricks up and running there is no way to start the volume once stopped. Only if I take one of the bricks down can I start it with the procedure above.

I am downgrading to 3.7.4.

If you have not yet upgraded, BEWARE!
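For anyone who wants to reproduce this or try the same workaround, the rough sequence was as follows. "myvolume" is just a placeholder for the volume name, and "node2" stands for whichever peer is taken down:

# take the other peer (node2) down, then on the remaining node:
# gluster volume start myvolume    <-- fails with the 'Commit of operation Volume Start failed' error above
# gluster volume status            <-- still reports the volume as Stopped
# gluster volume start myvolume    <-- the second attempt now reports the volume as Started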
Mauro M.
2015-Oct-10 16:11 UTC
[Gluster-users] Significant issues after update to 3.7.5 (Centos 6.7)
Well ... I wish I could downgrade! I am using the EPEL repo, which now carries only the latest version. Does anybody know where to find archived copies of the 3.7.4 packages?

Thank you in advance!

Mauro

On Sat, October 10, 2015 17:05, Mauro M. wrote:
> Today I received the update to 3.7.5 and since the update I began to have
> serious issues. My cluster has two bricks with replication.
> [...]
> If you have not yet upgraded, BEWARE!
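If anyone still has the 3.7.4 RPMs cached or archived somewhere, a downgrade along these lines ought to work; the package glob and the assumption that the old packages are still reachable are mine, I have not verified them:

# stop gluster first, then downgrade:
# service glusterd stop
#
# if the 3.7.4 packages are still available in a configured repository:
# yum downgrade 'glusterfs*-3.7.4*'
#
# or, with the 3.7.4 RPMs downloaded locally:
# rpm -Uvh --oldpackage glusterfs*-3.7.4*.rpm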
Atin Mukherjee
2015-Oct-10 16:15 UTC
[Gluster-users] Significant issues after update to 3.7.5 (Centos 6.7)
What has happened here is that one of the nodes acked negatively, which led to an inconsistent state, as GlusterD doesn't have a transaction rollback mechanism. This is why subsequent commands on the volume failed. We'd need to see why the other node didn't behave correctly. What error was thrown at the CLI when the volume start failed? Could you attach the glusterd & cmd_history.log files from both nodes?

-Atin

Sent from one plus one

On Oct 10, 2015 9:35 PM, "Mauro M." <gluster at ezplanet.net> wrote:
> Today I received the update to 3.7.5 and since the update I began to have
> serious issues. My cluster has two bricks with replication.
> [...]
> If you have not yet upgraded, BEWARE!
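The files are usually under /var/log/glusterfs on each node; something like this (assuming the default log locations) would collect them:

# run on each node, then attach the resulting tarball:
# tar czf glusterd-logs-$(hostname).tar.gz \
#     /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
#     /var/log/glusterfs/cmd_history.log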