Sahina Bose
2017-Jul-03 09:25 UTC
[Gluster-users] [ovirt-users] Gluster issue with /var/lib/glusterd/peers/<ip> file
On Sun, Jul 2, 2017 at 5:38 AM, Mike DePaulo <mikedep333 at gmail.com> wrote:

> Hi everyone,
>
> I have ovirt 4.1.1/4.1.2 running on 3 hosts with a gluster hosted engine.
>
> I was working on setting up a network for gluster storage and
> migration. The addresses for it will be 10.0.20.x, rather than
> 192.168.1.x for the management network. However, I switched gluster
> storage and migration back over to the management network.
>
> I updated and rebooted one of my hosts (death-star, 10.0.20.52), and on
> reboot the glusterd service would start but didn't seem to work.
> The engine web GUI reported that its bricks were down, and commands
> like these would fail:
>
> [root@death-star glusterfs]# gluster pool list
> pool list: failed
> [root@death-star glusterfs]# gluster peer status
> peer status: failed
>
> Upon further investigation, I had under /var/lib/glusterd/peers/ the 2
> existing UUID files, plus a new 3rd one:
>
> [root@death-star peers]# cat 10.0.20.53
> uuid=00000000-0000-0000-0000-000000000000
> state=0
> hostname1=10.0.20.53

[Adding gluster-users]

How did you add this peer "10.0.20.53"? Is this another interface for an
existing peer?

> I moved that file out of there, restarted glusterd, and now gluster is
> working again.
>
> I am guessing that this is a bug. Let me know if I should attach other
> log files; I am not sure which ones.
>
> And yes, 10.0.20.53 is the IP of one of the other hosts.
>
> -Mike
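For readers hitting the same symptom, a minimal sketch of the recovery path
Mike describes, assuming the stray entry is the only peer file with the
all-zero UUID; the backup directory is illustrative, not from the thread:

    # Run on the affected node (death-star in this thread).
    systemctl stop glusterd

    # A healthy peer file carries the peer's real UUID; an all-zero UUID
    # marks a half-formed entry. Find any such files before touching them.
    grep -l 'uuid=00000000-0000-0000-0000-000000000000' /var/lib/glusterd/peers/*

    # Move the stray file aside rather than deleting it, in case it is
    # needed later for a bug report.
    mkdir -p /root/peers-backup
    mv /var/lib/glusterd/peers/10.0.20.53 /root/peers-backup/

    systemctl start glusterd
    gluster peer status    # should list the two real peers again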
Atin Mukherjee
2017-Jul-03 12:02 UTC
[Gluster-users] [ovirt-users] Gluster issue with /var/lib/glusterd/peers/<ip> file
Please attach glusterd & cmd_history log files from all the nodes.

On Mon, Jul 3, 2017 at 2:55 PM, Sahina Bose <sabose at redhat.com> wrote:

> [Adding gluster-users]
>
> How did you add this peer "10.0.20.53"? Is this another interface for an
> existing peer?
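For anyone assembling the same data, a sketch of collecting those logs,
assuming the default /var/log/glusterfs location and placeholder hostnames
node2/node3 for the other two hosts; on older Gluster releases the glusterd
log may be named etc-glusterfs-glusterd.vol.log rather than glusterd.log:

    # Run from a machine that can ssh as root to all three nodes.
    for host in death-star node2 node3; do   # node2/node3 are placeholders
        scp "root@${host}:/var/log/glusterfs/glusterd.log" "./${host}-glusterd.log"
        scp "root@${host}:/var/log/glusterfs/cmd_history.log" "./${host}-cmd_history.log"
    done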