Gianluca Cecchi
2017-Jul-06 06:38 UTC
[Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 6:55 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
> You can switch back to info mode the moment this is hit one more time with
> the debug log enabled. What I'd need here is the glusterd log (with debug
> mode) to figure out the exact cause of the failure.
>
>> Let me know,
>> thanks
>

Yes, but with the volume in its current state I cannot run the reset-brick
command. I have another volume, named "iso", that I could use, but I would
prefer to keep it clean until the problem on the "export" volume is
understood.

Currently, on the "export" volume I in fact have this:

[root at ovirt01 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...

While on the other two nodes:

[root at ovirt02 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:

[root at ovirt03 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...

Eventually I can destroy and recreate this "export" volume again with the
old names (ovirt0N.localdomain.local) if you give me the sequence of
commands, then enable debug and retry the reset-brick command.

Gianluca
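P.S. Unless you suggest a different procedure, the sequence I would try on
my side is roughly the following (just a sketch on my part, assuming the
replica 3 arbiter 1 layout and the brick paths shown above; I expect the
old brick directories have to be cleaned up, or "force" used, before they
can be reused):

gluster volume stop export
gluster volume delete export
# on each node: remove (or clean the gluster xattrs from)
# /gluster/brick3/export so the path can be reused
gluster volume create export replica 3 arbiter 1 \
    ovirt01.localdomain.local:/gluster/brick3/export \
    ovirt02.localdomain.local:/gluster/brick3/export \
    ovirt03.localdomain.local:/gluster/brick3/export
gluster volume start export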
Gianluca Cecchi
2017-Jul-06 11:56 UTC
[Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
>
> Eventually I can destroy and recreate this "export" volume again with the
> old names (ovirt0N.localdomain.local) if you give me the sequence of
> commands, then enable debug and retry the reset-brick command
>
> Gianluca
>

So it seems I was able to destroy and re-create.
Now I see that the volume creation uses the new IPs by default, so I
reversed the hostname roles in the commands, after putting glusterd in
debug mode on the host where I execute the reset-brick command (do I have
to set debug on the other nodes too?)

[root at ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful

[root at ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export ovirt01.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on ovirt02.localdomain.local.
Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for
details.
[root at ovirt01 ~]#

See here the glusterd.log in zip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing

The time of the reset-brick operation in the logfile is 2017-07-06 11:42
(BTW: can I have the time in the log not in UTC format, as I'm using CEST
on my system?)

I see a difference, because this time the brick doesn't seem isolated as
before...

[root at ovirt01 glusterfs]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: 10.10.2.103:/gluster/brick3/export
Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)

[root at ovirt02 ~]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick3/export
Brick2: 10.10.2.103:/gluster/brick3/export
Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)

And also in oVirt I see all 3 bricks online...

Gianluca
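P.S. In case it helps, to cross-check the brick processes and the heal
state after the commit I would also run the standard status/heal commands
(nothing exotic, assuming nothing more is needed for this replica 3
arbiter setup):

[root at ovirt01 ~]# gluster volume status export
[root at ovirt01 ~]# gluster volume heal export info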
Atin Mukherjee
2017-Jul-06 12:16 UTC
[Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:

> On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
>
>> Eventually I can destroy and recreate this "export" volume again with the
>> old names (ovirt0N.localdomain.local) if you give me the sequence of
>> commands, then enable debug and retry the reset-brick command
>>
>> Gianluca
>>
>
> So it seems I was able to destroy and re-create.
> Now I see that the volume creation uses the new IPs by default, so I
> reversed the hostname roles in the commands, after putting glusterd in
> debug mode on the host where I execute the reset-brick command (do I have
> to set debug on the other nodes too?)
>

You have to set the log level to debug for the glusterd instance where the
commit fails and share the glusterd log of that particular node (one way
to do this is sketched at the bottom of this mail).

> [root at ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: success: reset-brick start operation successful
>
> [root at ovirt01 ~]# gluster volume reset-brick export gl01.localdomain.local:/gluster/brick3/export ovirt01.localdomain.local:/gluster/brick3/export commit force
> volume reset-brick: failed: Commit failed on ovirt02.localdomain.local.
> Please check log file for details.
> Commit failed on ovirt03.localdomain.local. Please check log file for
> details.
> [root at ovirt01 ~]#
>
> See here the glusterd.log in zip format:
> https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing
>
> The time of the reset-brick operation in the logfile is 2017-07-06 11:42
> (BTW: can I have the time in the log not in UTC format, as I'm using CEST
> on my system?)
>
> I see a difference, because this time the brick doesn't seem isolated as
> before...
>
> [root at ovirt01 glusterfs]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: 10.10.2.103:/gluster/brick3/export
> Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)
>
> [root at ovirt02 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: e278a830-beed-4255-b9ca-587a630cbdbf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: 10.10.2.103:/gluster/brick3/export
> Brick3: 10.10.2.104:/gluster/brick3/export (arbiter)
>
> And also in oVirt I see all 3 bricks online...
>
> Gianluca
>
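One possible way to get glusterd into debug mode on the nodes where the
commit fails (a sketch only; the exact file and variable can differ
depending on how glusterd is packaged on your distribution, so please
double check) is to restart glusterd with the debug log level, for
example:

systemctl stop glusterd
glusterd --log-level DEBUG

or set the log level in /etc/sysconfig/glusterd (if your package provides
it) and restart the glusterd service. Then repeat the reset-brick command
and capture /var/log/glusterfs/glusterd.log from that node.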