2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
...: regenerate volfiles on
op-version bump up" in 3.8.4: while bumping up the op-version, the info and
volfiles were not regenerated, which caused the tier-enabled entry to be
missing from the info file.
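[For context, the op-version bump being discussed is the standard one-liner
below; a sketch, where the exact version number is an illustrative assumption
to be checked against the 3.12 release notes:

  # Run once, from any node, after every node has been upgraded.
  # 31200 is an illustrative value for 3.12; confirm in the release notes.
  gluster volume set all cluster.op-version 31200
]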
For now, you can copy the info file for the volumes where the mismatch
happened from shchhv01 to shchhv02 and restart the glusterd service on
shchhv02. That should fix this up temporarily. Unfortunately, this step
might need to be repeated on other nodes as well.
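[For anyone following along, the copy-and-restart workaround comes down to
something like this; a sketch, assuming the default /var/lib/glusterd state
directory, systemd, and the volume name shchst01 from later in the thread:

  # Run on shchhv02. Back up the stale copy first.
  cp /var/lib/glusterd/vols/shchst01/info /var/lib/glusterd/vols/shchst01/info.bak
  # Pull the good info file from shchhv01, then restart glusterd.
  scp shchhv01:/var/lib/glusterd/vols/shchst01/info /var/lib/glusterd/vols/shchst01/info
  systemctl restart glusterd
]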
@Hari - Could you help in debugging this further?
On Wed, Dec 20, 2017 at 10:44 AM, Gustave Dahl <gustave at dahlfamily.net>
wrote:
...
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
...l sandbox and also have the same problem.
Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3: shchhv03-sto:/data/brick3/shchst01
Brick4: shchhv01-sto:/data/brick1/shchst01
Brick5: shchhv02-sto:/data/brick1/shchst01
Brick6: shchhv03-sto:/data/brick1/shchst01
Brick7: shchhv02-sto:/data/brick2/shchst01
Brick8: shchhv03-sto:/data/brick2/shchst01
Brick9: shchhv0...
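[One way to see which peers disagree is to compare checksums of the volume's
info file on every node; a sketch, using the hostnames from the brick list
above and the default /var/lib/glusterd path:

  # Mismatched cksum output points at the node(s) holding a stale info file.
  for h in shchhv01-sto shchhv02-sto shchhv03-sto; do
    echo -n "$h: "
    ssh "$h" cksum /var/lib/glusterd/vols/shchst01/info
  done
]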
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster, I
need to make sure it stays up, or schedule some downtime if it doesn't.
Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>