Displaying 4 results from an estimated 4 matches for "dahlfamily".
2017 Dec 20 · 2 · Upgrading from Gluster 3.8 to 3.12
...ned from shchhv01 to shchhv02 and restart the glusterd service on
shchhv02. That should fix this up temporarily. Unfortunately, this step
might need to be repeated for other nodes as well.
@Hari - Could you help in debugging this further?
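The excerpt above is cut off, so the exact step being suggested is not visible. As a hedged sketch of the kind of workaround described (sync the volume definition from the healthy node shchhv01 to the rejected node shchhv02, then restart glusterd): glusterd stores volume definitions under /var/lib/glusterd/vols/, but the precise path, and the use of rsync and systemctl below, are assumptions rather than details taken from the thread.

  # Run on the rejected peer (shchhv02); assumes a systemd host with ssh access to shchhv01.
  systemctl stop glusterd
  # Assumption: the truncated step refers to the volume definition kept under
  # /var/lib/glusterd/vols/<volname>/ on the healthy node.
  rsync -av shchhv01:/var/lib/glusterd/vols/shchst01/ /var/lib/glusterd/vols/shchst01/
  systemctl start glusterd
  # Confirm the peer is no longer rejected and the volume definition matches.
  gluster peer status
  gluster volume info shchst01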
On Wed, Dec 20, 2017 at 10:44 AM, Gustave Dahl <gustave at dahlfamily.net>
wrote:
> I was attempting the same on a local sandbox and also have the same
> problem.
>
>
> Current: 3.8.4
>
> Volume Name: shchst01
> Type: Distributed-Replicate
> Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
> Status: Started
> Snapshot Count: 0
>...
2017 Dec 20 · 0 · Upgrading from Gluster 3.8 to 3.12
...start the glusterd service on shchhv02.
> That should fix this up temporarily. Unfortunately, this step might need
> to be repeated for other nodes as well.
>
> @Hari - Could you help in debugging this further?
>
>
>
> On Wed, Dec 20, 2017 at 10:44 AM, Gustave Dahl <gustave at dahlfamily.net>
> wrote:
>>
>> I was attempting the same on a local sandbox and also have the same
>> problem.
>>
>>
>> Current: 3.8.4
>>
>> Volume Name: shchst01
>> Type: Distributed-Replicate
>> Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3...
2017 Dec 20 · 0 · Upgrading from Gluster 3.8 to 3.12
I was attempting the same on a local sandbox and also have the same problem.
Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3:
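For context on the layout above: "Number of Bricks: 4 x 3 = 12" means four distribute subvolumes, each a three-way replica. A minimal sketch of how a volume with that shape is typically created follows; the host names (hv01..hv04) and brick paths are placeholders, not the actual bricks of this cluster.

  # Hypothetical 4 x 3 distributed-replicate layout: 12 bricks, where each
  # consecutive group of 3 bricks forms one replica set (placeholder names).
  gluster volume create shchst01 replica 3 transport tcp \
      hv01:/data/brick1/shchst01 hv02:/data/brick1/shchst01 hv03:/data/brick1/shchst01 \
      hv04:/data/brick1/shchst01 hv01:/data/brick2/shchst01 hv02:/data/brick2/shchst01 \
      hv03:/data/brick2/shchst01 hv04:/data/brick2/shchst01 hv01:/data/brick3/shchst01 \
      hv02:/data/brick3/shchst01 hv03:/data/brick3/shchst01 hv04:/data/brick3/shchst01
  gluster volume start shchst01
  gluster volume info shchst01   # should report "Number of Bricks: 4 x 3 = 12"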
2017 Dec 19 · 2 · Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster, I
need to make sure it stays up, or schedule some downtime if it doesn't.
Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>