Displaying 4 results from an estimated 4 matches for "shchst01".
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
...r nodes as well.
@Hari - Could you help in debugging this further?
On Wed, Dec 20, 2017 at 10:44 AM, Gustave Dahl <gustave at dahlfamily.net>
wrote:
> I was attempting the same on a local sandbox and also have the same
> problem.
>
>
> Current: 3.8.4
>
> Volume Name: shchst01
> Type: Distributed-Replicate
> Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4 x 3 = 12
> Transport-type: tcp
> Bricks:
> Brick1: shchhv01-sto:/data/brick3/shchst01
> Brick2: shchhv02-sto:/data/brick3/shchst01
>...
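For context, the configuration quoted above is the output of the gluster CLI's volume-info query; a minimal sketch of reproducing it on any peer of the cluster (volume name taken from the post):

    # Print the configuration of a single volume (name taken from the post above).
    gluster volume info shchst01

    # Or dump every volume on the cluster at once.
    gluster volume info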
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I was attempting the same on a local sandbox and also have the same problem.
Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3: shchhv03-sto:/data/brick3/shchst01
Bri...
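The line "Number of Bricks: 4 x 3 = 12" means four distribute subvolumes of three replicas each, twelve bricks in total. A hedged sketch of how such a volume is typically created follows; only the first three brick paths are visible in the post, so the remaining hosts and paths below are placeholders, not the poster's real layout:

    # Bricks are grouped into replica sets of 3 in the order listed,
    # so 12 bricks / replica 3 = 4 distribute subvolumes.
    # Bricks 4-12 (hostA/hostB/hostC) are placeholders for illustration only.
    gluster volume create shchst01 replica 3 transport tcp \
        shchhv01-sto:/data/brick3/shchst01 \
        shchhv02-sto:/data/brick3/shchst01 \
        shchhv03-sto:/data/brick3/shchst01 \
        hostA:/data/brickA/shchst01 hostB:/data/brickA/shchst01 hostC:/data/brickA/shchst01 \
        hostA:/data/brickB/shchst01 hostB:/data/brickB/shchst01 hostC:/data/brickB/shchst01 \
        hostA:/data/brickC/shchst01 hostB:/data/brickC/shchst01 hostC:/data/brickC/shchst01
    gluster volume start shchst01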
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
...further?
>
>
>
> On Wed, Dec 20, 2017 at 10:44 AM, Gustave Dahl <gustave at dahlfamily.net>
> wrote:
>>
>> I was attempting the same on a local sandbox and also have the same
>> problem.
>>
>>
>> Current: 3.8.4
>>
>> Volume Name: shchst01
>> Type: Distributed-Replicate
>> Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 4 x 3 = 12
>> Transport-type: tcp
>> Bricks:
>> Brick1: shchhv01-sto:/data/brick3/shchst01
>> Brick2:...
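For reference, a rolling one-server-at-a-time upgrade along the lines of the 3.x upgrade guides would look roughly like the sketch below; the package and service commands assume an RPM-based install, and the final op-version value should be read from the cluster itself rather than copied from here:

    # One server at a time; replica 3 keeps the volume available while a member is down.
    systemctl stop glusterd
    killall glusterfs glusterfsd          # stop remaining brick/client processes
    yum update glusterfs-server           # package names and tool depend on the distro
    systemctl start glusterd

    gluster peer status                   # peers should return to "Connected"
    gluster volume heal shchst01          # trigger healing of writes missed while down
    gluster volume heal shchst01 info     # wait for zero pending entries before the next server

    # Only after every server runs 3.12, raise the cluster operating version:
    gluster volume get all cluster.max-op-version
    gluster volume set all cluster.op-version <value reported by the previous command>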
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it doesn't.
Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>
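Since the concern in this last message is keeping a production cluster up, these are the read-only health checks commonly run before the upgrade and again after each server is brought back (a sketch, using the volume name from this thread):

    gluster peer status                   # every peer "Peer in Cluster (Connected)"
    gluster volume status shchst01        # all brick and self-heal daemon processes online
    gluster volume heal shchst01 info     # no pending heal entries
    gluster volume info shchst01          # record the current layout for comparison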