Displaying 16 results from an estimated 16 matches for "gv2a2".
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...the gluster volume mounted via FUSE. I've tried
also to create the volume with preallocated metadata, which moves the
problem a bit further away (in time). The volume is a replica 3 arbiter 1
volume hosted on XFS bricks.
Here is the information:
[root at ovh-ov1 bricks]# gluster volume info gv2a2
Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a...
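For context, a replica 3 arbiter 1 volume with the brick layout shown above would typically be created along these lines. This is a sketch reconstructed from the info output (the arbiter brick path is taken from the full listing later in these results), not the poster's actual commands, and it requires a running gluster trusted pool:

```shell
# Sketch: recreate the layout from the "gluster volume info" output above.
gluster volume create gv2a2 replica 3 arbiter 1 \
  gluster1:/bricks/brick2/gv2a2 \
  gluster3:/bricks/brick3/gv2a2 \
  gluster2:/bricks/arbiter_brick_gv2a2/gv2a2
gluster volume start gv2a2
```

In this layout the third brick stores only file metadata, giving quorum with the disk cost of two data copies rather than three.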
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 16
1
Problem with Gluster 3.12.4, VM and sharding
Also to help isolate the component, could you answer these:
1. on a different volume with shard not enabled, do you see this issue?
2. on a plain 3-way replicated volume (no arbiter), do you see this issue?
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> Please share the volume-info output and the logs under /var/log/glusterfs/
> from all your
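The two checks suggested above could be set up with hypothetical test volumes along these lines (volume names and brick paths are placeholders, not from the thread; testing on fresh volumes avoids toggling features.shard on a volume that already holds sharded files):

```shell
# 1. Same replica 3 + arbiter layout, but with sharding left disabled:
gluster volume create gvtest1 replica 3 arbiter 1 \
  gluster1:/bricks/test1/b gluster3:/bricks/test1/b gluster2:/bricks/test1/b
# 2. Plain 3-way replica (no arbiter), with sharding enabled as on gv2a2:
gluster volume create gvtest2 replica 3 \
  gluster1:/bricks/test2/b gluster2:/bricks/test2/b gluster3:/bricks/test2/b
gluster volume set gvtest2 features.shard on
```

Reproducing the corruption on one of these but not the other would point at either sharding or the arbiter as the component at fault.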
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
2018 Jan 17
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
Here's the volume info:
Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Op...
2018 Jan 23
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at trendservizi.it> wrote:
> Here's the volume info:
2018 Jan 17
0
Strange messages in mnt-xxx.log
Hi,
On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at trendservizi.it> wrote:
2018 Jan 16
2
Strange messages in mnt-xxx.log
Hi,
I'm testing gluster 3.12.4 and, by inspecting the log file
/var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many
lines saying:
[2018-01-15 09:45:41.066914] I [MSGID: 109063]
[dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in
(null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2018-01-15 09:45:45.755021] I [MSGID: 109063]
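Log lines like the ones above follow a fixed shape — [timestamp] level [MSGID: n] [file:line:function] subvolume: message — so the fields can be picked apart with standard tools. A small sketch (the sample line is copied from the message above; the sed patterns are illustrative, not an official parser):

```shell
line='[2018-01-15 09:45:41.066914] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-gv0-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0'

# Pull out the message ID and the log level (I = info, W = warning, E = error):
msgid=$(printf '%s\n' "$line" | sed -n 's/.*\[MSGID: \([0-9]*\)\].*/\1/p')
level=$(printf '%s\n' "$line" | sed -n 's/^\[[^]]*\] \([A-Z]\) .*/\1/p')
echo "$msgid $level"   # 109063 I
```

Counting occurrences of a given MSGID per day with grep -c is often enough to tell a one-off warning from a recurring problem.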