Displaying 11 results from an estimated 11 matches for "glusterfs3".
2018 Jan 11 (2 replies): Sent and Received peer request (Connected)
...eer request
(Connected)" but show the proper "Peer in Cluster (Connected)" to the
third server.
[root@glusterfs1]# gluster peer status
Number of Peers: 2
Hostname: glusterfs2
Uuid: 0f4867d9-7be3-4dbc-83f6-ddcda58df607
State: Sent and Received peer request (Connected)
Hostname: glusterfs3
Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f
State: Peer in Cluster (Connected)
[root@glusterfs2]# gluster peer status
Number of Peers: 2
Hostname: glusterfs3
Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f
State: Peer in Cluster (Connected)
Hostname: glusterfs1
Uuid: 339533fd-5820-4077-a2a0-d39d2...
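As a rough sketch of how such a stuck "Sent and Received peer request" state can be investigated (the peer-file path, the state=3 convention, and the restart step below are assumptions based on standard glusterd behaviour, not commands quoted in this thread), one can compare what each node has persisted about its peers and, if needed, re-probe from the node that already reports "Peer in Cluster":
[root@glusterfs3]# cat /var/lib/glusterd/peers/0f4867d9-7be3-4dbc-83f6-ddcda58df607   # a fully befriended peer typically shows state=3 here
[root@glusterfs3]# gluster peer probe glusterfs1                                      # harmless if the peer is already known
[root@glusterfs3]# systemctl restart glusterd                                         # hypothetical recovery step; restart one node at a time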
2018 Jan 15 (0 replies): Sent and Received peer request (Connected)
..."Peer in Cluster (Connected)" to the
> third server.
>
>
> [root@glusterfs1]# gluster peer status
> Number of Peers: 2
>
> Hostname: glusterfs2
> Uuid: 0f4867d9-7be3-4dbc-83f6-ddcda58df607
> State: Sent and Received peer request (Connected)
>
> Hostname: glusterfs3
> Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f
> State: Peer in Cluster (Connected)
>
>
>
> [root@glusterfs2]# gluster peer status
> Number of Peers: 2
>
> Hostname: glusterfs3
> Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f
> State: Peer in Cluster (Connected)
>
&...
2023 Feb 01 (0 replies): Corrupted object's [GFID], despite md5sum matches everywhere
...ntine/9be5eecf-5ad8-4256-8b08-879aecf65881
>
but then on brick2 and brick3:
user@glusterfs2:~$ sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881
d4927e00e0db4498bcbbaedf3b5680ed  /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881
user@glusterfs3:~$ sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881
d4927e00e0db4498bcbbaedf3b5680ed  /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881
The md5sum does NOT match the one on the repaired server.
What is wrong in our logic? Why is this happening?
Some cl...
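As a hedged sketch of how to compare what each brick actually records for this object (the xattr names and the bitrot/heal commands below are standard GlusterFS tooling chosen as assumptions, not steps quoted from the thread; the volume name gv0 is taken from the brick path above), run on each of the three nodes:
user@glusterfs1:~$ sudo getfattr -d -m . -e hex /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881
# compare trusted.bit-rot.signature and any trusted.afr.* pending markers across all three bricks
user@glusterfs1:~$ sudo gluster volume heal gv0 info
user@glusterfs1:~$ sudo gluster volume bitrot gv0 scrub status   # only meaningful if bitrot detection is enabled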
2017 Jul 07 (0 replies): [Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.
[root@glusterfs2 Log_Files]# gluster volume info
Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs...
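For reference, a minimal sketch of how the missing xattrs are usually inspected and put back on an affected brick root (which brick to fix, the root-gfid value, and the force-start step are assumptions here; the volume-id hex is simply the Volume ID above with the dashes removed):
[root@glusterfs2 ~]# getfattr -d -m . -e hex /ws/disk1/ws_brick
# expected on a healthy brick root: trusted.gfid and trusted.glusterfs.volume-id
[root@glusterfs2 ~]# setfattr -n trusted.glusterfs.volume-id -v 0x149e976f4e21451cbf0ff5691208531f /ws/disk1/ws_brick
[root@glusterfs2 ~]# setfattr -n trusted.gfid -v 0x00000000000000000000000000000001 /ws/disk1/ws_brick   # brick root only
[root@glusterfs2 ~]# gluster volume start StoragePool force   # restarts any brick processes that went down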
2017 Jul 08 (2 replies): [Gluster-devel] gfid and volume-id extended attributes lost
...eals on root?
+Sanoj
Sanoj,
Is there any systemtap script we can use to detect which process is
removing these xattrs?
On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> We lost the attributes on all the bricks on servers glusterfs2 and
> glusterfs3 again.
>
>
>
> [root@glusterfs2 Log_Files]# gluster volume info
>
>
>
> Volume Name: StoragePool
>
> Type: Distributed-Disperse
>
> Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
>
> Status: Started
>
> Number of Bricks: 20 x (2 + 1) = 60
>
>...
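A rough sketch of the kind of SystemTap one-liner being asked for here (the probe names come from the stock syscall tapset; this is an assumption about a workable approach, not a script from the thread), logging every process that issues a removexattr-family call:
[root@glusterfs2 ~]# stap -e 'probe syscall.removexattr, syscall.lremovexattr, syscall.fremovexattr { printf("%s pid=%d %s(%s)\n", execname(), pid(), name, argstr) }'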
2017 Jul 07 (3 replies): [Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> 3.7.19
>
These are the only callers of removexattr, and only _posix_remove_xattr has
the potential to remove these xattrs, because posix_removexattr already makes sure
it is not gfid/volume-id. And, surprise surprise, _posix_remove_xattr
happens only from the healing code of afr/ec. And this can only happen
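If one wants to reproduce that survey of callers, a quick way (the repository URL and tag are assumptions about where the 3.7.19 sources live, not the author's stated method) is to grep the relevant translators in the public GlusterFS tree:
$ git clone https://github.com/gluster/glusterfs.git && cd glusterfs
$ git checkout v3.7.19
$ git grep -n "removexattr" xlators/storage/posix xlators/cluster/afr xlators/cluster/ec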
2017 Jul 10 (0 replies): [Gluster-devel] gfid and volume-id extended attributes lost
...Is there any systemtap script we can use to detect which process is
> removing these xattrs?
>
> On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com
> > wrote:
>
>> We lost the attributes on all the bricks on servers glusterfs2 and
>> glusterfs3 again.
>>
>>
>>
>> [root@glusterfs2 Log_Files]# gluster volume info
>>
>>
>>
>> Volume Name: StoragePool
>>
>> Type: Distributed-Disperse
>>
>> Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
>>
>> Status: Starte...
2017 Jul 10 (2 replies): [Gluster-devel] gfid and volume-id extended attributes lost
...script we can use to detect which process
>> is removing these xattrs?
>>
>> On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <
>> areddy at commvault.com> wrote:
>>
>>> We lost the attributes on all the bricks on servers glusterfs2 and
>>> glusterfs3 again.
>>>
>>>
>>>
>>> [root@glusterfs2 Log_Files]# gluster volume info
>>>
>>>
>>>
>>> Volume Name: StoragePool
>>>
>>> Type: Distributed-Disperse
>>>
>>> Volume ID: 149e976f-4e21-451c-bf...
2017 Jul 10 (0 replies): [Gluster-devel] gfid and volume-id extended attributes lost
...which process
>>> is removing these xattrs?
>>>
>>> On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <
>>> areddy at commvault.com> wrote:
>>>
>>>> We lost the attributes on all the bricks on servers glusterfs2 and
>>>> glusterfs3 again.
>>>>
>>>>
>>>>
>>>> [root@glusterfs2 Log_Files]# gluster volume info
>>>>
>>>>
>>>>
>>>> Volume Name: StoragePool
>>>>
>>>> Type: Distributed-Disperse
>>>>
&...
2017 Jul 10 (2 replies): [Gluster-devel] gfid and volume-id extended attributes lost
...oj,
Is there any systemtap script we can use to detect which process is removing these xattrs?
On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com> wrote:
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.
[root@glusterfs2 Log_Files]# gluster volume info
Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs...
2017 Jul 13 (0 replies): [Gluster-devel] gfid and volume-id extended attributes lost
...Is there any systemtap script we can use to detect which process is
> removing these xattrs?
>
>
>
> On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com>
> wrote:
>
> We lost the attributes on all the bricks on servers glusterfs2 and
> glusterfs3 again.
>
>
>
> [root@glusterfs2 Log_Files]# gluster volume info
>
>
>
> Volume Name: StoragePool
>
> Type: Distributed-Disperse
>
> Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
>
> Status: Started
>
> Number of Bricks: 20 x (2 + 1) = 60
>
>...