search for: glusterfs2

Displaying 11 results from an estimated 11 matches for "glusterfs2".

2018 Jan 11
2
Sent and Received peer request (Connected)
...onnected)" to both server 1 and 2, the first and second servers report to each other "Sent and Received peer request (Connected)" but show the proper "Peer in Cluster (Connected)" to the third server. [root at glusterfs1]# gluster peer status Number of Peers: 2 Hostname: glusterfs2 Uuid: 0f4867d9-7be3-4dbc-83f6-ddcda58df607 State: Sent and Received peer request (Connected) Hostname: glusterfs3 Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f State: Peer in Cluster (Connected) [root at glusterfs2]# gluster peer status Number of Peers: 2 Hostname: glusterfs3 Uuid: 354fdb76-1205-...
2018 Jan 15
0
Sent and Received peer request (Connected)
...e first and second > servers report to each other "Sent and Received peer request > (Connected)" but show the proper "Peer in Cluster (Connected)" to the > third server. > > > [root at glusterfs1]# gluster peer status > Number of Peers: 2 > > Hostname: glusterfs2 > Uuid: 0f4867d9-7be3-4dbc-83f6-ddcda58df607 > State: Sent and Received peer request (Connected) > > Hostname: glusterfs3 > Uuid: 354fdb76-1205-4a5c-b335-66f2ee3e665f > State: Peer in Cluster (Connected) > > > > [root at glusterfs2]# gluster peer status > Number of...
2023 Feb 01
0
Corrupted object's [GFID], despite md5sum matches everywhere
...ecf65881

user at glusterfs1:~$ sudo md5sum /data/brick1/gv0/.glusterfs/quarantine/9be5eecf-5ad8-4256-8b08-879aecf65881
d41d8cd98f00b204e9800998ecf8427e  /data/brick1/gv0/.glusterfs/quarantine/9be5eecf-5ad8-4256-8b08-879aecf65881

but then on brick2 and brick3:

user at glusterfs2:~$ sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881
d4927e00e0db4498bcbbaedf3b5680ed  /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881

user at glusterfs3:~$ sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881
d4...
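A sketch for cross-checking such an object, not from the thread: it assumes the volume is named gv0, the brick paths quoted above, and that bitrot detection is enabled on the volume.

# Compare the object's checksum on every brick. On glusterfs1 the copy has been
# moved to .glusterfs/quarantine/, so adjust the path for that host.
for h in glusterfs1 glusterfs2 glusterfs3; do
  ssh "$h" "sudo md5sum /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881"
done

# Ask the scrubber what it recorded, and dump the bit-rot related xattrs on the local copy.
sudo gluster volume bitrot gv0 scrub status
sudo getfattr -d -m . -e hex /data/brick1/gv0/.glusterfs/9b/e5/9be5eecf-5ad8-4256-8b08-879aecf65881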
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.

[root at glusterfs2 Log_Files]# gluster volume info

Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Br...
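The attributes in question are the trusted.gfid and trusted.glusterfs.volume-id extended attributes that every brick root must carry. A quick way to check them (brick path taken from the volume info above; the loop over /ws/disk*/ is only an illustration):

# Dump all trusted.* xattrs on one brick root; a healthy brick shows both
# trusted.gfid and trusted.glusterfs.volume-id.
sudo getfattr -d -m . -e hex /ws/disk1/ws_brick

# Spot-check volume-id on every brick of this server.
for b in /ws/disk*/ws_brick; do
  echo "== $b"
  sudo getfattr -n trusted.glusterfs.volume-id -e hex "$b"
done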
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
...e are any metadata heals on root? +Sanoj Sanoj, Is there any systemtap script we can use to detect which process is removing these xattrs? On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > We lost the attributes on all the bricks on servers glusterfs2 and > glusterfs3 again. > > > > [root at glusterfs2 Log_Files]# gluster volume info > > > > Volume Name: StoragePool > > Type: Distributed-Disperse > > Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f > > Status: Started > > Number of Bricks: 20 x (...
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > 3.7.19 > These are the only callers for removexattr and only _posix_remove_xattr has the potential to do removexattr as posix_removexattr already makes sure that it is not gfid/volume-id. And surprise surprise _posix_remove_xattr happens only from healing code of afr/ec. And this can only happen
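The thread asks for a systemtap script to identify the process removing these xattrs. A stand-in that needs no tapset, sketched here on the assumption that auditd is running, is a Linux audit watch on the brick root, which records the PID and executable behind any attribute change:

# 'a' watches attribute-changing syscalls on the brick root
# (chmod/chown and the *setxattr/*removexattr family).
sudo auditctl -w /ws/disk1/ws_brick -p a -k brick-xattr

# After the gfid/volume-id attributes disappear again, see who touched them.
sudo ausearch -k brick-xattr -i | tail -n 50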
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
...> > Sanoj, > Is there any systemtap script we can use to detect which process is > removing these xattrs? > > On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com > > wrote: > >> We lost the attributes on all the bricks on servers glusterfs2 and >> glusterfs3 again. >> >> >> >> [root at glusterfs2 Log_Files]# gluster volume info >> >> >> >> Volume Name: StoragePool >> >> Type: Distributed-Disperse >> >> Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f >>...
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
...Is there any systemtap script we can use to detect which process >> is removing these xattrs? >> >> On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy < >> areddy at commvault.com> wrote: >> >>> We lost the attributes on all the bricks on servers glusterfs2 and >>> glusterfs3 again. >>> >>> >>> >>> [root at glusterfs2 Log_Files]# gluster volume info >>> >>> >>> >>> Volume Name: StoragePool >>> >>> Type: Distributed-Disperse >>> >>> Volu...
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
...mtap script we can use to detect which process >>> is removing these xattrs? >>> >>> On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy < >>> areddy at commvault.com> wrote: >>> >>>> We lost the attributes on all the bricks on servers glusterfs2 and >>>> glusterfs3 again. >>>> >>>> >>>> >>>> [root at glusterfs2 Log_Files]# gluster volume info >>>> >>>> >>>> >>>> Volume Name: StoragePool >>>> >>>> Type: Distrib...
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
...ot? +Sanoj Sanoj, Is there any systemtap script we can use to detect which process is removing these xattrs? On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com<mailto:areddy at commvault.com>> wrote: We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again. [root at glusterfs2 Log_Files]# gluster volume info Volume Name: StoragePool Type: Distributed-Disperse Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f Status: Started Number of Bricks: 20 x (2 + 1) = 60 Transport-type: tcp Bricks: Brick1: glusterfs1sds:/ws/disk1/ws_brick Br...
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
...Sanoj, > > Is there any systemtap script we can use to detect which process is > removing these xattrs? > > > > On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com> > wrote: > > We lost the attributes on all the bricks on servers glusterfs2 and > glusterfs3 again. > > > > [root at glusterfs2 Log_Files]# gluster volume info > > > > Volume Name: StoragePool > > Type: Distributed-Disperse > > Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f > > Status: Started > > Number of Bricks: 20 x (...