Displaying 20 results from an estimated 24 matches for "pathinfos".
2011 Jun 09
0
Samba on RHEL 6: Permission denied when mounting FUSE partition
...38:11,980 DEBUG FuseMethodsInvoker:401 - PathInfo for root is '0'
2011-06-02 16:38:11,980 DEBUG GetPathInfo:51 - Entering root and returning immediately with conf users: 3
2011-06-02 16:38:11,981 DEBUG GetPathInfo:75 - Mode is 16877
2011-06-02 16:38:11,981 DEBUG FuseMethodsInvoker:412 - PathInfos is Ok
2011-06-02 16:38:11,981 DEBUG FuseMethodsInvoker:413 - File mode is 16877
2011-06-02 16:38:11,998 DEBUG FuseMethodsInvoker:1162 - Entering javaAccess with path '/' -- mask '1'
2011-06-02 16:38:12,011 DEBUG FuseMethodsInvoker:370 - Entering getattr_pre with path '/*'...
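As a quick gloss on those log lines: 16877 is a decimal st_mode value; in octal it is 040755, i.e. a directory with rwxr-xr-x permissions. A one-liner to check the conversion:
# Convert the decimal st_mode 16877 to octal
printf '%o\n' 16877    # prints 40755 = S_IFDIR (040000) + 0755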
2012 Mar 31
1
File Inventory on Bricks
On 02/14/2012 05:17 PM, Heiko Schröter wrote:
> Hello,
>
> is there a function inside gluster that tells you which file resides on which brick in a distributed setup?
> Assume I want to find all files residing on rd24.
>
> The only method I can think of is logging into that RAID and finding all files locally.
>
try 'getfattr -n trusted.glusterfs.pathinfo
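For reference, a typical invocation on a FUSE-mounted client looks like this (the mount point and file name below are illustrative, not from the thread):
# Ask the client which brick(s) hold a given file; the answer is returned
# in the virtual trusted.glusterfs.pathinfo xattr
getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/path/to/file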
2024 Jan 25
1
Upgrade 10.4 -> 11.1 making problems
Good morning,
hope I got it right... using:
https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3.1/html/administration_guide/ch27s02
mount -t glusterfs -o aux-gfid-mount glusterpub1:/workdata /mnt/workdata
gfid 1:
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/workdata/.gfid/faf59566-10f5-4ddd-8b0c-a87bc6a334fb
getfattr: Removing leading '/' from absolute path
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it.
Like this:
# getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e
# file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e
trusted.gfid=0x00462be83e6149318bdadae1645c639e
trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
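As an aside, the trusted.gfid2path value is simply hex-encoded text of the form <parent-dir-gfid>/<basename>. A quick way to decode it (the hex below is the value from the mail):
# Decode the gfid2path payload; it names the parent directory's GFID and the file name
echo 30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079 | xxd -r -p; echo
# -> 02c3790c-8f7b-4d6e-94d6-96912190d111/filelocking.py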
2018 Apr 18
0
Bitrot - Restoring bad file
On 04/17/2018 06:25 PM, Omar Kohl wrote:
> Hi,
>
> I have a question regarding bitrot detection.
>
> Following the RedHat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file-restoration after bitrot.
>
> "gluster volume bitrot VOLNAME status" gets me the
2018 Apr 17
2
Bitrot - Restoring bad file
Hi,
I have a question regarding bitrot detection.
Following the RedHat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file-restoration after bitrot.
"gluster volume bitrot VOLNAME status" gets me the GFIDs that are corrupt and on which Host this happens.
As far as I can tell
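For orientation, a hedged sketch of inspecting one reported object on the named host (the brick path and GFID placeholders are mine, not from the thread):
# Dump the xattrs of the reported GFID on that host's brick; a corrupted
# object typically carries a trusted.bit-rot.bad-file marker
getfattr -d -e hex -m. /path/to/brick/.glusterfs/<first 2 chars>/<next 2 chars>/<full gfid>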
2017 Oct 19
2
gfid entries in volume heal info that do not heal
I've been following this particular thread as I have a similar issue
(RAID6 array failed out with 3 dead drives at once while a 12 TB load
was being copied into one mounted space - what a mess)
I have >700K GFID entries that have no path data. Example:
getfattr -d -e hex -m . .glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421
# file: .glusterfs/00/00/0000a5ef-5af7-401b-84b5-
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks!
From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com]
Sent: Monday, October 23, 2017 1:52 AM
To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com>
Cc: gluster-users <Gluster-users at gluster.org>
Subject: Re:
2024 Jan 24
1
Upgrade 10.4 -> 11.1 making problems
Hi,
Can you find and check the files with gfids:
60465723-5dc0-4ebe-aced-9f2c12e52642
faf59566-10f5-4ddd-8b0c-a87bc6a334fb
Use the 'getfattr -d -e hex -m. ' command from https://docs.gluster.org/en/main/Troubleshooting/resolving-splitbrain/#analysis-of-the-output .
Best Regards,
Strahil Nikolov
On Sat, Jan 20, 2024 at 9:44, Hu Bert <revirii at googlemail.com> wrote: Good morning,
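Spelled out against a brick (the brick path is a placeholder; the GFIDs are the two listed above), that check would look like:
# Dump all xattrs, in hex, of the two GFID entries on each brick
getfattr -d -e hex -m. /path/to/brick/.glusterfs/60/46/60465723-5dc0-4ebe-aced-9f2c12e52642
getfattr -d -e hex -m. /path/to/brick/.glusterfs/fa/f5/faf59566-10f5-4ddd-8b0c-a87bc6a334fb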
2020 Jul 08
1
Dovecot - Xoauth2 - keycloak
Hello,
Still trying to make Roundcube / Dovecot work with Keycloak.
Dovecot can't seem to validate the access_token that Roundcube gave.
-----
Jul 08 20:48:05 auth: Debug: http-client[1]: request [Req1: GET
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt,
Can you also check the link count in the stat output of those hardlink
entries in the .glusterfs folder on the bricks?
If the link count is 1 on all the bricks for those entries, then they are
orphaned entries and you can delete those hardlinks.
To be on the safer side have a backup before deleting any of the entries.
Regards,
Karthik
On Fri, Oct 20, 2017 at 3:18 AM, Jim
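A minimal sketch of that check (the brick path and GFID below are placeholders):
# Print the hard-link count and name of a GFID entry on a brick;
# a count of 1 on every brick means the entry is orphaned
stat -c '%h %n' /path/to/brick/.glusterfs/<first 2 chars>/<next 2 chars>/<full gfid>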
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data
that supplies the path to the original.
I have the inode from stat. Looking now to dig out the path/filename
from xfs_db on the specific inodes individually.
Is the hash of the filename or of <path>/filename, and if so, relative to
where: /, <path from top of brick>, ?
On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim,
Can you check whether the same hardlinks are present on both the bricks and
whether both of them have a link count of 2?
If the link count is 2, then
"find <brickpath> -samefile <brickpath>/.glusterfs/<first two chars of gfid>/<next two chars of gfid>/<full gfid>"
should give you the file path.
Regards,
Karthik
On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
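Filled in with a placeholder brick path and one of the GFIDs quoted elsewhere in these results, that command becomes:
# The two subdirectories are the first four hex characters of the GFID;
# the -not -path filter (an optional addition) hides the .glusterfs hardlink itself
find /path/to/brick -samefile /path/to/brick/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 -not -path '*/.glusterfs/*'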
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only
on the brick that was live during the outage and concurrent file copy-
in. The brick that was down at that time has no GFIDs that are not also
on the up brick.
As the bricks are 10TB, the find is going to be a long running process.
I'm running several finds at once with GNU parallel but it will still
take some time.
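One way such a batch could be driven (entirely a sketch; the brick path, list file and job count are assumptions):
# Emit one 'find -samefile' command per GFID and let GNU parallel run eight at a time
while read -r gfid; do
  echo "find /data/brick -samefile /data/brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid} -not -path '*/.glusterfs/*'"
done < gfid-list.txt | parallel -j8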
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while!
I have the following stats:
4085169 files exist in both bricks. 3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running
when bmidata1 died.
gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log.
>> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
[root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file:
2019 Dec 28
1
GFS performance under heavy traffic
Hi David,
It seems that I have misread your quorum options, so just ignore that from my previous e-mail.
Best Regards,
Strahil Nikolov
On Dec 27, 2019 15:38, Strahil <hunter86_bg at yahoo.com> wrote:
>
> Hi David,
>
> Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt,
Run these commands on all the bricks of the replica pair to get the attrs
set on the backend.
On the bricks of first replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
On the fourth replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
Also run the "gluster volume
2019 Dec 27
0
GFS performance under heavy traffic
Hi David,
Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
Also, the gluster client should remount in order to bump the gluster op-version.
What kind of workload do you have ?
I'm asking as there are predefined (and recommended) settings located at /var/lib/glusterd/groups.
You
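For reference, a hedged sketch of working with those option groups (the volume name is a placeholder, and 'virt' is just one of the shipped profiles):
# List the predefined option groups shipped with glusterd
ls /var/lib/glusterd/groups
# Apply one of them to a volume in a single step
gluster volume set myvol group virt
# Check the cluster's current op-version
gluster volume get all cluster.op-version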
2019 Dec 24
1
GFS performance under heavy traffic
Hi David,
On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hello,
>
> In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards