Displaying 20 results from an estimated 1000 matches similar to: "Corrupted object's [GFID], despite md5sum matches everywhere"
2018 Jan 15
0
Sent and Received peer request (Connected)
On Fri, 12 Jan 2018 at 01:34, Dj Merrill <gluster at deej.net> wrote:
> This morning I did a rolling update from the latest 3.7.x to 3.12.4,
> with no client activity. "Rolling" as in, shut down the Gluster
> services on the first server, update, reboot, wait until up and running,
> proceed to the next server. I anticipated that a 3.12 server might not
> properly
2017 Jul 27
0
GFID is null after adding large amounts of data
Hi Cluster Community,
we are seeing some problems when adding multiple terabytes of data to a two-node replicated GlusterFS installation.
The version is 3.8.11 on CentOS 7.
The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMware.
After a restart of node-1 we see that the log files are growing to multiple gigabytes a day.
Also there seem to be problems
2018 Jan 11
2
Sent and Received peer request (Connected)
This morning I did a rolling update from the latest 3.7.x to 3.12.4,
with no client activity. "Rolling" as in, shut down the Gluster
services on the first server, update, reboot, wait until up and running,
proceed to the next server. I anticipated that a 3.12 server might not
properly talk to a 3.7 server but since I had no client activity I was
not overly concerned.
All three servers
2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files (any tips on finding them would be appreciated), but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10.
[root at tpc-cent-glus1-081017 ~]#
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while!
I have the following stats:
4085169 files in both bricks
3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running
when bmidata1 died.
gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
2017 Aug 29
0
GFID attr is missing after adding large amounts of data
This is strange, a couple of questions:
1. What volume type is this? What tuning have you done? gluster v info output would be helpful here.
2. How big are your bricks?
3. Can you write me a quick reproducer so I can try this in the lab? Is it just a single multi TB file you are untarring or many? If you give me the steps to repro, and I hit it, we can get a bug open.
4. Other than
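A minimal sketch of how the diagnostics requested above might be gathered; the volume name "myvol" and the brick path are placeholders, not values from this thread:

# Volume type and any tuning options (question 1)
gluster volume info myvol
# Brick sizes (question 2): run on each brick host
df -h /data/glusterfs/myvol/brick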
2017 Oct 18
1
gfid entries in volume heal info that do not heal
Hey Matt,
From the xattr output, it looks like the files are not present on the
arbiter brick and need healing. But the parent does not have the
pending markers set for those entries.
The workaround is to do a lookup from the mount on the file that needs
healing; this creates the entry on the arbiter brick, and then you can
run the volume heal to do the healing.
Follow
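A rough sketch of the workaround described above, assuming the volume is mounted at /mnt/glustervol and that path/to/file is an entry reported by heal info (both names are placeholders):

# Trigger a lookup from the client mount so the entry gets created on the arbiter brick
stat /mnt/glustervol/path/to/file
# Then start a heal and verify the entry is gone (volume name is a placeholder)
gluster volume heal myvol
gluster volume heal myvol info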
2017 Aug 28
2
GFID attr is missing after adding large amounts of data
Hi Cluster Community,
we are seeing some problems when adding multiple terabytes of data to a two-node replicated GlusterFS installation.
The version is 3.8.11 on CentOS 7.
The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMware.
After a restart of node-1 we see that the log files are growing to multiple gigabytes a day.
Also there seem to be problems
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt,
Can you also check the link count in the stat output of those hardlink
entries in the .glusterfs folder on the bricks?
If the link count is 1 on all the bricks for those entries, then they are
orphaned entries and you can delete those hardlinks.
To be on the safer side, have a backup before deleting any of the entries.
Regards,
Karthik
On Fri, Oct 20, 2017 at 3:18 AM, Jim
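A sketch of the link-count check described above, to be run on every brick of the replica; the brick path and GFID are placeholders:

# %h prints the hard-link count, %n the file name
stat -c '%h %n' /data/glusterfs/myvol/brick/.glusterfs/00/00/<full-gfid>
# If the count is 1 on all bricks, the entry is orphaned and the hardlink
# can be removed; take a backup first, as advised above.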
2017 Sep 01
1
GFID attr is missing after adding large amounts of data
I re-added gluster-users to get some more eyes on this.
----- Original Message -----
> From: "Christoph Sch?bel" <christoph.schaebel at dc-square.de>
> To: "Ben Turner" <bturner at redhat.com>
> Sent: Wednesday, August 30, 2017 8:18:31 AM
> Subject: Re: [Gluster-users] GFID attr is missing after adding large amounts of data
>
> Hello Ben,
>
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only
on the brick that was live during the outage and concurrent file copy-in.
The brick that was down at that time has no GFIDs that are not also
on the up brick.
As the bricks are 10TB, the find is going to be a long-running process.
I'm running several finds at once with GNU parallel, but it will still
take some time.
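One way such parallel searches might be driven with GNU parallel, shown only as a sketch; the input file, brick path, and job count are assumptions, not what the poster ran:

# paths.txt holds one .glusterfs entry path per line (taken from heal info output)
# Run up to four find -samefile scans of the brick at a time
parallel -j 4 find /data/glusterfs/myvol/brick -samefile {} :::: paths.txt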
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log.
>> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
[root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file:
2017 Oct 19
2
gfid entries in volume heal info that do not heal
I've been following this particular thread as I have a similar issue
(RAID6 array failed out with 3 dead drives at once while a 12 TB load
was being copied into one mounted space - what a mess)
I have >700K GFID entries that have no path data. Example:
getfattr -d -e hex -m . .glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421
# file: .glusterfs/00/00/0000a5ef-5af7-401b-84b5-
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks!
From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com]
Sent: Monday, October 23, 2017 1:52 AM
To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com>
Cc: gluster-users <Gluster-users at gluster.org>
Subject: Re:
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data
that supplies the path to the original.
I have the inode from stat, and I'm now looking to dig the path/filename
out of xfs_db on the specific inodes individually.
Is the hash computed from the filename alone or from <path>/filename, and if
the latter, relative to what: /, the path from the top of the brick, or something else?
On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
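Since the inode number is already known from stat, a plain find by inode number is a possible alternative to xfs_db; this is a sketch of that alternative, not what the poster used, and the brick path and inode number are placeholders:

# Scan the brick filesystem for directory entries pointing at inode 123456789
# (-xdev keeps the scan on this one filesystem; it is still a full walk)
find /data/glusterfs/myvol/brick -xdev -inum 123456789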
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt,
Run these commands on all the bricks of the replica pair to get the attrs
set on the backend.
On the bricks of the first replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/
108694db-c039-4b7c-bd3d-ad6a15d811a2
On the fourth replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/
e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
Also run the "gluster volume
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim,
Can you check whether the same hardlinks are present on both the bricks and
whether both of them have a link count of 2?
If the link count is 2, then
find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of
gfid>/<next two characters of gfid>/<full gfid>
should give you the file path.
Regards,
Karthik
On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
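Spelled out with the GFID quoted earlier in this listing and a placeholder brick path, Karthik's suggestion would look roughly like this:

# GFID 0000a5ef-5af7-401b-84b5-ff2a51c10421 lives under .glusterfs/00/00/
find /data/glusterfs/myvol/brick \
  -samefile /data/glusterfs/myvol/brick/.glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421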
2007 Nov 26
0
Winbind / AIX 5.3 returns incomplete user information
Hi,
We are facing a problem on AIX 5.3 (latest patch) where the following
behavior happens. Reproduced with versions of samba from 3.0.23 to
3.0.26a.
# Normal behavior :
# id and id username should return the same info
#
root@srv1:/# id
uid=0(root) gid=0(system)
groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
root@srv1:/# id root
uid=0(root) gid=0(system)
2013 Dec 10
1
Error after crash of Virtual Machine during migration
Greetings,
Legend:
storage-gfs-3-prd - the first gluster.
storage-1-saas - the new gluster to which "the first gluster" had to be
migrated.
storage-gfs-4-prd - the second gluster (which had to be migrated later).
I've started the replace-brick command:
'gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared
storage-1-saas:/ydp/shared start'
During that Virtual
2015 Feb 05
0
Another Fedora decision
> On Feb 4, 2015, at 4:53 PM, Always Learning <centos at u64.u22.net> wrote:
>
> On C5 the default appears to be:-
>
> -rw-r--r-- 1 root root 1220 Jan 31 03:04 shadow
Nope:
# rpm -q --dump setup|grep shadow
/etc/gshadow 0 1329943062 d41d8cd98f00b204e9800998ecf8427e 0100400 root root 1 0 0 X
/etc/shadow 0 1329943062 d41d8cd98f00b204e9800998ecf8427e 0100400 root root 1 0 0
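As an aside, if the on-disk permissions ever differ from what the setup package recorded, a hedged sketch of how they could be checked and restored (the package name is taken from the rpm query above):

# Report files whose mode or ownership differ from the package database
rpm -V setup
# Reset permissions and ownership to the packaged defaults
rpm --setperms setup
rpm --setugids setup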