similar to: Recovering files "lost" during a rebalance on a Dispersed 3+1

Displaying 20 results from an estimated 6000 matches similar to: "Recovering files "lost" during a rebalance on a Dispersed 3+1"

2017 Jun 05
0
Rebalance failing on fix-layout
Hello, over the past couple of weeks I had some issues with the firmware on the OS hard drives in my gluster cluster. I have recently fixed the issue and am bringing my bricks back into the volume. I am running gluster 3.7.6 and am hitting the following problem: when I add the brick and rebalance, the operation fails after a couple of minutes. The error I find in the rebalance log is this: [2017-06-05
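A minimal sketch of how such a failed rebalance is usually inspected, assuming a hypothetical volume name <vol>:
Check per-node progress and failure counts:
  # gluster volume rebalance <vol> status
Retry only the layout fix:
  # gluster volume rebalance <vol> fix-layout start
The rebalance log quoted above is typically written to:
  # less /var/log/glusterfs/<vol>-rebalance.log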
2011 Jun 28
0
[Gluster-devel] volume rebalance still broken
Replying and adding gluster-users. That seems more appropriate? ________________________________________ From: gluster-devel-bounces+jwalker=gluster.com at nongnu.org [gluster-devel-bounces+jwalker=gluster.com at nongnu.org] on behalf of Emmanuel Dreyfus [manu at netbsd.org] Sent: Tuesday, June 28, 2011 6:51 AM To: gluster-devel at nongnu.org Subject: [Gluster-devel] volume rebalance still broken
2017 Jun 06
1
Files Missing on Client Side; Still available on bricks
Hello, I am still working on recovering from a few failed OS hard drives on my gluster storage and have been removing and re-adding bricks quite a bit. I noticed last night that some of the directories are not visible when I access them through the client, but are still present on the brick. For example: Client: # ls /scratch/dw Ethiopian_imputation HGDP Rolwaling Tibetan_Alignment Brick: #
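A minimal sketch of how a directory that is present on a brick but missing on the client can be examined, assuming a hypothetical volume name <vol>, a hypothetical brick path /data/brick1 and the /scratch client mount shown above:
Inspect the directory's gfid and DHT layout xattrs on each brick:
  # getfattr -d -m . -e hex /data/brick1/dw
Trigger a named lookup from the client, which can heal a missing directory entry:
  # stat /scratch/dw
Recreate directory layouts across all bricks after add-brick/remove-brick:
  # gluster volume rebalance <vol> fix-layout start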
2018 Mar 20
0
Disperse volume recovery and healing
On Tue, Mar 20, 2018 at 5:26 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > That makes sense. In the case of "file damage," it would show up as files > that could not be healed in logfiles or gluster volume heal [volume] info? > If the damage affects more bricks than the volume redundancy, then probably yes. These files or directories will appear in
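A minimal sketch of the heal commands referred to above, assuming a hypothetical volume name <vol>:
List entries that are pending or failing heal, per brick:
  # gluster volume heal <vol> info
Force a full heal crawl of the volume:
  # gluster volume heal <vol> full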
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help in figuring this out! We changed our configuration and, after a successful test yesterday, we have run into a new issue today. The test, which included moderate read/write (~20-30 Mb/s) and scaling the storage, ran for about 3 hours, and at some point the system got stuck. At the user level, errors like this appear when trying to work with the filesystem: OSError:
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
Raghavendra, Sorry for the late follow up. I have some more data on the issue. The issue tends to happen when the shards are created. The easiest time to reproduce this is during an initial VM disk format. This is a log from a test VM that was launched, and then partitioned and formatted with LVM / XFS: [2018-04-03 02:05:00.838440] W [MSGID: 109048]
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :). This looks to be a genuine issue which will require some effort to fix. Can you file a bug? I need the following information attached to the bug: * Client and brick logs. If you can reproduce the issue, please set diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If you cannot reproduce the issue, or if you cannot accommodate such big logs, please set
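A minimal sketch of how those log levels are raised and reverted, assuming a hypothetical volume name <vol>:
  # gluster volume set <vol> diagnostics.client-log-level TRACE
  # gluster volume set <vol> diagnostics.brick-log-level TRACE
Reproduce the issue, collect the client logs under /var/log/glusterfs/ and the brick logs under /var/log/glusterfs/bricks/, then revert:
  # gluster volume reset <vol> diagnostics.client-log-level
  # gluster volume reset <vol> diagnostics.brick-log-level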
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
Pranith, Thanks for looking into the issue. The bricks were mounted after the reboot. One more thing that I noticed: when the attributes were manually set while glusterd was up, they were lost again on starting the volume. I had to stop glusterd, set the attributes, and then start glusterd. After that, the volume start succeeded. Thanks and Regards, Ram From: Pranith
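A minimal sketch of that restore procedure, assuming hypothetical brick paths (/data/brick1 for the affected brick, /data/brick2 for an intact brick of the same volume) and a systemd-based host:
  # systemctl stop glusterd
Read the volume-id from an intact brick:
  # getfattr -n trusted.glusterfs.volume-id -e hex /data/brick2
Restore it on the affected brick root:
  # setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /data/brick1
  # systemctl start glusterd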
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check that free disk space is available for the volume. On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote: > Herb, > What are the high and low watermarks for the tier set at ? > > # gluster volume get <vol> cluster.watermark-hi > > # gluster volume get
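A minimal sketch of that free-space check, assuming a hypothetical hot-tier brick path /data/hot-brick; running out of either blocks or inodes produces "no space left on device":
  # df -h /data/hot-brick
  # df -i /data/hot-brick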
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian, Do you have a reproducer for this bug? If not a specific one, a general outline of which operations were done on the file will help. regards, Raghavendra On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> > wrote: > >> The gfid mismatch
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
Did anything special happen on these two bricks? It can't happen in the I/O path: posix_removexattr() has:
        if (!strcmp (GFID_XATTR_KEY, name)) {
                gf_msg (this->name, GF_LOG_WARNING, 0, P_MSG_XATTR_NOT_REMOVED,
                        "Remove xattr called on gfid for file %s", real_path);
                op_ret = -1;
                goto
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
3.7.19 Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Friday, July 07, 2017 11:54 AM To: Ankireddypalle Reddy Cc: Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at
2018 Feb 05
0
Fwd: Troubleshooting glusterfs
On 5 February 2018 at 15:40, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > > I see a lot of the following messages in the logs: > [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] > 0-glusterfs: No change in volfile,continuing > [2018-02-04 07:41:16.189349] W [MSGID: 109011] > [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no
2017 Oct 22
0
gluster tiering errors
Herb, What are the high and low watermarks set for the tier? # gluster volume get <vol> cluster.watermark-hi # gluster volume get <vol> cluster.watermark-low What is the size of the file that failed to migrate, as per the following tierd log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. I can see quota turned *on* and would like you to check the quota settings and test to see system behavior *if quota is turned off*. Although the file that failed migration was only 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
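A minimal sketch of toggling quota for this test, assuming a hypothetical volume name <vol>; note that disabling quota is expected to discard the configured limits, so record them first:
  # gluster volume quota <vol> list
  # gluster volume quota <vol> disable
Re-run the tiering workload, then re-enable quota and reapply the limits:
  # gluster volume quota <vol> enable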
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Hi, Sorry I didn't confirm the results sooner. Yes, it's working fine without issues for me. If anyone else can confirm, we can be sure it's 100% resolved. -- Respectfully Mahdi A. Mahdi ________________________________ From: Krutika Dhananjay <kdhananj at redhat.com> Sent: Tuesday, June 6, 2017 9:17:40 AM To: Mahdi Adnan Cc: gluster-user; Gandalf Corvotempesta; Lindsay
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > The gfid mismatch here is between the shard and its "link-to" file, the > creation of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I assume shard doesn't do any dentry operations like rename,
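A minimal sketch of how the two copies of a shard can be compared on the bricks, assuming hypothetical brick paths; shard files live under the hidden .shard directory at the brick root, and a DHT link-to file is a zero-byte, sticky-bit file carrying the trusted.glusterfs.dht.linkto xattr:
  # getfattr -d -m . -e hex /data/brick1/.shard/<gfid>.<n>
  # getfattr -d -m . -e hex /data/brick2/.shard/<gfid>.<n>
Compare the trusted.gfid values reported for the two copies; a mismatch between the data file and its link-to file is the symptom discussed in this thread.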
2017 Jul 07
2
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > Pranith, > > Thanks for looking into the issue. The bricks were > mounted after the reboot. One more thing that I noticed: when the > attributes were manually set while glusterd was up, they were again > lost on starting the volume. Had to stop glusterd
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Any additional tests would be great, as a similar bug was detected and fixed some months ago, and after that this bug arose. It is still unclear to me why two very similar bugs were discovered at two different times for the same operation. How is this possible? If you fixed the first bug, why wasn't the second one triggered in your test environment? On 6 Jun 2017 10:35 AM, "Mahdi
2017 May 17
3
Rebalance + VM corruption - current status and request for feedback
Hi, In the past couple of weeks, we've sent the following fixes concerning VM corruption upon doing rebalance - https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051 These fixes are very much part of the latest 3.10.2 release. Satheesaran within Red Hat also verified that they work and he's not seeing corruption issues anymore. I'd like to
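A minimal sketch of confirming the installed release on every node before re-running a rebalance on VM-hosting volumes, given that the fixes above are stated to be in 3.10.2:
  # gluster --version
Run this on each server and client node; all of them should report 3.10.2 or later before the rebalance is retried.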