Displaying 20 results from an estimated 2000 matches similar to: "Manually delete .glusterfs/changelogs directory ?"
2017 Aug 31
2
Manually delete .glusterfs/changelogs directory ?
Hi Mabi,
If you will not use that geo-replication volume session again, I believe it
is safe to manually delete the files in the brick directory using rm -rf.
However, the gluster documentation specifies that if the session is to be
permanently deleted, this is the command to use:
gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time
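For context, a minimal sketch of the full cleanup sequence with the same example names (master volume gv1, slave volume gv2 on snode1); the session normally has to be stopped before it can be deleted:

# stop the session first; append "force" if it is stuck in Faulty state
gluster volume geo-replication gv1 snode1::gv2 stop
# then delete it; reset-sync-time makes a future session start from scratch
gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time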
2017 Aug 30
0
Manually delete .glusterfs/changelogs directory ?
Hi, does anyone have any advice about my question below? Thanks!
> -------- Original Message --------
> Subject: Manually delete .glusterfs/changelogs directory ?
> Local Time: August 16, 2017 5:59 PM
> UTC Time: August 16, 2017 3:59 PM
> From: mabi at protonmail.ch
> To: Gluster Users <gluster-users at gluster.org>
>
> Hello,
>
> I just deleted (permanently)
2017 Jul 29
2
Not possible to stop geo-rep after adding arbiter to replica 2
Hello
To my two-node replica volume I have added an arbiter node for safety purposes. On that volume I also have geo-replication running and would like to stop it: its status is "Faulty" and it keeps trying over and over to sync without success. I am using GlusterFS 3.8.11.
So in order to stop geo-rep I use:
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop
but it
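As it turns out later in this thread, appending "force" lets the stop go through even while the session is in the Faulty state; a minimal sketch with the same names:

# force-stop the Faulty geo-replication session
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop force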
2017 Aug 01
3
How to delete geo-replication session?
Hi,
I would like to delete a geo-replication session on my GlusterFS 3.8.11 replica 2 volume in order to re-create it. Unfortunately the "delete" command does not work, as you can see below:
$ sudo gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete
Staging failed on arbiternode.domain.tld. Error: Geo-replication session between myvolume and
2017 Jul 29
1
Not possible to stop geo-rep after adding arbiter to replica 2
I managed to force-stop geo-replication by using the "force" parameter after "stop", but there are still other issues related to the fact that my geo-replication setup was created before I added the additional arbiter node to my replica.
For example, when I want to stop my volume I simply can't, and I get the following error:
volume stop: myvolume: failed: Staging
2017 Jul 29
0
Not possible to stop geo-rep after adding arbiter to replica 2
Adding Rahul and Kothresh, who are SMEs on geo-replication
Thanks & Regards
Karan Sandha
On Sat, Jul 29, 2017 at 3:37 PM, mabi <mabi at protonmail.ch> wrote:
> Hello
>
> To my two-node replica volume I have added an arbiter node for safety
> purposes. On that volume I also have geo-replication running and would like
> to stop it: its status is "Faulty" and it keeps
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is
run (without any volume name)?
gluster volume geo-replication status
Volume stop force should work even if a Geo-replication session exists.
From the error it looks like the node "arbiternode.domain.tld" in the Master
cluster is down or not reachable.
regards
Aravinda VK
On 08/07/2017 10:01 PM, mabi wrote:
> Hi,
>
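A short sketch of the two suggestions above, reusing the volume name from earlier in the thread (myvolume is assumed here):

# list every geo-replication session known to the cluster
gluster volume geo-replication status
# stopping the volume with force should work even while a session exists
gluster volume stop myvolume force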
2017 Aug 07
0
How to delete geo-replication session?
Hi,
I would really like to get rid of this geo-replication session as I am stuck with it right now. For example, I can't even stop my volume as it complains about that geo-replication...
Can someone let me know how I can delete it?
Thanks
> -------- Original Message --------
> Subject: How to delete geo-replication session?
> Local Time: August 1, 2017 12:15 PM
> UTC Time: August
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail.
Please perform the following steps once a new node is added
- Run the gsec_create command again
gluster system:: execute gsec_create
- Run the Geo-rep create command with force, then run start force
gluster volume geo-replication <mastervol> <slavehost>::<slavevol>
create push-pem force
gluster volume geo-replication <mastervol>
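The last command is cut off in the archive; a sketch of the complete sequence described in the two steps above, keeping the same placeholders:

# regenerate the common pem keys so the newly added node is included
gluster system:: execute gsec_create
# recreate the session, pushing the keys to the slave, then restart it
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start force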
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean with the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes, with the output of all 3 nodes below:
sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls
node1:
8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote:
> Now I understand what you mean with the "-samefile" parameter of
> "find". As requested I have now run the following command on all 3
> nodes, with the output of all 3 nodes below:
>
> sudo find /data/myvolume/brick -samefile
> /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397
> -ls
>
>
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation:
on node2 I have found the following xattrop/indices file, which matches the GFID from the "heal info" command (below is the output of "ls -lai"):
2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
As you can see this file has inode number 2798404, so I ran
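The message is truncated here; a common next step from an inode number like this (an assumption, not necessarily what was actually run in this thread) is to search the brick for other hard links to the same inode:

# list every path on the brick that shares inode 2798404
sudo find /data/myvolume/brick -inum 2798404 -ls
# equivalent lookup using the path instead of the inode number
sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls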
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote:
> To quickly summarize my current situation:
>
> on node2 I have found the following xattrop/indices file, which
> matches the GFID from the "heal info" command (below is the output of
> "ls -lai"):
>
> 2798404 ---------- 2 root root 0 Apr 28 22:51
>
2018 Feb 27
2
Failed to get quota limits
Hi Mabi,
The bug is fixed from 3.11 onwards. For 3.10 it is yet to be backported and
made available.
The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259.
On Sat, Feb 24, 2018 at 4:05 PM, mabi <mabi at protonmail.ch> wrote:
> Dear Hari,
>
> Thank you for getting back to me after having analysed the problem.
>
> As you said I tried to run "gluster volume quota
2018 Feb 27
0
Failed to get quota limits
Hi,
Thanks for the link to the bug. We should hopefully be moving to 3.12 soon, so I guess this bug is also fixed there.
Best regards,
M.
-------- Original Message --------
On February 27, 2018 9:38 AM, Hari Gowtham <hgowtham at redhat.com> wrote:
>
> Hi Mabi,
>
> The bug is fixed from 3.11 onwards. For 3.10 it is yet to be backported and
>
> made available.
>
2017 Aug 22
0
self-heal not working
On 08/22/2017 02:30 PM, mabi wrote:
> Thanks for the additional hints. I have the following 2 questions first:
>
> - In order to launch the index heal is the following command correct:
> gluster volume heal myvolume
>
Yes
> - If I run a "volume start force" will it have any short disruptions
> on my clients which mount the volume through FUSE? If yes, how long?
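For reference, a short sketch of the heal commands being discussed, with the volume name used throughout the thread:

# trigger the index (self-)heal
gluster volume heal myvolume
# check which entries still need healing and whether any are in split-brain
gluster volume heal myvolume info
gluster volume heal myvolume info split-brain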
2018 Feb 23
2
Failed to get quota limits
Hi,
There is a bug in 3.10 which prevents the quota list command from
producing output if the last entry in the conf file is a stale entry.
The workaround for this is to remove the stale entry at the end. (If
the last two entries are stale then both have to be removed, and so on,
until the last entry in the conf file is a valid entry.)
This can be avoided by adding a new limit. As the new limit you
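The message is truncated, but based on the workaround described above, a sketch of adding a fresh limit so the last entry in the quota conf file is valid again (the volume name, path and size below are placeholders):

# setting any new limit appends a valid entry after the stale one(s)
gluster volume quota myvolume limit-usage /some-new-directory 10GB
# after which listing the limits should work again
gluster volume quota myvolume list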
2017 Aug 21
2
self-heal not working
Sure, it doesn't look like a split-brain based on the output:
Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node2.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
> -------- Original
2017 Aug 22
3
self-heal not working
Thanks for the additional hints. I have the following 2 questions first:
- In order to launch the index heal is the following command correct:
gluster volume heal myvolume
- If I run a "volume start force" will it have any short disruptions on my clients which mount the volume through FUSE? If yes, how long? This is a production system, which is why I am asking.
> --------