
Displaying 20 results from an estimated 10000 matches similar to: "Migrate and reduce volume"

2011 Sep 16
2
Can't replace dead peer/brick
I have a simple setup:

gluster> volume info
Volume Name: myvolume
Type: Distributed-Replicate
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.2.218.188:/srv
Brick2: 10.116.245.136:/srv
Brick3: 10.206.38.103:/srv
Brick4: 10.114.41.53:/srv
Brick5: 10.68.73.41:/srv
Brick6: 10.204.129.91:/srv

I *killed* Brick #4 (kill -9 and then shut down instance). My
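
A dead brick in a replicated volume like this is usually swapped out with replace-brick rather than re-added. A minimal sketch, assuming a hypothetical replacement host 10.0.0.10 exporting /srv (the replacement address is a placeholder, not from the thread):

# bring the replacement node into the pool, then swap the dead brick for the new one
$ gluster peer probe 10.0.0.10
$ gluster volume replace-brick myvolume 10.114.41.53:/srv 10.0.0.10:/srv commit force
# let the surviving replica copy its data onto the new brick
$ gluster volume heal myvolume full
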
2018 Feb 25
0
Re-adding an existing brick to a volume
.glusterfs and the attrs are already in that folder, so it would not connect it as a brick. I don't think there is an option to "reconnect the brick back". What I did many times: delete .glusterfs and reset the attrs on the folder, connect the brick, and then update those attrs with stat commands; example here http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html Vlad On Sun, Feb 25, 2018
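
The "reset the attrs" step Vlad refers to usually means clearing the brick's volume-id and gfid xattrs along with the .glusterfs directory before re-adding it. A rough sketch, with /gluster/VOLUME/brick0/brick standing in for the old brick path, run on the brick server itself:

# remove the markers that make glusterd reject the path as "already part of a volume"
$ sudo setfattr -x trusted.glusterfs.volume-id /gluster/VOLUME/brick0/brick
$ sudo setfattr -x trusted.gfid /gluster/VOLUME/brick0/brick
# drop the old .glusterfs metadata; the data files themselves stay in place
$ sudo rm -rf /gluster/VOLUME/brick0/brick/.glusterfs
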
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi! I am running a replica 3 volume. On server2 I wanted to move the brick to a new disk. I removed the brick from the volume:

gluster volume remove-brick VOLUME rep 2 server2:/gluster/VOLUME/brick0/brick force

I unmounted the old brick and mounted the new disk to the same location. I added the empty new brick to the volume:

gluster volume add-brick VOLUME rep 3
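
A sketch of the whole move with the follow-up heal included; the CLI spells the keyword out as replica, and the paths and counts below are taken from the post rather than re-verified:

# drop server2's copy, shrinking the replica count to 2
$ gluster volume remove-brick VOLUME replica 2 server2:/gluster/VOLUME/brick0/brick force
# (swap disks: unmount the old brick, mount the new disk at the same path)
# re-add the empty brick, growing the replica count back to 3
$ gluster volume add-brick VOLUME replica 3 server2:/gluster/VOLUME/brick0/brick
# populate the new brick from the other replicas
$ gluster volume heal VOLUME full
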
2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this. Remove attrs from the brick and delete the .glusterfs folder. Data stays in place. Add the brick to the volume. Since most of the data is the same as on the actual volume it does not need to be synced, and the heal operation finishes much faster. Do I have this right? Kind regards, Mitja On 25/02/2018 17:02, Vlad Kopylov wrote: > .gluster and attr already in
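
If the procedure is followed, the heal progress can be watched from any peer to see how much actually needs re-syncing; a minimal sketch, reusing the volume name from the thread:

# kick off a full self-heal and check what is still pending
$ gluster volume heal VOLUME full
$ gluster volume heal VOLUME info
$ gluster volume heal VOLUME statistics heal-count
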
2018 Feb 13
0
Failed to get quota limits
Yes, I need the log files from that period. The log files rotated after hitting the issue aren't necessary, but the ones from before you hit the issue are needed (not just from when you hit it, but from even before you hit it). Yes, you have to do a stat from the client through the fuse mount. On Tue, Feb 13, 2018 at 3:56 PM, mabi <mabi at protonmail.ch> wrote: > Thank you for your answer. This
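
The stat through the fuse mount would look roughly like the following, assuming the volume is mounted at a hypothetical /mnt/myvolume (the mount point and server are placeholders):

# mount the volume via the fuse client, then stat the directory carrying the quota
$ sudo mount -t glusterfs localhost:/myvolume /mnt/myvolume
$ stat /mnt/myvolume/directory
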
2018 Feb 13
0
Failed to get quota limits
I tried to set the limits as you suggested by running the following command:

$ sudo gluster volume quota myvolume limit-usage /directory 200GB
volume quota : success

but then when I list the quotas there is still nothing, so nothing really happened. I also tried to run stat on all directories which have a quota but nothing happened either. I will send you all the other logfiles tomorrow as
2018 Apr 06
0
Can't stop volume using gluster volume stop
Hello, On one of my GlusterFS 3.12.7 3-way replica volumes I can't stop the volume using the standard gluster volume stop command, as you can see below:

$ sudo gluster volume stop myvolume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: myvolume: failed: geo-replication Unable to get the status of active geo-replication session for the volume
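
When an active geo-replication session blocks the stop, the usual order is to stop the session first and then the volume; a sketch, with SLAVEHOST and SLAVEVOL standing in for the session endpoints, which are not given in the excerpt:

# list the geo-replication sessions registered for the volume
$ sudo gluster volume geo-replication myvolume status
# stop the session, then the volume itself
$ sudo gluster volume geo-replication myvolume SLAVEHOST::SLAVEVOL stop
$ sudo gluster volume stop myvolume
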
2018 Feb 13
0
Failed to get quota limits
Hi, A part of the log won't be enough to debug the issue. I need the whole log, with all messages up to date. You can send it as attachments. Yes, the quota.conf is a binary file. And I need the volume status output too. On Tue, Feb 13, 2018 at 1:56 PM, mabi <mabi at protonmail.ch> wrote: > Hi Hari, > Sorry for not providing you more details from the start. Here below you will > find all
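
A rough sketch of collecting what is being asked for here, assuming a default installation where the logs live under /var/log/glusterfs:

# capture the volume status and bundle the full log history for sending
$ sudo gluster volume status myvolume > volume-status.txt
$ sudo tar czf glusterfs-logs.tar.gz /var/log/glusterfs/
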
2018 Feb 24
0
Failed to get quota limits
Dear Hari, Thank you for getting back to me after having analysed the problem. As you said, I tried to run "gluster volume quota <VOLNAME> list <PATH>" for all of my directories which have a quota and found out that there was one directory quota which was missing (stale), as you can see below:

$ gluster volume quota myvolume list /demo.domain.tld
Path
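
A stale limit like this is typically cleared by removing it and setting it again so that its entry in quota.conf becomes valid; a sketch only, with 10GB as a placeholder value rather than what was actually configured:

# drop the stale limit, re-create it, then confirm it lists again
$ sudo gluster volume quota myvolume remove /demo.domain.tld
$ sudo gluster volume quota myvolume limit-usage /demo.domain.tld 10GB
$ sudo gluster volume quota myvolume list /demo.domain.tld
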
2018 Feb 27
0
Failed to get quota limits
Hi, Thanks for the link to the bug. We should hopefully be moving to 3.12 soon, so I guess this bug is also fixed there. Best regards, M. ------- Original Message ------- On February 27, 2018 9:38 AM, Hari Gowtham <hgowtham at redhat.com> wrote: > Hi Mabi, > > The bug is fixed in 3.11. For 3.10 it is yet to be backported and > made available. >
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote: > Now I understand what you mean by the "-samefile" parameter of > "find". As requested I have now run the following command on all 3 > nodes with the output of all 3 nodes below: > > sudo find /data/myvolume/brick -samefile > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -ls > >
2018 Feb 13
2
Failed to get quota limits
Thank you for your answer. This problem seems to have started last week, so should I also send you the same log files but for last week? I think logrotate rotates them on a weekly basis. The only two quota commands we use are the following:

gluster volume quota myvolume limit-usage /directory 10GB
gluster volume quota myvolume list

basically to set a new quota or to list the current
2018 Feb 13
2
Failed to get quota limits
Were you able to set new limits after seeing this error? On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham <hgowtham at redhat.com> wrote: > Yes, I need the log files from that period. The log files rotated after > hitting the issue aren't necessary, but the ones from before you hit the > issue are needed (not just from when you hit it, but from even before you hit it). > > Yes,
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly summarize my current situation: > > on node2 I have found the following xattrop/indices file which > matches the GFID from the "heal info" command (below is the output of > "ls -lai"): > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >
2018 Feb 13
2
Failed to get quota limits
Hi Hari, Sorry for not providing you more details from the start. Here below you will find all the relevant log entries and info. Regarding the quota.conf file I have found one for my volume but it is a binary file. Is it supposed to be binary or text? Regards, M.

*** gluster volume info myvolume ***
Volume Name: myvolume
Type: Replicate
Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5
Status:
2018 Feb 23
2
Failed to get quota limits
Hi, There is a bug in 3.10 which prevents the quota list command from producing any output if the last entry in the conf file is a stale entry. The workaround for this is to remove the stale entry at the end. (If the last two entries are stale then both have to be removed, and so on, until the last entry in the conf file is a valid entry.) This can be avoided by adding a new limit. As the new limit you
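
In other words, setting any fresh limit appends a valid entry to the end of quota.conf, after which list works again; a minimal sketch, with /sometestdir and 1GB as placeholder values:

# add a new limit so the last quota.conf entry is valid, then list again
$ sudo gluster volume quota myvolume limit-usage /sometestdir 1GB
$ sudo gluster volume quota myvolume list
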
2018 Feb 27
2
Failed to get quota limits
Hi Mabi, The bug is fixed in 3.11. For 3.10 it is yet to be backported and made available. The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259. On Sat, Feb 24, 2018 at 4:05 PM, mabi <mabi at protonmail.ch> wrote: > Dear Hari, > > Thank you for getting back to me after having analysed the problem. > > As you said, I tried to run "gluster volume quota
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean by the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes with the output of all 3 nodes below:

sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls

node1:
8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
Hi mabi, Some questions: - Did you by any chance change the cluster.quorum-type option from the default values? - Is filename.shareKey supposed to be an empty file? Looks like the file was fallocated with the keep-size option but never written to. (On the 2 data bricks, stat output shows Size = 0, but non-zero Blocks, and yet a 'regular empty file'.) - Do you have some sort of a
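
Both points can be checked directly on the cluster; a sketch, with VOLNAME and the brick path as placeholders since neither appears in this excerpt:

# show the effective quorum setting for the volume
$ sudo gluster volume get VOLNAME cluster.quorum-type
# compare size versus allocated blocks of the file on each data brick
$ sudo stat /path/to/brick/filename.shareKey
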
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation: on node2 I have found the following xattrop/indices file, which matches the GFID from the "heal info" command (below is the output of "ls -lai"):

2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397

As you can see this file has inode number 2798404, so I ran
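
A generic way to see what else links to that inode on the brick (not necessarily what was actually run next in the truncated message) is to search by inode number:

# list every path on the brick that shares inode 2798404
$ sudo find /data/myvolume/brick -inum 2798404 -ls
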