similar to: Online Rebalancing

Displaying 20 results from an estimated 2000 matches similar to: "Online Rebalancing"

2017 Dec 13
0
Online Rebalancing
On 13 December 2017 at 17:34, mohammad kashif <kashif.alig at gmail.com> wrote: > Hi > > I have a five node 300 TB distributed gluster volume with zero > replication. I am planning to add two more servers which will add around > 120 TB. After fixing the layout, can I rebalance the volume while clients > are online and accessing the data? > > Hi, Yes, you can. Are
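For context, the usual add-brick-then-rebalance sequence looks roughly like the sketch below; the volume name and brick paths are illustrative, and clients can stay mounted throughout.

    # add the bricks from the two new servers (names and paths are hypothetical)
    gluster volume add-brick atlasglust server6:/data/brick server7:/data/brick
    # recompute the directory layout so new files can be placed on the new bricks
    gluster volume rebalance atlasglust fix-layout start
    # migrate existing data; clients remain online while this runs
    gluster volume rebalance atlasglust start
    # check progress per node
    gluster volume rebalance atlasglust status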
2017 Jul 13
2
Rebalance task fails
Hi Nithya, I see index in this context: [2017-07-07 10:07:18.230202] E [MSGID: 106062] [glusterd-utils.c:7997:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index. I wonder if there is anything I can do to fix it. I was trying to strace the gluster process but still have no clue what exactly the gluster index is. Best regards, Szymon Miotk On Thu, Jul 13, 2017 at 10:12 AM, Nithya
2017 Jul 10
2
Rebalance task fails
Hi Nithya, the files were sent to you privately to avoid spamming the list with large attachments. Could someone explain what index is in Gluster? Unfortunately 'index' is a popular word, so googling is not very helpful. Best regards, Szymon Miotk On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at
2017 Jul 13
0
Rebalance task fails
Hi Szymon, I have received the files and will take a look and get back to you. In what context are you seeing index? Thanks, Nithya On 11 July 2017 at 01:15, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hi Nithya, > > the files were sent to priv to avoid spamming the list with large > attachments. > Could someone explain what is index in Gluster? > Unfortunately
2017 Jul 11
2
Extremely slow du
Hi Kashif, Thank you for your feedback! Do you have some data on the nature of performance improvement observed with 3.11 in the new setup? Adding Raghavendra and Poornima for validation of configuration and help with identifying why certain files disappeared from the mount point after enabling readdir-optimize. Regards, Vijay On 07/11/2017 11:06 AM, mohammad kashif wrote: > Hi Vijay and
2017 Jul 07
2
Rebalance task fails
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help. # gluster volume rebalance gsae_artifactory_cluster_storage start volume rebalance:
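When a rebalance aborts immediately like this, the per-node rebalance log is usually the first thing to check; a rough sketch, assuming the default log location:

    # rebalance log on each node (default glusterfs log directory)
    less /var/log/glusterfs/gsae_artifactory_cluster_storage-rebalance.log
    # overall rebalance and volume state as seen by glusterd
    gluster volume rebalance gsae_artifactory_cluster_storage status
    gluster volume status gsae_artifactory_cluster_storage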
2017 Jul 09
0
Rebalance task fails
On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hello everyone, > > > I have a problem rebalancing a Gluster volume. > The Gluster version is 3.7.3. > My 1x3 replicated volume became full, so I've added three more bricks > to make it 2x3 and wanted to rebalance. > But every time I start rebalancing, it fails immediately. > Rebooting the Gluster
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote: > Just so I know. > > Is it correct to assume that this corruption issue is ONLY involved if you > are doing rebalancing with sharding enabled. > > So if I am not doing rebalancing I should be fine? > That is correct. > -bill > > > > On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > >
2017 Jun 12
2
Extremely slow du
Hi Vijay I have enabled client profiling and used this script https://github.com/bengland2/gluster-profile-analysis/blob/master/gvp-client.sh to extract data. I am attaching output files. I don't have any reference data to compare with my output. Hopefully you can make some sense out of it. On Sat, Jun 10, 2017 at 10:47 AM, Vijay Bellur <vbellur at redhat.com> wrote: > Would it be
2017 Jun 18
1
Extremely slow du
Hi Mohammad, A lot of time is being spent in addressing metadata calls, as expected. Can you consider testing with 3.11, which has the md-cache [1] and readdirp [2] improvements? Adding Poornima and Raghavendra, who worked on these enhancements, to help out further. Thanks, Vijay [1] https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ [2] https://github.com/gluster/glusterfs/issues/166 On
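As a hedged sketch of what enabling the md-cache improvements from [1] typically involves (option names as documented for 3.9 and later, volume name illustrative):

    # upcall-based cache invalidation keeps cached metadata consistent
    gluster volume set atlasglust features.cache-invalidation on
    gluster volume set atlasglust features.cache-invalidation-timeout 600
    # let md-cache keep stat and xattr data longer on the client
    gluster volume set atlasglust performance.stat-prefetch on
    gluster volume set atlasglust performance.cache-invalidation on
    gluster volume set atlasglust performance.md-cache-timeout 600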
2017 Jun 16
0
Extremely slow du
Hi Vijay Did you manage to look into the gluster profile logs? Thanks Kashif On Mon, Jun 12, 2017 at 11:40 AM, mohammad kashif <kashif.alig at gmail.com> wrote: > Hi Vijay > > I have enabled client profiling and used this script > https://github.com/bengland2/gluster-profile-analysis/blob/master/gvp-client.sh to extract data. I am attaching output files. I >
2018 May 01
2
Usage monitoring per user
Hi Is there any easy way to find usage per user in Gluster? We have 300TB storage with almost 100 million files. Running du takes too much time. Are people aware of any other tool which can be used to break up storage per user? Thanks Kashif
2018 May 01
0
Usage monitoring per user
Hi, There are several programs that will basically take the outputs of your scans and store the results in a database. If you size the database appropriately, then querying that database will be much quicker than querying the filesystem. But of course the results will be a little bit outdated. One such project is robinhood. https://github.com/cea-hpc/robinhood/wiki A simpler way might be to
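As one illustration of the scan-then-aggregate idea (not necessarily what the poster goes on to suggest), a single pass over the mount can sum usage per owner; still slow on 100 million files, but cheaper than running du once per user:

    # walk the mount once and sum apparent file sizes per owner (GNU find)
    find /mnt/glusterfs -type f -printf '%u %s\n' \
      | awk '{u[$1] += $2} END {for (x in u) printf "%-16s %.1f GiB\n", x, u[x]/2^30}'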
2011 Apr 22
1
rebalancing after remove-brick
Hello, I'm having trouble migrating data from one removed replica set to another active one in a distributed-replicated volume. My test scenario is the following: - create set (A) - create a bunch of files on it - add another set (B) - rebalance (works fine) - remove-brick A - rebalance (doesn't rebalance - ran on one brick in each set) The doc seems to imply that it is possible to remove
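For reference, on releases that support it, data migration off the removed bricks is normally driven by remove-brick itself rather than by a separate rebalance; a hedged sketch with illustrative volume and brick names:

    # start migrating data off the bricks being removed
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick start
    # wait until the migration shows completed
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick status
    # only then detach the bricks permanently
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick commit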
2017 Jun 09
2
Extremely slow du
Hi Vijay Thanks for your quick response. I am using gluster 3.8.11 on CentOS 7 servers (glusterfs-3.8.11-1.el7.x86_64). Clients are CentOS 6, but I tested with a CentOS 7 client as well and the results didn't change. gluster volume info Volume Name: atlasglust Type: Distribute Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b Status: Started Snapshot Count: 0 Number of Bricks: 5 Transport-type: tcp
2017 Aug 30
2
Gluster status fails
Hi I am running a 400TB five node purely distributed gluster setup. I am troubleshooting an issue where file creation sometimes fails. I found that volume status is not working: gluster volume status Another transaction is in progress for atlasglust. Please try again after sometime. When I tried from another node, it seems two nodes have a locking issue: gluster volume status Locking failed
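The 'Another transaction is in progress' message usually points at a cluster-wide glusterd lock that was never released; a common (hedged) way to narrow it down is to check the glusterd log on each node and, once the node holding the stale lock is identified, restart only its management daemon:

    # glusterd log (named etc-glusterfs-glusterd.vol.log on older builds)
    grep -i lock /var/log/glusterfs/glusterd.log | tail
    # restarting glusterd releases its locks without touching brick processes
    systemctl restart glusterd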
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below is the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2017 Jun 10
0
Extremely slow du
Would it be possible for you to turn on client profiling and then run du? Instructions for turning on client profiling can be found at [1]. Providing the client profile information can help us figure out where the latency could be stemming from. Regards, Vijay [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling On Fri, Jun 9, 2017 at
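For reference, the client-side profiling steps in [1] boil down to two volume options plus an xattr-triggered dump; a sketch with an illustrative volume name, mount point and dump path:

    # enable latency measurement and FOP counting on the volume
    gluster volume set atlasglust diagnostics.latency-measurement on
    gluster volume set atlasglust diagnostics.count-fop-hits on
    # run the workload (e.g. du) on the client, then dump the io-stats counters
    setfattr -n trusted.io-stats-dump -v /tmp/gvp-client-dump.txt /mnt/glusterfs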
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below is the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2017 Jun 09
2
Extremely slow du
Hi I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part of a research institute, and users have files ranging from very small to large (a few KB to 20 GB). Our setup consists of 5 servers, each with 96TB of RAID 6 disks. All servers are connected through 10G ethernet, but not all clients are. Gluster volumes are distributed without any replication. There are approximately 80 million files in