similar to: Is Gluster the wrong solution for us?

Displaying 20 results from an estimated 11000 matches similar to: "Is Gluster the wrong solution for us?"

2013 Jan 26
4
Write failure on distributed volume with free space available
Hello, Thanks to "partner" on IRC who told me about this (quite big) problem. Apparently, in a distributed setup, once a brick fills up you start getting write failures. Is there a way to work around this? I would have thought gluster would check for free space before writing to a brick. It's very easy to test: I created a distributed volume from 2 uneven bricks and started to
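A workaround that is often suggested (hedged: behaviour varies between Gluster releases) is the cluster.min-free-disk volume option, which tells the distribute translator to avoid placing new files on bricks that have dropped below a free-space threshold. VOLNAME is a placeholder:

    # steer new files away from bricks with less than 10% free space
    gluster volume set VOLNAME cluster.min-free-disk 10%
    # confirm the option is set
    gluster volume info VOLNAME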
2012 Nov 14
3
Using local writes with gluster for temporary storage
Hi, We have a cluster with 130 compute nodes with a NAS-type central storage under gluster (3 bricks, ~50TB). When we run a large number of ocean models we can run into bottlenecks with many jobs trying to write to our central storage. It was suggested to us that we could also use gluster to unite the disks on the compute nodes into a single "disk" in which files would be written
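A minimal sketch of that suggestion, assuming hypothetical host names and a local brick directory on each compute node:

    # plain distributed volume built from the compute nodes' local disks
    gluster volume create scratch transport tcp \
        node001:/local/brick node002:/local/brick node003:/local/brick
    gluster volume start scratch
    # mount the scratch volume on every compute node
    mount -t glusterfs node001:/scratch /mnt/scratch

Note that a plain distributed volume has no redundancy: losing one compute node's disk loses the files placed on it, which is usually acceptable only for temporary or scratch data.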
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts, We're running glusterfs 3.3 and we have run into file permission problems after gluster volume rebalance. Files got sticky permissions T--------- after rebalance, which breaks our clients' normal fops unexpectedly. Has anyone seen this issue? Thank you for your help.
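Files with mode T--------- (01000, zero length) are normally DHT link files that point at the brick actually holding the data; they only cause trouble when the pointer xattr is missing or stale. A hedged way to inspect them on a brick (the brick path is an assumption):

    # list zero-length files with only the sticky bit set
    find /data/brick -type f -perm 1000 -size 0
    # a healthy link file carries an xattr naming the subvolume that holds the data
    getfattr -n trusted.glusterfs.dht.linkto -e text /data/brick/path/to/file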
2013 Dec 06
2
How reliable is XFS under Gluster?
Hello, I am at the point of picking a FS for new brick nodes. I had been happy using ext4 until now, but I recently read about an issue introduced by a patch in ext4 that breaks the distributed translator. At the same time, it looks like the recommended FS for a brick is no longer ext4 but XFS, which apparently will also be the default FS in the upcoming Red Hat 7. On the other hand, XFS is being
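For reference, the brick format commonly recommended in Gluster documentation of that era looks roughly like this (device and mount point are assumptions):

    # 512-byte inodes leave room for Gluster's extended attributes
    mkfs.xfs -i size=512 /dev/sdb1
    mount -o inode64,noatime /dev/sdb1 /export/brick1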
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all, I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6T. We would like to expand the storage capacity much more, but first we need to figure this problem out. Soon after loading up about 100 MB of small files (about 300kb each), the
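Options that are often suggested for slow directory listings (hedged: option names and availability depend on the Gluster release, and the effect varies with the workload):

    # let DHT skip redundant readdir work
    gluster volume set VOLNAME cluster.readdir-optimize on
    # larger client-side cache for small-file metadata
    gluster volume set VOLNAME performance.cache-size 256MB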
2013 May 10
2
Self-heal and high load
Hi all, I'm pretty new to Gluster, and the company I work for uses it for storage across 2 data centres. An issue has cropped up fairly recently with regards to the self-heal mechanism. Occasionally the connection between these 2 Gluster servers breaks or drops momentarily. Due to the nature of the business it's highly likely that files have been written during this time. When the
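Commands commonly used to see what self-heal is doing after such a drop (VOLNAME is a placeholder; the statistics subcommand exists only in newer releases):

    # files still pending heal
    gluster volume heal VOLNAME info
    # summary counters per heal crawl (newer releases)
    gluster volume heal VOLNAME statistics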
2013 Nov 12
2
Expanding legacy gluster volumes
Hi there, This is a hypothetical problem, not one that describes specific hardware at the moment. As we all know, gluster currently tends to work best when each brick is the same size and each host has the same number of bricks. Let's call this a "homogeneous" configuration. Suppose you buy the hardware to build such a pool. Two years go by, and you want to grow the pool. Changes
2013 Nov 01
1
Gluster "Cheat Sheet"
Greetings, One of the best things I've seen at conferences this year has been a bookmark distributed by the RDO folks with the most common and/or useful commands for OpenStack users. Some people at Red Hat were wondering about doing the same for Gluster, and I thought it would be a great idea. Paul Cuzner, the author of the gluster-deploy project, took a first cut, pasted below. What do you
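The first cut itself is elided in this snippet, but here is a hedged sketch of the kind of commands such a bookmark usually covers (volume and server names are placeholders):

    gluster peer probe server2              # add a node to the trusted pool
    gluster peer status                     # list pool members
    gluster volume create vol01 replica 2 server1:/brick server2:/brick
    gluster volume start vol01
    gluster volume info vol01               # configuration and options
    gluster volume status vol01             # per-brick process and port status
    gluster volume heal vol01 info          # pending self-heals (replicated volumes)
    gluster volume rebalance vol01 start    # redistribute data after add-brick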
2012 Nov 14
2
Avoid Split-brain and other stuff
Hi! I just gave GlusterFS a try and experienced two problems. First some background: - I want to set up a file server with synchronous replication between branch offices, similar to Windows DFS-Replication. The goal is _not_ high-availability or cluster-scaleout, but just having all files locally available at each branch office. - To test GlusterFS, I installed two virtual machines
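A minimal sketch of the kind of volume described, assuming one brick per office (host names and paths are invented); the quorum option reduces, but does not eliminate, the split-brain window when the inter-office link drops:

    gluster volume create office-share replica 2 \
        office1:/export/gluster/brick office2:/export/gluster/brick
    gluster volume start office-share
    gluster volume set office-share cluster.quorum-type auto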
2012 Dec 13
1
Rebalance may never finish, Gluster 3.2.6
Hi Guys, I have a rebalance that is going so slow it may never end. Particulars on the system: 3 nodes, 6 bricks, ~55TB, about 10% full. The use of the data is very active during the day and less so at night. All are CentOS 6.3, x86_64, Gluster 3.2.6 [root@node01 ~]# gluster volume rebalance data01 status rebalance step 2: data migration in progress: rebalanced 1378203 files of size 308570266988
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete information than I did the first time around. The full rebalance log from the machine where I started the rebalance can be found at the following link. It is slightly redacted - one search/replace was made to replace an identifying word with REDACTED. https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2012 Dec 18
2
Gluster and public/private LAN
I have an idea I'd like to run past everyone. Every gluster peer would have two NICs - one "public" and the other "private" - with different IP subnets. The proposal is for every gluster peer to have all private peer addresses in /etc/hosts, while the public addresses would be in DNS. Clients would use DNS. The goal is to have all peer-to-peer
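A sketch of the split being proposed (addresses and host names are invented for illustration):

    # /etc/hosts on every Gluster peer: names resolve to the private storage network
    10.0.0.1   gluster01
    10.0.0.2   gluster02

    # DNS, used by the clients, resolves the same names to the public addresses,
    # e.g. gluster01 -> 192.0.2.1 and gluster02 -> 192.0.2.2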
2012 Jun 28
1
Rebalance failures
I am messing around with gluster management and I've added a couple of bricks and run a rebalance, first fix-layout and then migrate-data. When I do this I seem to get a lot of failures: gluster> volume rebalance MAIL status Node Rebalanced-files size scanned failures status --------- -----------
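The per-file reasons for those failures are not shown in the status output; they are logged on each brick node. A hedged way to dig them out (log path follows the usual /var/log/glusterfs layout; the volume name is taken from the snippet):

    # failure counters per node
    gluster volume rebalance MAIL status
    # error-level messages from the rebalance process on this node
    grep -i " E " /var/log/glusterfs/MAIL-rebalance.log | tail -50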
2017 Dec 28
1
Adding larger bricks to an existing volume
I have a 10x2 distributed-replica volume running gluster 3.8. Each of my bricks is about 60TB in size (6TB drives, RAID 6, 10+2). I am running out of storage, so I intend to add servers with larger 8TB drives. My new bricks will be 80TB in size. I will make sure each replica pair of the larger bricks matches in size. Will gluster place more files on the larger bricks? Or will I have wasted space? In
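By default DHT hashes file names across bricks without regard to capacity, but releases in the 3.7/3.8 range have a weighting option that is meant to place proportionally more files on larger bricks during rebalance (hedged: check the option and its default on your exact version):

    gluster volume set VOLNAME cluster.weighted-rebalance on
    gluster volume rebalance VOLNAME start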
2013 Apr 29
1
Replicated and Non Replicated Bricks on Same Partition
Gluster-Users, We currently have a 30-node Gluster Distributed-Replicate 15 x 2 filesystem. Each node has a ~20TB XFS filesystem mounted at /data, and the bricks live on /data/brick. We have been very happy with this setup, but are now collecting more data that doesn't need to be replicated because it can be easily regenerated. Most of the data lives on our replicated volume and is
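One way this is commonly done, sketched under the assumption that a sibling directory on the same /data filesystem holds the second volume's bricks (host names are invented):

    # plain distributed volume for the regenerable data, bricks beside the existing ones
    gluster volume create scratch \
        node01:/data/brick-scratch node02:/data/brick-scratch node03:/data/brick-scratch
    gluster volume start scratch

The trade-off is that both volumes then draw from the same 20TB per node, so free space reported for one volume can be consumed by the other.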
2013 Mar 16
1
different size of nodes
Hi all, There is a distributed cluster with 5 bricks:

gl0
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda4    5.5T  4.1T  1.5T   75%   /mnt/brick1
gl1
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda4    5.5T  4.3T  1.3T   78%   /mnt/brick1
gl2
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda4    5.5T  4.1T  1.4T   76%   /mnt/brick1
gl3
Filesystem   Size  Used
2018 Apr 23
4
Problems since 3.12.7: invisible files, strange rebalance size, setxattr failed during rebalance and broken unix rights
Hi, after 2 years running GlusterFS without major problems we're facing some strange errors lately. After updating to 3.12.7 some users reported at least 4 broken directories with some invisible files. The files are present on the bricks and don't start with a dot, but aren't visible in "ls". Clients can still interact with them by using the explicit path. More information:
2017 Jul 07
2
Rebalance task fails
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help. # gluster volume rebalance gsae_artifactory_cluster_storage start volume rebalance:
2011 Apr 22
1
rebalancing after remove-brick
Hello, I'm having trouble migrating data from 1 removed replica set to another active one in a dist replicated volume. My test scenario is the following:
- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance - ran on one brick in each set)
The doc seems to imply that it is possible to remove
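For what it's worth, later Gluster releases migrate data as part of remove-brick itself rather than relying on a follow-up rebalance; a hedged sketch of that workflow (volume and brick names are assumptions):

    # start draining the bricks being removed; their files migrate to the remaining subvolumes
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick start
    # poll until the operation reports completed, then finalise
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick status
    gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick commit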
2017 Jul 10
2
Rebalance task fails
Hi Nithya, the files were sent off-list to avoid spamming the list with large attachments. Could someone explain what the "index" is in Gluster? Unfortunately "index" is a popular word, so googling is not very helpful. Best regards, Szymon Miotk On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at