similar to: New release of Gluster?

Displaying 20 results from an estimated 10000 matches similar to: "New release of Gluster?"

2012 Apr 19
2
Gluster 3.2.6 for XenServer
Hi, I have Gluster 3.2.6 RPMs for Citrix XenServer 6.0. I've installed them and mounted exports, but that's where I stopped. My issues are: 1. XenServer mounts the NFS server's SR subdirectory, not the export. Gluster won't do that. -- I can, apparently, mount the gluster export somewhere else, and then 'mount --bind' the subdir to the right place 2. I don't really know
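A minimal sketch of the bind-mount workaround mentioned above, assuming a hypothetical volume name, subdirectory, and SR mount path (all would need to match the actual XenServer SR UUID and Gluster export):

    # Mount the Gluster volume somewhere out of the way (hypothetical names and paths)
    mkdir -p /mnt/glustervol
    mount -t glusterfs localhost:/vmstore /mnt/glustervol

    # Bind the subdirectory XenServer expects into the SR mount point
    # (shown here as /var/run/sr-mount/<SR-UUID>; adjust to your host)
    mount --bind /mnt/glustervol/sr-subdir /var/run/sr-mount/<SR-UUID>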
2011 Oct 23
2
GlusterFS over lessfs/opendedupe
Hi, I'm currently running GlusterFS over XFS, and it works quite well. I'm wondering if it's possible to add data deduplication into the mix by: glusterfs --> lessfs --> xfs or glusterfs --> opendedupe --> xfs Has anybody tried doing this? We're running VM images on gluster, and I figure we could get a bit of space saving by deduplicating the data. Gerald
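If the deduplicating filesystem were already mounted, layering Gluster on top would just be a matter of placing the bricks on it. A rough sketch, assuming lessfs (or opendedup) is already mounted at /mnt/dedup on each server; hostnames, volume name, and paths are all hypothetical:

    # Bricks live on the deduplicated mount instead of raw XFS
    gluster volume create vmstore replica 2 \
        server1:/mnt/dedup/brick1 server2:/mnt/dedup/brick1
    gluster volume start vmstore

Whether FUSE-on-FUSE stacking performs acceptably for VM images is exactly the open question in the post above.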
2013 Dec 05
2
Ubuntu GlusterFS in Production
Hi, Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking at using the NFS portion of it over a bonded interface. I believe I'll get better speed than using the gluster client across a single interface. Setup: 3 servers running KVM (about 24 VMs) 2 NAS boxes running Ubuntu (13.04 and 13.10) Since Gluster NFS does server side replication, I'll put
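For reference, Gluster's built-in NFS server speaks NFSv3 over TCP, so mounting the volume from a KVM host would look roughly like this (hostname and volume name are placeholders):

    # /etc/fstab entry on a KVM host, mounting the volume via Gluster NFS
    nas1:/datavol  /mnt/datavol  nfs  defaults,vers=3,proto=tcp,_netdev  0 0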
2012 Jan 05
1
Can't stop or delete volume
Hi, I can't stop or delete a replica volume: # gluster volume info Volume Name: sync1 Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: thinkpad:/gluster/export Brick2: quad:/raid/gluster/export # gluster volume stop sync1 Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y Volume sync1 does not exist # gluster volume
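Not from the original thread, but when the CLI reports "Volume ... does not exist" for a volume that `gluster volume info` still lists, a common first step is to confirm the peers agree on cluster state and restart the management daemon before retrying; a sketch:

    # Check that all peers agree on cluster state
    gluster peer status

    # Restart the management daemon on each node, then retry
    service glusterd restart
    gluster volume stop sync1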
2018 Jan 17
1
Gluster endless heal
Hi, I have an issue with Gluster 3.8.14. The cluster is 4 nodes with replica count 2; one of the nodes went offline for around 15 minutes, and when it came back online, self heal triggered and just did not stop afterward. It's been running for 3 days now, maxing the bricks' utilization without actually healing anything. The bricks are all SSDs, and the logs of the source node are spamming with
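Not part of the original message, but the usual way to check whether the self-heal daemon is actually making progress is the heal-info CLI; a sketch with a placeholder volume name:

    # List entries still pending heal, and a running count per brick
    gluster volume heal myvol info
    gluster volume heal myvol statistics heal-count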
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume, on two servers. The servers are: blade6 and blade7. (another blade1 in the peer, but with no volumes) The volume seems ok, but I cannot mount it from NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1 [root@blade7 stor1]# df -h /dev/mapper/gluster_fast
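To see why AFR considers '/' split-brained, the trusted.afr.* xattrs on the brick root of both servers are usually the first thing to compare; a sketch, assuming the bricks sit directly under the mount points shown in the df output (actual brick paths may differ):

    # Run on blade6 and blade7 and compare the trusted.afr.* values
    getfattr -d -m . -e hex /gluster/stor1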
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional digging myself: azathoth replicates with yog-sothoth, so I compared their brick directories. `ls -R /var/local/brick0/data | md5sum` gives the same result on both servers, so the filenames are identical in both bricks. However, `du -s /var/local/brick0/data` shows that azathoth has about 3G more data (445G vs 442G) than
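One way to narrow down where the extra ~3G lives is to compare per-file apparent sizes rather than allocated blocks, since sparse VM images can legitimately differ in `du` output between replicas; a sketch (brick path taken from the post, output file names are hypothetical):

    # On each server: list apparent size and path for every file
    find /var/local/brick0/data -type f -printf '%s %p\n' | sort -k2 > /tmp/brick-files.txt

    # Copy one list to the other host and compare
    diff /tmp/brick-files.txt /tmp/brick-files.from-other-server.txt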
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
I'm using gluster for a virt-store with 3x2 distributed/replicated servers for 16 qemu/kvm/libvirt virtual machines using image files stored in gluster and accessed via libgfapi. Eight of these disk images are standalone, while the other eight are qcow2 images which all share a single backing file. For the most part, this is all working very well. However, one of the gluster servers
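Since eight of the images share a single qcow2 backing file, verifying that the backing chain is intact on the Gluster mount is worth doing after any failover; a sketch with a hypothetical image path:

    # Show the full backing-file chain of one of the qcow2 images
    qemu-img info --backing-chain /mnt/gluster/images/vm01.qcow2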
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi, I have the following volume: Volume Name: virt_images Type: Replicate Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594 Status: Started Snapshot Count: 2 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: virt3:/data/virt_images/brick Brick2: virt2:/data/virt_images/brick Brick3: printserver:/data/virt_images/brick (arbiter) Options Reconfigured: features.quota-deem-statfs:
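The CLI's own view of split-brain state for this volume comes from heal info; a short check using the volume name above:

    # Files Gluster itself has flagged as split-brained
    # (may be empty even when clients are returning EIO)
    gluster volume heal virt_images info split-brain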
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html It should be pretty much the same for replica 3; you change the xattrs with something like: # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a When I try to decide which
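On 3.7/3.8 and later there is also a CLI alternative to editing the afr xattrs by hand: heal a specific file by picking a policy or a source brick. A sketch against the volume from this thread; the file path inside the volume is a placeholder:

    # Resolve one file by newest mtime
    gluster volume heal virt_images split-brain latest-mtime /path/inside/volume/disk.img

    # ...or by declaring one brick authoritative for that file
    gluster volume heal virt_images split-brain source-brick virt2:/data/virt_images/brick /path/inside/volume/disk.img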
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Hi, Have you checked for any file system errors on the brick mount point? I was once facing weird I/O errors and xfs_repair fixed the issue. What about the heal? Does it report any pending heals? On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote: > Well, it looks like I've stumped the list, so I did a bit of additional > digging myself: > >
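For reference, an xfs_repair pass is run against the unmounted block device backing the brick; a rough sketch with a hypothetical device and the brick mount from this thread:

    # Take the brick out of service first (stop the brick process or the volume as appropriate)
    umount /var/local/brick0

    # Dry run: report problems without modifying anything
    xfs_repair -n /dev/sdb1

    # Actual repair, then remount and let self-heal catch up
    xfs_repair /dev/sdb1
    mount /var/local/brick0
    gluster volume heal <volname> info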
2017 Jun 28
0
Gluster volume not mounted
The mount log file of the volume would help in debugging the actual cause. On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz <mrjoeldiaz at gmail.com> wrote: > Good morning Gluster users, > > I'm very new to the Gluster file system. My apologies if this is not the > correct way to seek assistance. However, I would appreciate some insight > into understanding the issue I have.
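The client mount log lives under /var/log/glusterfs/, named after the mount point with slashes replaced by dashes; for example, for a hypothetical mount at /mnt/data:

    less /var/log/glusterfs/mnt-data.log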
2017 Jun 27
2
Gluster volume not mounted
Good morning Gluster users, I'm very new to the Gluster file system. My apologies if this is not the correct way to seek assistance. However, I would appreciate some insight into understanding the issue I have. I have three nodes running two volumes, engine and data. The third node is the arbiter on both volumes. Both volumes were operating fine but one of the volumes, data, no longer
2018 Apr 30
0
Gluster rebalance taking many years
Hi, This value is an ongoing rough estimate based on the amount of data rebalance has migrated since it started. The values will change as the rebalance progresses. A few questions: 1. How many files/dirs do you have on this volume? 2. What is the average size of the files? 3. What is the total size of the data on the volume? Can you send us the rebalance log? Thanks, Nithya On 30
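The estimate being discussed comes from the rebalance status output, which can be re-checked as the run progresses; a sketch with a placeholder volume name:

    # Per-node scanned/migrated counts and the estimated time remaining
    gluster volume rebalance <volname> status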
2017 Sep 22
2
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hi, thanks for the suggestions. Yes, "gluster peer probe node3" will be the first command in order to discover the 3rd node by Gluster. I am running on the latest 3.7.x - there is 3.7.6-1ubuntu1 installed, and the latest 3.7.x according to https://packages.ubuntu.com/xenial/glusterfs-server is 3.7.6-1ubuntu1, so this should be OK. > If you are *not* on
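Not from the original post, but the usual command sequence for growing a replica 2 volume to replica 3 once the new peer is probed looks roughly like this (volume name and brick path are placeholders):

    gluster peer probe node3

    # Convert the volume to replica 3 by adding the new brick
    gluster volume add-brick <volname> replica 3 node3:/srv/gluster/brick

    # Let self-heal populate the new brick
    gluster volume heal <volname> full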
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:14 AM, Artem Russakovskii wrote: > Following up here on a related issue that is very serious for us. > > I took down one of the 4 replicate gluster servers for maintenance > today. There are 2 gluster volumes totaling about 600GB. Not that much > data. After the server comes back online, it starts auto healing and > pretty much all operations on gluster freeze for
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:35 AM, Artem Russakovskii wrote: > Hi Ravi, > > Could you please expand on how these would help? > > By forcing full here, we move the logic from the CPU to network, thus > decreasing CPU utilization, is that right? Yes, 'diff' employs the rchecksum FOP which does a sha256 checksum, which can consume CPU. So yes, it is sort of shifting the load from CPU
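The toggle being discussed is an ordinary volume option; switching to the 'full' algorithm to trade checksum CPU for network transfer would look like this (volume name is a placeholder):

    gluster volume set <volname> cluster.data-self-heal-algorithm full

    # Verify the current value
    gluster volume get <volname> cluster.data-self-heal-algorithm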
2018 Apr 18
1
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 11:59 AM, Artem Russakovskii wrote: > Btw, I've now noticed at least 5 variations in toggling binary option > values. Are they all interchangeable, or will using the wrong value > not work in some cases? > > yes/no > true/false > True/False > on/off > enable/disable > > It's quite a confusing/inconsistent practice, especially given that
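Whichever spelling is used, the value the daemon actually accepted can always be read back with 'volume get'; a sketch with a placeholder volume and a boolean option:

    gluster volume set <volname> performance.quick-read off
    gluster volume get <volname> performance.quick-read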
2018 Apr 30
2
Gluster rebalance taking many years
2017 Jun 30
0
Multi petabyte gluster
Did you test healing by increasing disperse.shd-max-threads? What are your heal times per brick now? On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the > rebuild times are bottlenecked by matrix operations which scale as the square > of the number of data stripes. There are
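The tunable referred to above is set per volume; a sketch with a placeholder volume name (the useful value depends on available CPU and disk headroom):

    # Raise the number of parallel self-heal threads on disperse volumes
    gluster volume set <volname> disperse.shd-max-threads 4
    gluster volume get <volname> disperse.shd-max-threads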