Displaying 20 results from an estimated 20000 matches similar to: "How does Gluster distribute files?"
2017 Nov 14
1
glusterfs-fuse package update
Folks, I need to update all my glusterfs-fuse clients to the latest version.
Can I do this without a reboot?
If I stop the module then update the fuse client, would this suffice? Or do
I really need a reboot?
Thank You
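A rough outline of how such a client-only update is usually done, as a sketch only: it assumes an RPM-based client, a placeholder server name server1, a placeholder mount point /mnt/gv0, and that the mount can be taken offline briefly, since the new FUSE client code is only picked up on remount.
# On each client: stop whatever is using the mount, then unmount the volume
umount /mnt/gv0
# Update the client-side packages only; no reboot is needed for the FUSE client
yum update glusterfs glusterfs-fuse glusterfs-libs
# Remount so the new client code is actually loaded
mount -t glusterfs server1:/gv0 /mnt/gv0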
2017 Jun 29
0
Persistent storage for docker containers from a Gluster volume
Hi,
GlusterFS works fine for large files (in most cases it's used as a VM
image store). With Docker you'll generate a bunch of small files, and if
you want good performance you may want to look at [1] and [2].
Also, a two-node replica is a bit dangerous: under high load with small
files there is a real risk of a split-brain situation, so think
about an arbiter
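For completeness, the arbiter suggestion maps onto the standard volume commands; a sketch assuming placeholder hosts node1/node2, an arbiter host arb1, and a volume named gv0.
# New volume: two data bricks plus a small metadata-only arbiter brick
gluster volume create gv0 replica 3 arbiter 1 \
    node1:/bricks/b1 node2:/bricks/b1 arb1:/bricks/arb1
# Or add an arbiter brick to an existing replica 2 volume
gluster volume add-brick gv0 replica 3 arbiter 1 arb1:/bricks/arb1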
2017 Jun 29
2
Persistent storage for docker containers from a Gluster volume
On 28-Jun-2017 5:49 PM, "mabi" <mabi at protonmail.ch> wrote:
Anyone?
-------- Original Message --------
Subject: Persistent storage for docker containers from a Gluster volume
Local Time: June 25, 2017 6:38 PM
UTC Time: June 25, 2017 4:38 PM
From: mabi at protonmail.ch
To: Gluster Users <gluster-users at gluster.org>
Hello,
I have a two node replica 3.8 GlusterFS
2017 Jul 07
1
GlusterFS WORM hardlink
GlusterFS WORM hard links will not be created
OS is CentOS7
2013 Dec 12
3
Is Gluster the wrong solution for us?
We are about to abandon GlusterFS as a solution for our object storage needs. I'm hoping to get some feedback to tell me whether we have missed something and are making the wrong decision. We're already a year into this project after evaluating a number of solutions. I'd like not to abandon GlusterFS if we just misunderstand how it works.
Our use case is fairly straightforward.
2013 Nov 12
2
Expanding legacy gluster volumes
Hi there,
This is a hypothetical problem, not one that describes specific hardware
at the moment.
As we all know, gluster currently works best when each brick is
the same size and each host has the same number of bricks. Let's call
this a "homogeneous" configuration.
Suppose you buy the hardware to build such a pool. Two years go by, and
you want to grow the pool. Changes
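For reference, the mechanics of growing such a pool are the standard add-brick and rebalance steps; a sketch with placeholder host and brick names (it does not by itself answer the homogeneity question raised here).
# Probe the new hosts and add their bricks to the existing volume
gluster peer probe node3
gluster peer probe node4
gluster volume add-brick myvol node3:/bricks/b1 node4:/bricks/b1
# Redistribute existing files onto the new bricks
gluster volume rebalance myvol start
gluster volume rebalance myvol status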
2013 Feb 07
1
Gluster - data migration.
Hi All.
I have two servers ("master" and "slave") with a replicated gluster
volume. Recently I've had a problem with the slave, and gluster no
longer works on it.
So I would like to:
- stop and remove current volume on master (on slave it is not accessible);
- stop gluster software on master (already stopped on slave);
- remove gluster software on master and slave (previous
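The plan above maps roughly onto the following commands, as a sketch only: gv0 is a placeholder volume name, the package names assume an RPM-based install, and deleting a volume leaves the files on the brick apart from gluster's own metadata.
# On the master: stop and remove the volume definition
gluster volume stop gv0
gluster volume delete gv0
# Stop and remove the gluster software on the master (repeat on the slave once it is usable)
service glusterd stop
yum remove glusterfs-server glusterfs-fuse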
2012 Nov 14
3
Using local writes with gluster for temporary storage
Hi,
We have a cluster of 130 compute nodes with NAS-type
central storage under gluster (3 bricks, ~50TB). When we
run a large number of ocean models we can run into bottlenecks
with many jobs trying to write to our central storage.
It was suggested to us that we could also use gluster to
unite the disks on the compute nodes into a single "disk"
in which files would be written
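What was suggested amounts to building a plain distributed volume out of one local brick per compute node (note that DHT places each file by hashing its name, so writes are not necessarily local); a minimal sketch with placeholder node names and a hypothetical brick path /scratch/brick.
# After peer-probing the compute nodes, aggregate their local disks into one volume
gluster volume create scratch \
    node001:/scratch/brick node002:/scratch/brick node003:/scratch/brick
gluster volume start scratch
# Each compute node then mounts the combined scratch space
mount -t glusterfs node001:/scratch /mnt/scratch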
2012 Dec 11
4
Gluster machines slowing down over time
I have 2 gluster servers in replicated mode on EC2 with ~4G RAM.
CPU and RAM usage look fine, but over time the system becomes sluggish,
particularly networking.
I notice that sshing into the machine takes ages and running remote
commands with capistrano takes longer and longer.
Any kernel settings people typically use?
Thanks,
Tom
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hi,
Normally, when we shut down or reboot one of the (server) nodes, we call
the "stop-all-gluster-processes.sh" script. But I think you did that, right?
Best regards,
Hubert
Am Mo., 5. Feb. 2024 um 13:35 Uhr schrieb Anant Saraswat <
anant.saraswat at techblue.co.uk>:
> Hello Everyone,
>
> We have a replicated Gluster volume with three nodes, and we face a
>
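For anyone who finds this thread later: the script mentioned above ships with the glusterfs-server package (commonly installed under /usr/share/glusterfs/scripts/, though the exact path can vary by distribution) and is run on the node before shutting it down, roughly as follows.
# Cleanly stop bricks, self-heal daemons and other gluster processes before the reboot
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
# After the node is back, check that pending heals are draining (volname is a placeholder)
gluster volume heal volname info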
2011 Sep 07
1
Gluster on Clustered Storage
Hi ...,
How does one deploy Gluster on clustered storage, i.e. two server nodes
with a common storage array?
Say one has two server nodes connected to a common storage array that
exports 2 LUNs visible to both nodes.
[ Server 1 ]------[ Storage ]------[ Server 2 ]
Each server node mounts a single LUN, but in case one of the nodes fails, the
other node takes over the LUN.
[ Server 1
2021 Sep 27
1
Re: What types of volumes are supported in the latest version of Gluster?
2017 Jul 10
2
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
Hi,
is there a recommended way to upgrade a Gluster cluster
to a newer revision? I experienced filesystem corruption on several, but
not all, VMs (KVM, FUSE) stored on Gluster during the Gluster upgrade.
After upgrading one of the two nodes, I checked peer status and volume
heal info; everything seemed fine, so I upgraded the second node, and then
two VMs remounted root as read-only and dmesg
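For what it's worth, the usual precaution for a rolling upgrade of a replica volume is to wait until self-heal has fully finished on the first upgraded node before touching the second; a sketch, assuming a placeholder volume name gv0 and an RPM-based install.
# On the already-upgraded node, wait until both reports show zero entries
gluster volume heal gv0 info
gluster volume heal gv0 info split-brain
# Only then upgrade the packages on the second node
yum update glusterfs-server glusterfs-fuse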
2019 Jun 12
1
Proper command for replace-brick on distribute–replicate?
On 12/06/19 1:38 PM, Alan Orth wrote:
> Dear Ravi,
>
> Thanks for the confirmation. I replaced a brick in a volume last night
> and by the morning I see that Gluster has replicated data there,
> though I don't have any indication of its progress. The `gluster v
> heal volume info` and `gluster v heal volume info split-brain` are all
> looking good so I guess that's
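For the archives, on current releases the supported form for a replicate or distribute-replicate volume is replace-brick with commit force, after which self-heal populates the new brick; a sketch with placeholder volume and brick names.
# Replace the failed brick with a new, empty one; self-heal then copies the data across
gluster volume replace-brick myvol \
    oldhost:/bricks/b1 newhost:/bricks/b1 commit force
# Progress can only be judged indirectly, via the pending heal entries
gluster volume heal myvol info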
2017 Jul 17
1
Gluster set brick online and start sync.
Hello everybody,
Please help me to fix a problem.
I have a distributed-replicated volume between two servers. On each
server I have 2 RAID-10 arrays that are replicated between the servers.
Gluster process                            TCP Port  RDMA Port  Online  Pid
Brick gl1:/mnt/brick1/gm0                  49153     0          Y       13910
Brick gl0:/mnt/brick0/gm0                  N/A       N/A        N       N/A
Brick gl0:/mnt/brick1/gm0                  N/A
2017 Jul 31
3
gluster volume 3.10.4 hangs
Hi folks,
I'm running a simple gluster setup with a single volume replicated across two servers, as follows:
Volume Name: gv0
Type: Replicate
Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: sst0:/var/glusterfs
Brick2: sst2:/var/glusterfs
Options Reconfigured:
cluster.self-heal-daemon: enable
2011 May 09
1
Gluster text file configuration information?
Where can I find documentation about manual configuration of Gluster
peers/volumes? All documentation seems to be about the gluster CLI. I
would prefer manual configuration to facilitate automation via scripts
(e.g. Puppet/Chef).
I also read in this list that it is possible to configure Raid10 via
text files... I would also like to experiment with this setup. Any
related documents on how to do
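As a partial answer to questions like this: glusterd stores its state as plain text under /var/lib/glusterd, which is what configuration-management tools usually read, although editing these files by hand rather than using the CLI is generally discouraged. For example (volname is a placeholder):
# Peer definitions, one file per peer
ls /var/lib/glusterd/peers/
# Generated volume definitions and translator graphs
ls /var/lib/glusterd/vols/volname/
cat /var/lib/glusterd/vols/volname/info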
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hello Everyone,
We have a replicated Gluster volume with three nodes, and we face a strange issue whenever we need to restart one of the nodes in this cluster.
As per my understanding, if we shut down one node, the Gluster mount should smoothly fail over to one of the remaining Gluster servers and shouldn't create any issues.
In our setup, when we stop Gluster on any of the nodes, we mostly get
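One client-side detail that often matters in threads like this: the FUSE mount only uses the named server to fetch the volume file, and backup volfile servers can be listed so the mount still comes up when that node is down; a sketch with placeholder names.
# Fetch the volfile from node1, falling back to node2 or node3 if node1 is unreachable
mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/myvol /mnt/myvol
# Equivalent /etc/fstab entry
# node1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0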
2017 Oct 30
3
Gluster Scale Limitations
Hi all,
Are there any scale limitations in terms of how many nodes can be in a single Gluster cluster, or how much storage capacity can be managed in a single cluster? What are some of the large deployments out there that you know of?
Thanks,
Mayur
2017 Aug 09
1
Gluster performance with VMs
Hi community,
Please help me with my trouble.
I have 2 Gluster nodes, with 2 bricks on each.
Configuration:
Node1 brick1 replicated on Node0 brick0
Node0 brick1 replicated on Node1 brick0
Volume Name: gm0
Type: Distributed-Replicate
Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: