2017 Nov 15
0
Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote:
>
>
>
> On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>>
>> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>>
>>> Could I just remove the content of the brick (including the
>>> .glusterfs directory) and reconnect ?
>>>
>>
If it is only the brick that is faulty on the bad node,
2017 Nov 16
2
Help with reconnecting a faulty brick
On 15/11/2017 at 09:45, Ravishankar N wrote:
> If it is only the brick that is faulty on the bad node, but everything
> else is fine, like glusterd running, the node being a part of the
> trusted storage pool etc., you could just kill the brick first and do
> step-13 in "10.6.2. Replacing a Host Machine with the Same Hostname",
> (the mkdir of non-existent dir,
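(For reference, that step boils down to something like the sketch below, assuming the volume is called vmstore, the emptied brick is back in place, and a client has the volume fuse-mounted at /mnt/vmstore; all names are examples only:)
# from a client mount, create/remove a dummy entry and xattr so heal is marked
# pending towards the replaced brick
mkdir /mnt/vmstore/nonexistent-dir
rmdir /mnt/vmstore/nonexistent-dir
setfattr -n trusted.non-existent-key -v abc /mnt/vmstore
setfattr -x trusted.non-existent-key /mnt/vmstore
# bring the killed brick back up and let self-heal repopulate it
gluster volume start vmstore force
gluster volume heal vmstore full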
2017 Oct 11
5
gluster volume + lvm : recommendation or necessity ?
Hi everyone,
I've read in the gluster & redhat documentation that it seems
recommended to use XFS over LVM before creating & using gluster volumes.
Sources :
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
My point is :
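(For context, the brick preparation those two documents recommend boils down to XFS with 512-byte inodes on an LVM logical volume; the device name and mount point below are examples only:)
# format the brick LV as XFS with 512-byte inodes and mount it
mkfs.xfs -f -i size=512 /dev/vg_gluster/brick1
mkdir -p /data/glusterfs/brick1
mount -o inode64,noatime /dev/vg_gluster/brick1 /data/glusterfs/brick1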
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
On 10/11/2017 09:50 AM, ML wrote:
> Hi everyone,
>
> I've read in the gluster & redhat documentation that it seems recommended to
> use XFS over LVM before creating & using gluster volumes.
>
> Sources :
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>
>
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
On 10/11/2017 12:20 PM, ML wrote:
> Hi everyone,
>
> I've read in the gluster & redhat documentation that it seems
> recommended to use XFS over LVM before creating & using gluster volumes.
>
> Sources :
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>
>
2017 Nov 16
0
Help with reconnecting a faulty brick
On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
> On 15/11/2017 at 09:45, Ravishankar N wrote:
>> If it is only the brick that is faulty on the bad node, but
>> everything else is fine, like glusterd running, the node being a part
>> of the trusted storage pool etc., you could just kill the brick first
>> and do step-13 in "10.6.2. Replacing a Host Machine with
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'.
As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path.
I have followed the configuration steps as documented in
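(A workaround that comes up regularly for the '/nonexistent/gsyncd' reference is to point the session at the actual gsyncd binary on the slave; the path below is the usual RPM location and may differ per distribution, and the volume/host names are placeholders:)
# check where gsyncd actually lives on the slave, then set it in the session config
ls /usr/libexec/glusterfs/gsyncd
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config remote-gsyncd /usr/libexec/glusterfs/gsyncd
# restart the session so the new path is picked up
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start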
2017 Jun 29
0
Persistent storage for docker containers from a Gluster volume
Hi,
glusterFS works fine for large files (in most cases it's
used as a VM image store), but with docker you'll generate a bunch of small
files, and if you want good performance maybe look at [1]
and [2].
Also, a two node replica is a bit dangerous: under high load with small
files there is a good risk of a split brain situation, so think
about an arbiter
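(For what it's worth, an arbiter volume is created like this; the arbiter brick only holds metadata, so the third node can be small. Hostnames, paths and the volume name are placeholders:)
# replica 3 with the third brick acting as a metadata-only arbiter
gluster volume create dockervol replica 3 arbiter 1 \
    node1:/bricks/dockervol node2:/bricks/dockervol node3:/bricks/dockervol-arbiter
gluster volume start dockervol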
2017 Oct 11
2
gluster volume + lvm : recommendation or necessity ?
Thanks Rafi, that's understood now :)
I'm considering deploying gluster on 4 x 40 TB bricks; do you think
it would be better to make one LVM partition for each volume I need or to
make one big LVM partition and start multiple volumes on it?
We'll store mostly big files (videos) in this environment.
On 11/10/2017 at 09:34, Mohammed Rafi K C wrote:
>
> On 10/11/2017 12:20
2017 Nov 15
2
Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>
> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>
>> Could I just remove the content of the brick (including the
>> .glusterfs directory) and reconnect ?
>>
>
> In fact, what would be the difference between reconnecting the brick
> with a wiped FS, and using
>
> gluster volume remove-brick vmstore
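(For comparison, the remove-brick / add-brick route being referred to would look roughly like this on a replica 2 volume, assuming the faulty brick sits on serverB; the brick directory must be empty, with no leftover .glusterfs or volume-id xattr, before re-adding:)
# drop the faulty brick, temporarily reducing the volume to replica 1
gluster volume remove-brick vmstore replica 1 serverB:/bricks/vmstore force
# re-add a clean, empty brick and return to replica 2, then let self-heal copy the data
gluster volume add-brick vmstore replica 2 serverB:/bricks/vmstore
gluster volume heal vmstore full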
2017 Oct 11
1
gluster volume + lvm : recommendation or necessity ?
After some extra reading about LVM snapshots & Gluster, I think I can
conclude it may be a bad idea to use them on big storage bricks.
I understood that the maximum size of the LVM metadata, used to store the
snapshot data, is about 16GB.
So if I have a brick with a volume of around 10TB (for example), daily
snapshots, and files changing by ~100GB: the LVM snapshot is useless.
LVM snapshots don't
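(If you want to see how close a thin pool's metadata actually gets to that limit, lvs can report it; the volume group name is a placeholder:)
# data and metadata usage of the thin pools backing the bricks
lvs -a -o name,size,data_percent,metadata_percent vg_bricks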
2018 Jan 24
1
Split brain directory
Hello,
I'm trying to fix an issue with a directory split brain on gluster 3.10.3. The
effect is that a specific file in this split directory is randomly
unavailable on some clients.
I have gathered all the information in this gist:
https://gist.githubusercontent.com/lucagervasi/534e0024d349933eef44615fa8a5c374/raw/52ff8dd6a9cc8ba09b7f258aa85743d2854f9acc/splitinfo.txt
I discovered the
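(The usual starting point for this kind of issue is the built-in split-brain report plus the afr xattrs of the directory on each brick; volume and brick paths are placeholders:)
# entries gluster itself considers to be in split brain
gluster volume heal <volname> info split-brain
# afr changelog xattrs of the affected directory, to compare across bricks
getfattr -d -m . -e hex /bricks/brick1/path/to/affected/dir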
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
Volumes are aggregations of bricks, so I would consider bricks as the
unique entity here rather than volumes. Taking the constraints from the
blog [1]:
* All bricks should be carved out from an independent thinly provisioned
logical volume (LV). In other words, no two bricks should share a common
LV. More details about thin provisioning and thin provisioned snapshots
can be found here.
* This thinly
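(In practice that constraint translates to something like the following for each brick; sizes and names are placeholders only:)
# an independent thin pool + thin LV per brick, so each brick can be snapshotted on its own
lvcreate -L 40T -T vg_bricks/pool_brick1
lvcreate -V 39T -T vg_bricks/pool_brick1 -n lv_brick1
mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1
mkdir -p /bricks/brick1
mount /dev/vg_bricks/lv_brick1 /bricks/brick1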
2017 Jun 29
2
Persistent storage for docker containers from a Gluster volume
On 28-Jun-2017 5:49 PM, "mabi" <mabi at protonmail.ch> wrote:
Anyone?
-------- Original Message --------
Subject: Persistent storage for docker containers from a Gluster volume
Local Time: June 25, 2017 6:38 PM
UTC Time: June 25, 2017 4:38 PM
From: mabi at protonmail.ch
To: Gluster Users <gluster-users at gluster.org>
Hello,
I have a two node replica 3.8 GlusterFS
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which
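(Since the thread is about the CLI seeming unaware, it may be worth noting that 3.7+ also exposes policy-based resolution directly through the CLI; the names below are placeholders, and this only helps if 'heal info split-brain' actually lists the entries:)
# pick the copy with the largest size as the good one
gluster volume heal <volname> split-brain bigger-file /path/inside/volume/file
# or take everything from one brick as the source
gluster volume heal <volname> split-brain source-brick serverA:/gfs/brick-b /path/inside/volume/file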
2017 Jul 30
2
Hot Tier
Hi
I'm looking for advice on the hot tier feature - how can I tell if the hot
tier is working?
I've attached a replicated-distributed hot tier to an EC volume.
Yet, I don't think it's working; at least I don't see any files directly on
the bricks (only the folder structure). The 'Status' command has all 0s and 'In
progress' for all servers.
~]# gluster volume tier home
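(The counters for promoted and demoted files show up in the tier status output; 'home' is the volume name taken from the message above:)
# per-node counts of files promoted/demoted since the tier was attached
gluster volume tier home status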
2018 Apr 03
1
Tune and optimize dispersed cluster
Hi all,
I have setup a dispersed cluster (2+1), version 3.12.
The way our users run I guessed that we would get the penalties
with dispersed cluster and I was right....
A calculation that usually takes about 48 hours (on a replicaited cluster),
now took about 60 hours.
There is alot of "small" reads/writes going on in these programs.
Is there a way to tune, optimize a dispersed cluster
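(There is no single switch for this, but a few options people commonly experiment with for small-file workloads are shown below; treat the values as starting points to benchmark, not recommendations, and <volname> is a placeholder:)
# more event threads on clients and servers
gluster volume set <volname> client.event-threads 4
gluster volume set <volname> server.event-threads 4
# keep client-side io-threads and metadata caching enabled
gluster volume set <volname> performance.client-io-threads on
gluster volume set <volname> performance.stat-prefetch on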
2018 Apr 03
1
Dispersed cluster tune, optimize
Hi all,
I have set up a dispersed cluster (2+1), version 3.12.
I guessed that we were going to get punished by small reads/writes...
and I was right.
A calculation that usually takes 48 hours
took about 60 hours, and there are many small reads/writes
to intermediate files that at the end get summed up.
Is there a way to tune, optimize a dispersed cluster to work
better with small reads/writes?
Many
2024 Mar 04
1
Gluster Rebalance question
Hi all,
I've been using glusterfs for a few years now, and I'm generally very happy with it. It has saved my data multiple times already! :-)
However, I do have a few questions which I hope someone is able to answer.
I have a distributed, replicated glusterfs setup. I am in the process of replacing 4TB bricks with 8TB bricks, which is working nicely. However, what I am seeing now is that the space
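(If the concern is that data is not spreading onto the new, larger bricks, the usual follow-up after swapping bricks is a rebalance; <volname> is a placeholder:)
# redistribute existing files according to the new brick layout and watch progress
gluster volume rebalance <volname> start
gluster volume rebalance <volname> status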
2023 Sep 12
0
Recovering files "lost" during a rebalance on a Dispersed 3+1
Hello,
We are running glusterfs 6.6 on Ubuntu.
We have a Gluster storage system that is a few years old. There are 4 VMs
running a Dispersed (NOT replicated) system - a 3 + 1 configuration.
Generally performance is well tuned for our needs, but the problem arose
last time we added bricks: we attempted a rebalance which is reported as
failed. From the mounted POSIX view of the file system, we