Displaying 20 results from an estimated 10000 matches similar to: "gluster volume + lvm : recommendation or necessity ?"
2017 Oct 11
2
gluster volume + lvm : recommendation or necessity ?
Thanks Rafi, that's understood now :)
I'm considering deploying gluster on 4 x 40 TB bricks. Do you think
it would be better to make one LVM partition for each volume I need, or to
make one big LVM partition and start multiple volumes on it?
We'll store mostly big files (videos) in this environment.
On 11/10/2017 at 09:34, Mohammed Rafi K C wrote:
>
> On 10/11/2017 12:20
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
On 10/11/2017 12:20 PM, ML wrote:
> Hi everyone,
>
> I've read in the gluster & redhat documentation that it seems
> recommended to use XFS over LVM before creating & using gluster volumes.
>
> Sources :
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>
>
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
On 10/11/2017 09:50 AM, ML wrote:
> Hi everyone,
>
> I've read in the gluster & redhat documentation that it seems recommended to
> use XFS over LVM before creating & using gluster volumes.
>
> Sources :
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>
>
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
Volumes are aggregations of bricks, so I would consider bricks as the
unique entity here rather than volumes. Taking the constraints from the
blog [1]:
* All bricks should be carved out from an independent thinly provisioned
logical volume (LV). In other words, no two bricks should share a common
LV (see the sketch after this list). More details about thin provisioning
and thin provisioned snapshots can be found here.
* This thinly
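For illustration, a minimal sketch of carving a single brick out of its own
thin LV (device name, sizes, mount point and LV names below are hypothetical,
not from the original thread):

  # one PV/VG per disk; the brick gets its own thin pool and thin LV
  pvcreate /dev/sdb
  vgcreate vg_brick1 /dev/sdb
  lvcreate --size 39T --thinpool thinpool vg_brick1
  lvcreate --virtualsize 39T --thin --name brick1_lv vg_brick1/thinpool
  # XFS with 512-byte inodes is the commonly documented brick filesystem
  mkfs.xfs -i size=512 /dev/vg_brick1/brick1_lv
  mkdir -p /bricks/brick1 && mount /dev/vg_brick1/brick1_lv /bricks/brick1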
2017 Oct 11
1
gluster volume + lvm : recommendation or necessity ?
After some extra reading about LVM snapshots & Gluster, I think I can
conclude it may be a bad idea to use them on big storage bricks.
I understood that the maximum LVM metadata size, used to store the
snapshot data, is about 16GB.
So if I have a brick with a volume around 10TB (for example), daily
snapshots, and files changing by ~100GB: the LVM snapshot is useless.
LVM's snapshots don't
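If the worry is the thin pool metadata filling up, a hedged sketch for
keeping an eye on it, and for sizing it explicitly at pool creation time
(VG and pool names are hypothetical; the metadata LV is hard-capped just
under 16GiB):

  # show how full the thin pool data and metadata areas are
  lvs -o lv_name,lv_size,data_percent,metadata_percent vg_brick1
  # metadata size can be set explicitly when creating the pool
  lvcreate --size 10T --thinpool thinpool --poolmetadatasize 15G vg_brick1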
2017 Jun 29
2
Arbiter node as VM
Hello,
I have a replica 2 GlusterFS 3.8.11 cluster on 2 Debian 8 physical servers using ZFS as the filesystem. Now, in order to avoid a split-brain situation, I would like to add a third node as arbiter.
Regarding the arbiter node I have a few questions:
- can the arbiter node be a virtual machine? (I am planning to use Xen as hypervisor)
- can I use ext4 as the file system on my arbiter, or does it need
2017 Jun 29
0
Arbiter node as VM
As long as the VM isn't hosted on one of the two Gluster nodes, that's
perfectly fine. One of my smaller clusters uses the same setup.
As for your other questions, as long as it supports Unix file permissions,
Gluster doesn't care what filesystem you use. Mix & match as you wish. Just
try to keep matching Gluster versions across your nodes.
On 29 June 2017 at 16:10, mabi
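For reference, a hedged sketch of adding an arbiter brick to an existing
replica 2 volume (volume name, arbiter host and brick path are made up):

  # convert replica 2 -> replica 3 arbiter 1 by adding a single arbiter brick
  gluster volume add-brick myvol replica 3 arbiter 1 arbiter-host:/srv/gluster/myvol/brick
  gluster volume info myvol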
2017 Nov 15
2
Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>
> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>
>> Could I just remove the content of the brick (including the
>> .glusterfs directory) and reconnect ?
>>
>
> In fact, what would be the difference between reconnecting the brick
> with a wiped FS, and using
>
> gluster volume remove-brick vmstore
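For reference, a hedged sketch of the remove/re-add variant hinted at above
(the replica counts, node name and brick path are assumptions; adjust them
to the real vmstore layout):

  # drop the faulty brick from the replica set, then add it back empty
  gluster volume remove-brick vmstore replica 1 faulty-node:/data/vmstore/brick force
  gluster volume add-brick vmstore replica 2 faulty-node:/data/vmstore/brick
  # trigger a full self-heal so data is copied back onto the fresh brick
  gluster volume heal vmstore full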
2017 Nov 15
0
Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote:
>
>
>
> On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>>
>> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>>
>>> Could I just remove the content of the brick (including the
>>> .glusterfs directory) and reconnect ?
>>>
>>
If it is only the brick that is faulty on the bad node,
2017 Nov 16
2
Help with reconnecting a faulty brick
On 15/11/2017 at 09:45, Ravishankar N wrote:
> If it is only the brick that is faulty on the bad node, but everything
> else is fine, like glusterd running, the node being a part of the
> trusted storage pool etc., you could just kill the brick first and do
> step-13 in "10.6.2. Replacing a Host Machine with the Same Hostname",
> (the mkdir of non-existent dir,
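A hedged sketch of that step-13 trick (volume name and mount point assumed):
from a client FUSE mount it marks pending heals towards the wiped brick,
then the killed brick is restarted and self-heal repopulates it:

  mount -t glusterfs localhost:/vmstore /mnt/vmstore
  # create and delete a dummy entry and xattr so AFR records pending changes
  mkdir /mnt/vmstore/nonexistent-dir
  rmdir /mnt/vmstore/nonexistent-dir
  setfattr -n trusted.non-existent-key -v abc /mnt/vmstore
  setfattr -x trusted.non-existent-key /mnt/vmstore
  # bring the replaced brick back up and heal
  gluster volume start vmstore force
  gluster volume heal vmstore full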
2017 Aug 31
2
Manually delete .glusterfs/changelogs directory ?
Hi Mabi,
If you will not use that geo-replication volume session again, I believe it
is safe to manually delete the files in the brick directory using rm -rf.
However, the gluster documentation specifies that if the session is to be
permanently deleted, this is the command to use:
gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time
2017 Jul 31
1
RECOMMENDED CONFIGURATIONS - DISPERSED VOLUME
Hi
I'm looking for advice on configuring a dispersed volume.
I have 12 servers and would like to use 10:2 ratio.
Yet RH recommends 8:3 or 8:4 in this case:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Recommended-Configuration_Dispersed.html
My goal is to create a 2PB volume, and going with 10:2 vs 8:3/4 saves a few
bricks. With 10:2
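For what it's worth, a minimal sketch of the 10+2 layout being considered
(volume name, hostnames and brick paths are placeholders):

  # 12 bricks across 12 servers: 10 data + 2 redundancy
  gluster volume create bigvol disperse-data 10 redundancy 2 \
      server{1..12}:/bricks/bigvol/brick
  gluster volume start bigvol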
2017 Sep 20
3
Backup and Restore strategy in Gluster FS 3.8.4
> First, please note that gluster 3.8 is EOL and that 3.8.4 is rather old in
> the 3.8 release, 3.8.15 is the current (and probably final) release of 3.8.
>
> "With the release of GlusterFS-3.12, GlusterFS-3.8 (LTM) and GlusterFS-
> 3.11 (STM) have reached EOL. Except for serious security issues no
> further updates to these versions are forthcoming. If you find a bug please
2017 Jun 29
2
Persistent storage for docker containers from a Gluster volume
On 28-Jun-2017 5:49 PM, "mabi" <mabi at protonmail.ch> wrote:
Anyone?
-------- Original Message --------
Subject: Persistent storage for docker containers from a Gluster volume
Local Time: June 25, 2017 6:38 PM
UTC Time: June 25, 2017 4:38 PM
From: mabi at protonmail.ch
To: Gluster Users <gluster-users at gluster.org>
Hello,
I have a two-node replica GlusterFS 3.8
2017 Jul 07
1
Gluster 3.11 on ubuntu 16.04 not working
Hi There,
We have a problem with a fresh installation of gluster 3.11 on an Ubuntu
16.04 server.
We did a straightforward installation, as described on
the gluster.org website.
<http://gluster.readthedocs.io/en/latest/Install-Guide/Configure/>
in fstab is:
/dev/sdb1 /gluster xfs defaults 0 0
knoten5:/gv0 /glusterfs glusterfs defaults,_netdev,acl,selinux 0 0
after
2017 Jun 29
0
Persistent storage for docker containers from a Gluster volume
Hi,
GlusterFS works fine for large files (in most cases it's used as a
VM image store), but with docker you'll generate a bunch of small
files, so if you want good performance maybe look at [1]
and [2].
Also, a two-node replica is a bit dangerous: under high load with small
files there is a real risk of a split-brain situation, so think
about an arbiter
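As a hedged illustration (not necessarily what [1] and [2] recommend), the
metadata-cache options commonly suggested for small-file workloads look like
this; the volume name and values are placeholders:

  gluster volume set myvol features.cache-invalidation on
  gluster volume set myvol features.cache-invalidation-timeout 600
  gluster volume set myvol performance.stat-prefetch on
  gluster volume set myvol performance.md-cache-timeout 600
  gluster volume set myvol network.inode-lru-limit 200000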
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi,
I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs:
2017 Sep 23
3
EC 1+2
Is it possible to create a dispersed volume 1+2? (Almost the same as replica
3, the same as RAID-6)
If yes, how many servers do I have to add in the future to expand the storage?
1 or 3?
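For context: gluster requires the total brick count to be greater than twice
the redundancy, so a 1+2 layout is not accepted; the smallest dispersed
volume is 2+1. A minimal sketch with hypothetical hostnames:

  gluster volume create ecvol disperse 3 redundancy 1 \
      server1:/bricks/ecvol server2:/bricks/ecvol server3:/bricks/ecvol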
2017 Sep 07
1
Firewalls and ports and protocols
Reading the documentation, there is conflicting information:
According to https://wiki.centos.org/HowTos/GlusterFSonCentOS we only need a few TCP ports open between 2 GlusterFS servers:
Ports TCP:24007-24008 are required for communication between GlusterFS nodes and each brick requires another TCP port starting at 24009.
According to
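As a hedged example of opening those ports with firewalld (the brick port
range depends on the Gluster version; newer releases allocate brick ports
from 49152 upwards rather than from 24009):

  firewall-cmd --permanent --add-port=24007-24008/tcp
  firewall-cmd --permanent --add-port=49152-49251/tcp
  firewall-cmd --reload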
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which
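As a hedged alternative to editing the xattrs by hand, recent gluster
versions expose split-brain resolution through the CLI; the file path below
is a placeholder inside the virt_images volume:

  # pick the copy with the newest mtime as the heal source
  gluster volume heal virt_images split-brain latest-mtime /path/to/file.img
  # or explicitly choose one brick as the source for that file
  gluster volume heal virt_images split-brain source-brick virt2:/data/virt_images/brick /path/to/file.img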