2011 Sep 15
1
Gluster 3.2 configurations + translators
Hello,
I'm a little confused about the Gluster configuration interface. I started
with Gluster 3.2 and did all of my configuration through the gluster CLI
command.
Now, while looking into how to tune performance, I keep finding pieces of
text-based configuration files in many places in the documentation, but
usually with a warning that they are old and should not be used.
Right now I'm trying to work out how to turn
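For what it's worth, since the 3.1/3.2 CLI the usual way to tune a translator option is gluster volume set, which regenerates the volfiles for you instead of requiring hand edits. A minimal sketch, with the volume name myvol and the option value as placeholders (exact option names vary by release):

# Tune a performance translator through the CLI instead of editing volfiles;
# glusterd rewrites the volume files automatically.
gluster volume set myvol performance.cache-size 256MB

# Tuned options are listed under "Options Reconfigured".
gluster volume info myvol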
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem              Size  Used  Avail Use% Mounted on
/dev/sdb1                26T  1,1T   25T   4% /mnt/glusterfs/vol0
/dev/sdc1                50T   16T   34T  33% /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T   97T   4% /volumedisk0
stor1data:/volumedisk1
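When sanity-checking figures like these, one quick cross-check is to sum the brick filesystems on every node and compare the total with what the volume mount reports. The hostnames and mount points below are the ones from this thread and will differ on other setups:

# Per-node brick capacity; the distributed volume total should roughly
# equal the sum of its bricks.
for host in stor1data stor2data; do
    echo "== $host =="
    ssh "$host" df -h /mnt/glusterfs/vol0 /mnt/glusterfs/vol1
done

# Totals reported through the Gluster mounts.
df -h /volumedisk0 /volumedisk1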
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
now smaller than the aggregated capacity of all the bricks in the volume.
I checked that all volume statuses are fine, all the glusterd daemons are
running, and there are no errors in the logs; however, df shows a wrong total size.
My configuration for one volume:
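The checks mentioned above (volume status, glusterd daemons, logs) map onto standard CLI commands; a rough sketch, using volumedisk0 as an example volume name and the default log location (the glusterd log file name varies between releases):

# Every brick process and self-heal daemon should show as online.
gluster volume status volumedisk0

# All peers should be in "Peer in Cluster (Connected)" state.
gluster peer status

# glusterd itself, plus a quick scan for recent errors in its log.
systemctl status glusterd
grep ' E ' /var/log/glusterfs/glusterd.log | tail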
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks I ran the 'balance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
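Assuming 'balance force' above refers to the usual expand-and-rebalance sequence, a sketch of the commands involved; the brick paths on stor3data are placeholders:

# Add the new node to the trusted pool and attach its bricks to the volume.
gluster peer probe stor3data
gluster volume add-brick volumedisk1 \
    stor3data:/mnt/disk1/volumedisk1 stor3data:/mnt/disk2/volumedisk1

# Migrate existing files onto the new bricks and follow the progress.
gluster volume rebalance volumedisk1 start force
gluster volume rebalance volumedisk1 status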
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below is the output for both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
     Node   Rebalanced-files      size    scanned   failures   skipped     status   run time in h:m:s
---------   ----------------   -------   --------   --------   -------   --------   ------------------
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
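A sketch of how that check might be gathered from all nodes in one go; the hostnames are the ones from this thread, and the expectation that bricks on separate filesystems show a value of 1 follows from the bug report in [1]:

# Collect shared-brick-count from every node's brick volfiles; values
# other than 1 for bricks on separate filesystems indicate bug 1517260,
# which makes glusterd under-report the brick (and volume) size.
for host in stor1data stor2data stor3data; do
    echo "== $host =="
    ssh "$host" 'grep -n shared-brick-count /var/lib/glusterd/vols/volumedisk1/*'
done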
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root@stor1 ~]# df -h
> Filesystem              Size  Used  Avail Use% Mounted on
> /dev/sdb1                26T  1,1T   25T   4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with its bricks I ran the 'balance
> force' operation.
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below is the output for both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
     Node   Rebalanced-files      size    scanned   failures   skipped     status   run time in h:m:s
---------   ----------------   -------   --------   --------   -------   --------   ------------------
2017 Jun 15
1
How to expand Replicated Volume
Hi Nag Pavan Chilakam
Can I use this command, "gluster vol add-brick vol1 replica 2
file01g:/brick3/data/vol1 file02g:/brick4/data/vol1", on the two existing
file servers 01 and 02 without adding new servers? Is that OK for expanding the volume?
Thanks for your support
Regards,
Giang
2017-06-14 22:26 GMT+07:00 Nag Pavan Chilakam <nag.chilakam at gmail.com>:
> Hi,
> You can use add-brick
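Assuming the aim is to grow the existing replica-2 volume with new bricks on the same two servers, a sketch of the full sequence, using the brick paths from the question:

# The two new bricks (one per existing server) form an additional replica
# pair, growing vol1 as a distributed-replicated volume.
gluster volume add-brick vol1 replica 2 \
    file01g:/brick3/data/vol1 file02g:/brick4/data/vol1

# Spread existing data onto the new replica pair.
gluster volume rebalance vol1 start
gluster volume rebalance vol1 status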
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it
doesn't. Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
Looks like a bug: I see that tier-enabled = 0 is an additional entry in the
info file on shchhv01. As per the code, this field should only be written into
the glusterd store if the op-version is >= 30706. What I am guessing is that,
since we didn't have commit 33f8703a1 ("glusterd: regenerate volfiles on
op-version bump up") in 3.8.4, while bumping up the op-version the info and
volfiles were
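For checking the two things referred to above, a sketch; the volume name shchst01 and the glusterd store path come from this thread, and the op-version query assumes a release that supports 'volume get all':

# Current cluster operating version (30706 corresponds to release 3.7.6).
gluster volume get all cluster.op-version

# The tier-enabled entry should be present in the volume's info file on
# every node or on none of them; compare across nodes.
grep -H tier-enabled /var/lib/glusterd/vols/shchst01/info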
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I was attempting the same thing on a local sandbox and ran into the same problem.
Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3:
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
Yes Atin. I'll take a look.
On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
> Looks like a bug as I see tier-enabled = 0 is an additional entry in the
> info file in shchhv01. As per the code, this field should be written into
> the glusterd store if the op-version is >= 30706 . What I am guessing is
> since we didn't have the commit
2012 Apr 20
1
Upgrading from V3.0 production system to V3.3
Before I undertake this upgrade I thought I would see if anyone has any advice
on how to do this on a production system. Maybe someone has already "fought
this dragon".
Current config: Gluster V3.0.0. This has been in production for over 16 months:
2 servers with 8 x 2TB hard drives (bricks) replicated
svr1:vol1 <-> srv2:vol1 -> rbrick1
svr1:vol2 <-> srv2:vol2 ->
2018 Apr 16
2
Bitrot strange behavior
Hello,
I am playing around with the bitrot feature and have some questions:
1. When a file is created, the "trusted.bit-rot.signature" attribute
seems to be created only approximately 120 seconds after the file itself
(the cluster is idle and there is only one file living on it). Why?
Is there a way to have this attribute generated at the same time as
the file creation?
2. corrupting a file
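A sketch of how the signing delay from point 1 can be observed; the client mount and brick paths are placeholders, and the trusted.* xattr has to be read on the brick itself, as root:

# Write a file through the client mount, then poll its brick copy until
# the bitrot signature xattr appears (about 120 seconds by default).
echo probe > /mnt/glustervol/bitrot-probe
while ! getfattr -m trusted.bit-rot.signature \
        /data/brick1/vol/bitrot-probe 2>/dev/null | grep -q signature; do
    sleep 10
done
date    # the signature is present from this point on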
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I've read the documentation and done some testing, but I
would like to run my plan past some (more?) educated minds.
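Not the actual plan being proposed here, but a generic sketch of the per-node cycle commonly used for rolling upgrades of a replicated cluster, assuming Fedora/dnf packaging and systemd; vol0 is taken from the setup below:

# One node at a time: stop Gluster, upgrade packages, restart, and wait
# for self-heal to catch up before moving to the next node.
systemctl stop glusterd
pkill glusterfsd; pkill glusterfs     # remaining brick/self-heal processes
dnf -y upgrade 'glusterfs*'
systemctl start glusterd

# Do not continue until the heal backlog is empty for every volume.
gluster volume heal vol0 info

# Only after all nodes run 3.12, bump the cluster op-version once
# (use the op-version matching the installed 3.12.x release).
gluster volume set all cluster.op-version 31202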
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
2018 Jan 18
2
Segfaults after upgrade to GlusterFS 3.10.9
Hi,
after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time:
[12407.918249] ganesha.nfsd[38104]: segfault at 0 ip 00007f872425fb00 sp 00007f867cefe5d0 error 4 in libglusterfs.so.0.0.1[7f8724223000+f1000]
[12693.119259] ganesha.nfsd[3610]: segfault at 0 ip 00007f716d8f5b00 sp 00007f71367e15d0 error 4 in libglusterfs.so.0.0.1[7f716d8b9000+f1000]
[14531.582667]
2018 Apr 18
0
Bitrot strange behavior
Hi Cedric,
Any file is picked up for signing by the bitd process after the
predetermined wait of 120 seconds. This default value is captured in the
volume option 'features.expiry-time' and is configurable - in your case,
it can be set to 0 or 1.
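A sketch of that tuning; the volume name is a placeholder:

# Shorten the delay before bitd signs newly written files (value in seconds).
gluster volume set myvol features.expiry-time 1

# Check the effective value.
gluster volume get myvol features.expiry-time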
Point 2 is correct. A file corrupted before the bitrot signature is
generated will not be successfully detected by the scrubber. That would