Displaying 20 results from an estimated 500 matches similar to: "Getting glusterfs to expand volume size to brick size"

2018 Apr 17 (5) - Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3: option shared-brick-count 3
Sincerely,
Artem
--
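
shared-brick-count in these volfiles is, roughly, the number of bricks glusterd believes share one underlying filesystem; a brick's reported capacity is divided by it, so a value of 3 for bricks that actually sit on three separate block devices makes the volume appear at a third of its real size (the behaviour tracked in the bug cited later in this digest). A quick way to inspect it, assuming the same volume name and glusterd paths as above:

  # print the shared-brick-count setting from every generated brick volfile
  grep -n 'option shared-brick-count' /var/lib/glusterd/vols/dev_apkmirror_data/*.vol
  # for bricks on separate filesystems the expected value is 1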

2018 Apr 17 (0) - Getting glusterfs to expand volume size to brick size
Ok, it looks like the same problem.
@Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
the volfiles to fix this?
Regards,
Nithya
On 17 April 2018 at 09:57, Artem Russakovskii <archon810 at gmail.com> wrote:
> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:

2018 Apr 16 (2) - Getting glusterfs to expand volume size to brick size
Hi Nithya,
I'm on Gluster 4.0.1.
I don't think the bricks were smaller before. If they were (perhaps 20GB,
since Linode's minimum is 20GB), I have since extended them to 25GB, resized
them with resize2fs as instructed, and rebooted many times. Yet gluster
refuses to see the full disk size.
Here's the status detail output:
gluster volume status dev_apkmirror_data detail
Status
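
A quick way to compare what the backing filesystem reports with what gluster reports, assuming brick mounts like the /mnt/pylon_block* paths implied by the volfile names quoted elsewhere in this digest:

  # capacity of the filesystem backing one brick
  df -h /mnt/pylon_block1
  # gluster's view; compare the Total Disk Space / Disk Space Free fields per brick
  gluster volume status dev_apkmirror_data detail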

2018 Apr 17 (1) - Getting glusterfs to expand volume size to brick size
I just remembered that I didn't run
https://docs.gluster.org/en/v3/Upgrade-Guide/op_version/ for this test
volume/box like I did for the main production gluster, and one of these
operations, either the heal or the op-version bump, resolved the issue.
I'm now seeing:
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
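
For reference, the op-version procedure in that guide amounts to something like the following (40000 is the 4.0.x op-version; verify the number against the release notes for your version):

  # op-version the cluster is currently operating at
  gluster volume get all cluster.op-version
  # highest op-version the installed binaries support
  gluster volume get all cluster.max-op-version
  # bump the cluster op-version once every node runs the new version
  gluster volume set all cluster.op-version 40000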

2018 Apr 17 (0) - Getting glusterfs to expand volume size to brick size
Hi Artem,
Was the volume size correct before the bricks were expanded?
This sounds like [1] but that should have been fixed in 4.0.0. Can you let
us know the values of shared-brick-count in the files in
/var/lib/glusterd/vols/dev_apkmirror_data/ ?
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
On 17 April 2018 at 05:17, Artem Russakovskii <archon810 at gmail.com> wrote:
> Hi

2018 Apr 17 (0) - Getting glusterfs to expand volume size to brick size
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the
bug seems to persist in 4.0.1.
Sincerely,
Artem
--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
<https://plus.google.com/+ArtemRussakovskii> | @ArtemR
<http://twitter.com/ArtemR>
On Mon, Apr

2018 Apr 16 (0) - Getting glusterfs to expand volume size to brick size
What version of Gluster are you running? Were the bricks smaller earlier?
Regards,
Nithya
On 15 April 2018 at 00:09, Artem Russakovskii <archon810 at gmail.com> wrote:
> Hi,
>
> I have a 3-brick replicate volume, but for some reason I can't get it to
> expand to the size of the bricks. The bricks are 25GB, but even after
> multiple gluster restarts and remounts, the

2018 Apr 17 (1) - Getting glusterfs to expand volume size to brick size
That might be the reason. Perhaps the volfiles were not regenerated after
upgrading to the version with the fix.
There is a workaround detailed in [2] for the time being (you will need to
copy the shell script into the correct directory for your Gluster release).
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19
On 17 April 2018 at 09:58, Artem Russakovskii <archon810 at
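
The shell script from [2] is not reproduced here; as a rough sketch of the same idea, volfiles can be regenerated by setting any volume option (the option below is only an example), after which shared-brick-count can be re-checked:

  # any successful 'volume set' rewrites the brick volfiles
  gluster volume set dev_apkmirror_data cluster.min-free-disk 10%
  grep 'option shared-brick-count' /var/lib/glusterd/vols/dev_apkmirror_data/*.vol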

2018 Apr 18 (2) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Thanks for the link. Looking at the status of that doc, it isn't quite
ready yet, and there's no mention of the option.
Does it mean that whatever is ready now in 4.0.1 is incomplete but can be
enabled via granular-entry-heal=on, and when it is complete, it'll become
the default and the flag will simply go away?
Is there any risk enabling the option now in 4.0.1?
Sincerely,
Artem
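
For anyone following along, the flag being discussed is a per-volume option; a minimal sketch, using the volume name from the first thread purely as a placeholder:

  # check the current value
  gluster volume get dev_apkmirror_data cluster.granular-entry-heal
  # turn it on (newer releases also offer
  # 'gluster volume heal <vol> granular-entry-heal enable')
  gluster volume set dev_apkmirror_data cluster.granular-entry-heal on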

2018 Apr 18 (3) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Following up here on a related issue that is very serious for us.
I took down one of the 4 replicate gluster servers for maintenance today.
There are 2 gluster volumes totaling about 600GB. Not that much data. After
the server comes back online, it starts auto healing and pretty much all
operations on gluster freeze for many minutes.
For example, I was trying to run an ls -alrt in a folder with 7300

2018 Apr 18 (2) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Ravi,
Could you please expand on how these would help?
By forcing full here, we move the work from the CPU to the network, thus
decreasing CPU utilization, is that right? This assumes the CPU and disk
utilization are caused by the diff computation and not by lstat and other
calls or something.
> Option: cluster.data-self-heal-algorithm
> Default Value: (null)
> Description: Select between
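
The quoted option is set per volume; a minimal sketch, again using the volume name from the first thread as a placeholder:

  # heal by copying whole files instead of computing checksummed diffs
  gluster volume set dev_apkmirror_data cluster.data-self-heal-algorithm full
  # revert to the default behaviour later if needed
  gluster volume reset dev_apkmirror_data cluster.data-self-heal-algorithm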

2018 Apr 10 (2) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
I wish I knew, or could find, a detailed description of those options myself.
Here is direct-io-mode:
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode
Like you, I ran tests on a large volume of files and found that the main
delays are in attribute calls, which is how I ended up with those mount
options to improve performance.
I discovered those options basically by googling this user list with
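
direct-io-mode is a FUSE mount option rather than a volume option; a hypothetical fstab entry showing where it would go (host, volume and mount point are placeholders):

  # /etc/fstab
  pylon:/dev_apkmirror_data  /mnt/dev_apkmirror_data  glusterfs  defaults,_netdev,direct-io-mode=disable  0 0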

2018 Apr 18 (1) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 11:59 AM, Artem Russakovskii wrote:
> Btw, I've now noticed at least 5 variations in toggling binary option
> values. Are they all interchangeable, or will using the wrong value
> not work in some cases?
>
> yes/no
> true/false
> True/False
> on/off
> enable/disable
>
> It's quite a confusing/inconsistent practice, especially given that

2018 Apr 18 (0) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Btw, I've now noticed at least 5 variations in toggling binary option
values. Are they all interchangeable, or will using the wrong value not
work in some cases?
yes/no
true/false
True/False
on/off
enable/disable
It's quite a confusing/inconsistent practice, especially given that many
options will accept any value without erroring out/validation.
Sincerely,
Artem
--
Founder, Android

2018 Apr 05 (2) - [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
Hi,
I noticed that when I run gluster volume heal data info, the following
message shows up in the log, along with other stuff:
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal
> failed: Unable to form layout for directory /
I'm seeing it on Gluster 4.0.1 and 3.13.2.
Here's the full log after running heal info:
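
For reference, heal info and its log can be checked roughly like this (the glfsheal log path is the usual default and may differ between installs):

  gluster volume heal data info
  # the heal-info crawl logs here, including the dht layout warning quoted above
  tail -n 50 /var/log/glusterfs/glfsheal-data.log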

2018 Apr 03 (3) - cluster.readdir-optimize and disappearing files/dirs bug
Hi,
As many of you know, gluster suffers from pretty bad performance issues
when there are lots of files. One way to at least attempt to improve
performance is setting cluster.readdir-optimize to on.
However, based on my recent tests (using Gluster 3.13.2), as well as tests
of many others (like
http://lists.gluster.org/pipermail/gluster-devel/2016-November/051417.html),
there's a bug that
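
The option in question is toggled per volume; a minimal sketch with a placeholder volume name:

  # enable the readdir optimization discussed above (placeholder volume name)
  gluster volume set myvol cluster.readdir-optimize on
  gluster volume get myvol cluster.readdir-optimize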

2018 Apr 06 (3) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi,
I'm trying to squeeze performance out of gluster on four 80GB-RAM, 20-CPU
machines where Gluster runs on attached block storage (Linode) as 4
replicate bricks, and so far everything I've tried results in sub-optimal
performance.
There are many files - mostly images, several million - and many operations
take minutes; copying multiple files (even small ones) suddenly
freezes up for

2018 Apr 10 (2) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
You definitely need to add mount options to /etc/fstab;
use the ones from here:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I also went with using local mounts to achieve performance.
Also, the 3.12 or 3.10 branches would be preferable for production.
On Fri, Apr 6, 2018 at 4:12 AM, Artem Russakovskii <archon810 at gmail.com>
wrote:
> Hi again,
>
> I'd like to
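
The linked post is not quoted here; purely as an illustration of the kind of fstab entry being discussed, with placeholder host/volume names and timeout values that would need tuning:

  # /etc/fstab - glusterfs FUSE mount with client-side caching knobs
  server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3,attribute-timeout=600,entry-timeout=600,negative-timeout=600  0 0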

2018 Apr 18 (0) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:35 AM, Artem Russakovskii wrote:
> Hi Ravi,
>
> Could you please expand on how these would help?
>
> By forcing full here, we move the logic from the CPU to network, thus
> decreasing CPU utilization, is that right?
Yes, 'diff' employs the rchecksum FOP which does a sha256? checksum
which can consume CPU. So yes it is sort of shifting the load from CPU

2018 Apr 10 (0) - performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad,
I actually saw that post already and even asked a question 4 days ago (
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode#comment1172497_540917).
The accepted answer also seems to go against your suggestion to enable
direct-io-mode as it says it should be disabled for better performance when
used just for file accesses.
It'd be great if someone from the Gluster team