similar to: slow lstat on 3.12 disperse volume

Displaying 20 results from an estimated 4000 matches similar to: "slow lstat on 3.12 disperse volume"

2018 Feb 26
2
new Gluster cluster: 3.10 vs 3.12
After discussing with Xavi in #gluster-dev we found out that we could eliminate the slow lstats by disabling disperse.eager-lock. There is an open issue here: https://bugzilla.redhat.com/show_bug.cgi?id=1546732
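For reference, disabling eager locking is an ordinary volume-set operation. A minimal sketch, assuming a hypothetical volume named "myvol":

    # Disable eager locking in the disperse translator (the workaround above)
    gluster volume set myvol disperse.eager-lock off
    # Confirm the new value
    gluster volume get myvol disperse.eager-lock

Eager locking generally benefits write-heavy workloads, so the trade-off is worth measuring before applying it in production.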
2018 Apr 18
1
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Nithya, Amar, Any movement here? There could be a significant performance gain here that may also affect other bottlenecks that I'm experiencing which make gluster close to unusable at times. Sincerely, Artem -- Founder, Android Police <http://www.androidpolice.com>, APK Mirror <http://www.apkmirror.com/>, Illogical Robot LLC beerpla.net | +ArtemRussakovskii
2018 Feb 27
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Any updates on this one? On Mon, Feb 5, 2018 at 8:18 AM, Tom Fite <tomfite at gmail.com> wrote: > Hi all, > > I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2 > boxes, distributed-replicate) My testing shows the same thing -- running a > find on a directory dramatically increases lstat performance. To add > another clue, the performance degrades
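The workaround discussed in this thread amounts to warming the directory metadata before the transfer. A minimal sketch, with a hypothetical mount point and source path:

    # Scan the target tree first; on the affected setups this appears to
    # populate gluster's metadata caches so rsync's lstat() calls are fast
    find /mnt/gluster/data > /dev/null
    # Then run the transfer as usual
    rsync -a /srv/source/data/ /mnt/gluster/data/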
2017 Sep 28
0
Upgrading (online) GlusterFS-3.7.11 to 3.10 with Distributed-Disperse volume
I'm working on upgrading a set of our gluster machines from 3.7 to 3.10. At first I was going to follow the guide here: https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/ but it mentions: > * Online upgrade is only possible with replicated and distributed > replicate volumes > * Online upgrade is not supported for dispersed or distributed >
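For dispersed volumes at this version the offline path is the safe one. A rough sketch of the offline flow, with hypothetical names; the upgrade guide linked above remains the authoritative reference:

    # On any server: stop the volume so no I/O is in flight
    gluster volume stop myvol
    # On every server: stop gluster, upgrade packages, restart
    systemctl stop glusterd
    yum update glusterfs-server        # or the package manager in use
    systemctl start glusterd
    # On any server: bring the volume back online
    gluster volume start myvol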
2018 Mar 20
0
Disperse volume recovery and healing
On Tue, Mar 20, 2018 at 5:26 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > That makes sense. In the case of "file damage," it would show up as files > that could not be healed in logfiles or gluster volume heal [volume] info? > If the damage affects more bricks than the volume redundancy, then probably yes. These files or directories will appear in
2017 Aug 20
0
Add brick to a disperse volume
Hi, Adding bricks to a disperse volume is very easy and works the same as for a replica volume. You just need to add bricks in a multiple of the number of bricks you already have. So if you have a disperse volume with an n+k configuration, you need to add n+k more bricks. Example: If your disperse volume is 4+2, where 2 is the redundancy count, you need to provide 6 (or a multiple of 6) bricks (4+2 = 6)
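As a concrete illustration of the multiple-of-(n+k) rule: expanding a 4+2 volume takes exactly 6 new bricks, which form a second disperse subvolume (the volume becomes distributed-disperse). A sketch with hypothetical host and path names:

    # Existing volume is disperse 4+2 (6 bricks); add 6 more in one step
    gluster volume add-brick myvol \
        srv7:/data/brick srv8:/data/brick srv9:/data/brick \
        srv10:/data/brick srv11:/data/brick srv12:/data/brick
    # Optionally spread existing data across the new subvolume
    gluster volume rebalance myvol start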
2024 Mar 14
1
Adding storage capacity to a production disperse volume
Hi, On 14.03.2024 01:39, Theodore Buchwald wrote: > > ... So my question is. What would be the correct amount of bricks > needed to expand the storage on the current configuration of 'Number > of Bricks: 1 x (4 + 1) = 5'? ... > I tried something similar and ended up with a similar error. As far as I understand the documentation, the answer in your case is "5".
2017 Nov 09
0
GlusterFS healing questions
Someone on the #gluster-users IRC channel said the following: "Decreasing features.locks-revocation-max-blocked to an absurdly low number is letting our distributed-disperse set heal again." Is this something to consider? Does anyone else have experience with tweaking this to speed up healing? Sent from my iPhone > On 9 Nov 2017, at 18:00, Serkan Çoban <cobanserkan at
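For anyone wanting to try the tweak quoted above, it is an ordinary volume option. A hedged sketch (the value 4 is arbitrary, and forcibly revoking blocked locks can have side effects, so test outside production first):

    # Revoke a lock once this many requests are blocked behind it
    gluster volume set myvol features.locks-revocation-max-blocked 4
    # Return to the default if it causes trouble
    gluster volume reset myvol features.locks-revocation-max-blocked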
2018 Mar 16
0
Disperse volume recovery and healing
On Fri, Mar 16, 2018 at 4:57 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > Xavi, does that mean that even if every node was rebooted one at a time > even without issuing a heal that the volume would have no issues after > running gluster volume heal [volname] when all bricks are back online? > No. After bringing up one brick and before stopping the next one, you need
2018 Mar 13
2
Disperse volume recovery and healing
I have a question about how disperse volumes handle brick failure. I'm running version 3.10.10 on all systems. If I have a disperse volume in a 4+2 configuration with 6 servers each serving 1 brick, and maintenance needs to be performed on all systems, are there any general steps that need to be taken to ensure data is not lost or service interrupted? For example, can I just reboot each system
2018 Mar 18
1
Disperse volume recovery and healing
No. After bringing up one brick and before stopping the next one, you need to be sure that there are no damaged files. You shouldn't reboot a node if "gluster volume heal <volname> info" shows damaged files. What happens in this case then? I'm thinking about a situation where the servers are kept in an environment that we don't control - i.e. the cloud. If the VMs are
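In practice the rule quoted above becomes a wait loop between reboots. A crude sketch, assuming a hypothetical volume name; it only checks the reported entry counts, so bricks that are down still need separate attention:

    # After a node is back up, wait until no entries are pending heal
    # before taking down the next node
    while gluster volume heal myvol info | grep -q 'Number of entries: [^0]'; do
        sleep 60
    done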
2018 Apr 10
0
glusterfs disperse volume input output error
Hi, Could you help me? I have a problem with a file on a disperse volume. When I try to read it from the mount point I receive an error: # md5sum /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2 md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error Configuration and status of volume is: # gluster volume info vol1 Volume Name: vol1 Type: Disperse Volume ID:
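When a read on a disperse volume fails with an input/output error, a common first step is to check whether the file has fragments pending heal, and to inspect the fragment metadata on the bricks directly. A hedged sketch, using the volume name from the excerpt and a hypothetical brick path:

    # List entries pending heal on each brick
    gluster volume heal vol1 info
    # On each server, inspect the fragment's extended attributes
    # (the trusted.ec.* xattrs hold the disperse version/size metadata)
    getfattr -m . -d -e hex /bricks/brick1/vmfs/slake-test-bck-m1-d1.qcow2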
2017 Nov 09
2
GlusterFS healing questions
Hi, You can set disperse.shd-max-threads to 2 or 4 in order to make heal faster. This makes my heal times 2-3x faster. Also you can play with disperse.self-heal-window-size to read more bytes at one time, but I did not test it. On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez <jahernan at redhat.com> wrote: > Hi Rolf, > > answers follow inline... > > On Thu, Nov 9, 2017 at
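The two options mentioned translate to the following, assuming a hypothetical volume name:

    # More self-heal daemon threads per disperse subvolume (default is 1)
    gluster volume set myvol disperse.shd-max-threads 4
    # Larger heal window, i.e. more blocks read per cycle (untested by the
    # poster, as noted above)
    gluster volume set myvol disperse.self-heal-window-size 2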
2018 Mar 16
2
Disperse volume recovery and healing
Xavi, does that mean that even if every node was rebooted one at a time even without issuing a heal that the volume would have no issues after running gluster volume heal [volname] when all bricks are back online? ________________________________ From: Xavi Hernandez <jahernan at redhat.com> Sent: Thursday, March 15, 2018 12:09:05 AM To: Victor T Cc: gluster-users at gluster.org Subject:
2018 Mar 15
0
Disperse volume recovery and healing
Hi Victor, On Wed, Mar 14, 2018 at 12:30 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > I have a question about how disperse volumes handle brick failure. I'm > running version 3.10.10 on all systems. If I have a disperse volume in a > 4+2 configuration with 6 servers each serving 1 brick, and maintenance > needs to be performed on all systems, are there any
2007 Nov 28
0
[Fwd: Re: network-bridge does not create veth or peth devices]
-------- Original Message -------- Subject: Re: [Xen-users] network-bridge does not create veth or peth devices Date: Wed, 28 Nov 2007 14:42:25 +0100 From: Ingard Mevåg <ingardm@startsiden.no> Organization: ABCStartsiden To: Luciano Rocha <strange@nsk.no-ip.org> References: <474D6492.5030807@startsiden.no> <20071128130205.GA1838@bit.office.eurotux.com> Luciano Rocha
2024 Mar 14
3
Adding storage capacity to a production disperse volume
Hi, This is the first time I have tried to expand the storage of a live gluster volume. I was able to get another supermicro storage unit for a gluster cluster that I built. The current clustered storage configuration contains five supermicro units. And the cluster volume is set up with the following configuration: node-6[/var/log/glusterfs]# gluster volume info Volume Name: researchdata
2006 Nov 06
3
Lstat & Dovecot
I am chasing a problem with dovecot generating an error: lstat(/var/spool/virtual_mailboxes/[domain dir]/[user dir]/Maildir/cur) failed: Permission denied I first tried making the directory world readable, same error. Then tried to lstat [the path] at the console and received the error: lstat: command not found I have a manpage on lstat, but no file. "Yum provides" showed the
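A side note on the "lstat: command not found" error: lstat() is a system call, not a shell command, which is why there is a manpage but no binary. From a console, coreutils stat is the closest equivalent; without -L it does not follow symlinks, which matches lstat semantics. A sketch with placeholder path components:

    # Reports the entry itself, symlink or not (lstat semantics)
    stat /var/spool/virtual_mailboxes/DOMAIN_DIR/USER_DIR/Maildir/cur
    # Follows symlinks instead (stat semantics)
    stat -L /var/spool/virtual_mailboxes/DOMAIN_DIR/USER_DIR/Maildir/cur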
2017 Sep 18
2
Confusing lstat() performance
Hi Ben, do you know if the smallfile benchmark also does interleaved getdents() and lstat, which is what I found as being the key difference that creates the performance gap (further down this thread)? Also, wouldn't `--threads 8` change the performance numbers by factor 8 versus the plain `ls` and `rsync` that I did? Would you mind running those commands directly/plainly on your cluster to
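One way to check whether a given tool interleaves getdents() and lstat() as described here is to trace it. A minimal sketch (syscall names vary by platform; on x86-64 Linux, ls typically issues newfstatat rather than lstat):

    # Print directory reads and stat calls in the order they are issued
    strace -f -e trace=getdents64,lstat,newfstatat ls -l /mnt/gluster/dir 2>&1 | less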
2017 Sep 17
3
Confusing lstat() performance
On 17/09/17 18:03, Niklas Hambüchen wrote: > So far the only difference between `ls` and `bup index` I could observe > is that `bup index` chdir()s into the directory to index, ls doesn't. > > But when I `cd` into the dir and run `ls` without directory argument, it > is still much faster than bup index for each stat(). Hmm, bup uses the fchdir() syscall to go into the target