similar to: small files performance

Displaying 20 results from an estimated 3000 matches similar to: "small files performance"

2017 Jul 13
2
Rebalance task fails
Hi Nithya, I see index in this context: [2017-07-07 10:07:18.230202] E [MSGID: 106062] [glusterd-utils.c:7997:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index. I wonder if there is anything I can do to fix it. I was trying to strace the gluster process but still have no clue what exactly a gluster index is. Best regards, Szymon Miotk On Thu, Jul 13, 2017 at 10:12 AM, Nithya
2017 Jul 13
0
Rebalance task fails
Hi Szymon, I have received the files and will take a look and get back to you. In what context are you seeing index? Thanks, Nithya On 11 July 2017 at 01:15, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hi Nithya, > > the files were sent to priv to avoid spamming the list with large > attachments. > Could someone explain what index is in Gluster? > Unfortunately
2017 Jul 10
2
Rebalance task fails
Hi Nithya, the files were sent to priv to avoid spamming the list with large attachments. Could someone explain what index is in Gluster? Unfortunately 'index' is a popular word, so googling is not very helpful. Best regards, Szymon Miotk On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few minutes. SIGTERM, on the other hand, causes a crash, but this time it is not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. -ps On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I currently only have a Windows 2012 R2 server VM in testing on top of > the gluster storage,
2017 Jul 09
0
Rebalance task fails
On 7 July 2017 at 15:42, Szymon Miotk <szymon.miotk at gmail.com> wrote: > Hello everyone, > > > I have a problem rebalancing a Gluster volume. > The Gluster version is 3.7.3. > My 1x3 replicated volume became full, so I've added three more bricks > to make it 2x3 and wanted to rebalance. > But every time I start rebalancing, it fails immediately. > Rebooting Gluster
2018 May 07
0
arbiter node on client?
On Sun, May 06, 2018 at 11:15:32AM +0000, Gandalf Corvotempesta wrote: > Is it possible to add an arbiter node on the client? I've been running in that configuration for a couple of months now with no problems. I have 6 data + 3 arbiter bricks hosting VM disk images, and all three of my arbiter bricks are on one of the kvm hosts. > Can I use multiple arbiters for the same volume? For example,
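For reference, a minimal sketch of the kind of layout described above (three replica sets of 2 data bricks + 1 arbiter, with all arbiter bricks on a kvm host that is also a client); host names, brick paths and the volume name are illustrative, not taken from the thread:

    # distributed-replicated volume: 3 replica sets, each with 2 data bricks + 1 arbiter
    # every third brick in the list becomes the arbiter of its set, placed on kvm1
    gluster volume create vmstore replica 3 arbiter 1 \
        srv1:/bricks/b1 srv2:/bricks/b1 kvm1:/bricks/arb1 \
        srv1:/bricks/b2 srv2:/bricks/b2 kvm1:/bricks/arb2 \
        srv1:/bricks/b3 srv2:/bricks/b3 kvm1:/bricks/arb3
    gluster volume start vmstore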
2017 Sep 08
0
GlusterFS as virtual machine storage
I currently only have a Windows 2012 R2 server VM in testing on top of the gluster storage, so I will have to take some time to provision a couple of Linux VMs with both ext4 and XFS to see what happens on those. The Windows server VM is OK with killall glusterfsd, but when the 42-second timeout goes into effect, it gets paused and I have to go into RHEVM to un-pause it. Diego On Fri, Sep 8, 2017
2017 Sep 08
2
GlusterFS as virtual machine storage
I would prefer the behavior were different from what it is, i.e. I/O stopping. The argument I heard for the long 42-second timeout was that the MTBF of a server is high, and that the client reconnection operation is *costly*. Those were arguments for *not* lowering the ping timeout value from 42 seconds. I think it was also mentioned that low ping timeout settings could lead to high CPU loads with many
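The 42-second window discussed here is GlusterFS's network.ping-timeout. A hedged sketch of how it can be inspected and lowered, with "myvol" as a placeholder volume name and the trade-off mentioned above (reconnection cost, CPU load) still applying:

    # show the current value (the default is 42 seconds)
    gluster volume get myvol network.ping-timeout
    # lower it, accepting faster failover at the price of more expensive reconnects
    gluster volume set myvol network.ping-timeout 10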
2017 Oct 10
0
small files performance
I just tried setting:
    performance.parallel-readdir on
    features.cache-invalidation on
    features.cache-invalidation-timeout 600
    performance.stat-prefetch
    performance.cache-invalidation
    performance.md-cache-timeout 600
    network.inode-lru-limit 50000
    performance.cache-invalidation on
and clients could not see their files with ls when accessing via a fuse mount. The files and directories were there,
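For reference, a sketch of how options like those would normally be applied with the gluster CLI; "myvol" is a placeholder, and, as the report above suggests, performance.parallel-readdir had issues on some releases, so it is worth testing on a non-production volume first:

    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600
    gluster volume set myvol network.inode-lru-limit 50000
    gluster volume set myvol performance.parallel-readdir on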
2017 Jul 07
2
Rebalance task fails
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help. # gluster volume rebalance gsae_artifactory_cluster_storage start volume rebalance:
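For context, a sketch of the expand-and-rebalance sequence being attempted in that thread; the volume name comes from the post, while the host and brick names are placeholders:

    # grow the 1x3 replicated volume to 2x3 by adding a second replica set
    gluster volume add-brick gsae_artifactory_cluster_storage replica 3 \
        srv4:/bricks/b1 srv5:/bricks/b1 srv6:/bricks/b1
    # then spread the existing data onto the new bricks and watch progress
    gluster volume rebalance gsae_artifactory_cluster_storage start
    gluster volume rebalance gsae_artifactory_cluster_storage status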
2003 Feb 06
2
Strange routing limitations and workaroud
Hi! I have some strange problem with routing load balancing. I cannot get the full speed from my ISPs until I get some big files from a nearby ftp server. I have a server with one connection to the internal network and 3 to ISPs:
           __________
          |      eth1|---- ISP1
          |          |
internal--|eth0  eth2|---- ISP2
net       |          |
(~300     |      eth3|---- ISP3
 hosts
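Although the quoted message is cut off, the usual iproute2 approach to balancing outbound traffic over several uplinks is a multipath default route; a sketch with made-up gateway addresses (192.0.2.1, 198.51.100.1 and 203.0.113.1 are documentation ranges, not from the post):

    # one nexthop per ISP; equal weights split new connections roughly evenly
    ip route replace default scope global \
        nexthop via 192.0.2.1    dev eth1 weight 1 \
        nexthop via 198.51.100.1 dev eth2 weight 1 \
        nexthop via 203.0.113.1  dev eth3 weight 1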
2017 Sep 08
0
GlusterFS as virtual machine storage
2017-09-08 14:11 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few > minutes. SIGTERM, on the other hand, causes a crash, but this time it is > not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. > -ps So, it seems to be resilient to server crashes but not to a server shutdown :)
2017 Sep 08
0
GlusterFS as virtual machine storage
On Fri, Sep 8, 2017 at 12:48 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: > I think this should be considered a bug. > If you have a server crash, the glusterfsd process obviously doesn't exit > properly, and thus this could lead to an I/O stop? I agree with you completely on this.
2017 Sep 23
1
EC 1+2
Already read that. It seems that I have to use a multiple of 512, so 512*(3-2) is 512. Seems fine. On 23 Sep 2017 at 5:00 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote: > Hi > > Take a look at this link (under "Optimal volumes"), for the Erasure Coded > volume optimal configuration > > http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/ >
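The arithmetic being referenced is the "optimal write size" rule for dispersed volumes, 512 * (#bricks - redundancy) bytes; the second line below is only an extra illustration for a common 4+2 geometry, not something discussed in the thread:

    # 3 bricks with redundancy 2 (the 1+2 layout above)
    echo $((512 * (3 - 2)))   # 512
    # 6 bricks with redundancy 2 (a 4+2 layout)
    echo $((512 * (6 - 2)))   # 2048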
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > On 3 October 2017 at 13:27, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Any update about multiple bugs regarding data corruptions with >> sharding enabled ? >> >> Is 3.12.1 ready to be used in production? >> > >
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > The init.d/systemd script doesn't kill gluster automatically on > reboot/shutdown? > > It sounds less like an issue with how it's shut down and more like an issue with how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node being shut down just fine as long as
2016 Oct 27
4
Server migration
On 27 Oct 2016, at 15:29, Tanstaafl <tanstaafl at libertytrek.org> wrote: > > On 10/26/2016 2:38 AM, Gandalf Corvotempesta > <gandalf.corvotempesta at gmail.com> wrote: >> This is much easier than dovecot replication as I can start immediately with >> no need to upgrade the old server >> >> my only question is: how to manage the email received on the
2018 May 06
3
arbiter node on client?
Is it possible to add an arbiter node on the client? Let's assume a Gluster storage made with 2 storage servers. This is prone to split-brain. An arbiter node can be added, but can I put the arbiter on one of the clients? Can I use multiple arbiters for the same volume? For example, one arbiter on each client.
2017 Oct 04
0
data corruption - any update?
Just so I know: is it correct to assume that this corruption issue is ONLY involved if you are doing a rebalance with sharding enabled? So if I am not doing a rebalance, I should be fine? -bill On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > > > On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran > <nbalacha at redhat.com <mailto:nbalacha at redhat.com>> wrote:
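A hedged sketch of how to check whether the two ingredients discussed above (sharding and an active rebalance) are present on a volume; "myvol" is a placeholder:

    # "on" means sharding is enabled on the volume
    gluster volume get myvol features.shard
    # shows whether a rebalance is currently running and its progress
    gluster volume rebalance myvol status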
2017 Jun 30
0
How to shutdown a node properly ?
Yes, but why does killing gluster notify all clients while a graceful shutdown doesn't? I think this is a bug: if I'm shutting down a server, it's obvious that all clients should stop connecting to it.... On 30 Jun 2017 at 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when i use the
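The distinction behind this thread is that stopping the management daemon does not stop the brick processes, while killing the bricks is what makes clients fail over immediately. A sketch assuming a systemd-based install; the helper script path is the one shipped by many GlusterFS packages and may differ on your distribution:

    # stopping glusterd alone leaves the glusterfsd brick processes running
    systemctl stop glusterd
    # killing the brick processes is what notifies clients right away
    killall glusterfsd
    # many packages ship a helper that stops everything in the right order
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh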