
Displaying 20 results from an estimated 10000 matches similar to: "GFS performance under heavy traffic"

2019 Dec 24
1
GFS performance under heavy traffic
Hi David, On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote: > > Hello, > > In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node? It makes sense, as no data is being generated towards
2019 Dec 27
0
GFS performance under heavy traffic
Hi David, Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first. Also, the gluster client should remount in order to bump the gluster op-version. What kind of workload do you have? I'm asking as there are predefined (and recommended) settings located at /var/lib/glusterd/groups . You
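As a rough sketch of what is described above (the volume name 'myvol' and the 'virt' group are illustrative, not from the thread):
gluster volume get all cluster.op-version   # check the current cluster op-version
gluster volume set myvol group virt         # apply one of the predefined option groups shipped in /var/lib/glusterd/groups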
2019 Dec 20
1
GFS performance under heavy traffic
Hi David, Also consider using the mount option to specify a backup server via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases). That way, when the primary is lost, your client can reach a backup one without disruption. P.S.: Client may 'hang' - if the primary server got
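A minimal sketch of that mount option, with the server and volume names assumed for illustration:
mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/gv0 /mnt/gv0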
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K     /data/glusterfs/gv1/brick1/brick/mytute
18M     /data/glusterfs/gv1/brick1/brick/.shard
0
2023 Feb 14
1
File\Directory not healing
I guess you didn't receive my last e-mail. Use getfattr and identify whether the gfids mismatch. If yes, move away the mismatched one. For a dir to heal, you have to fix all files inside it before it can be healed. Best Regards, Strahil Nikolov On Tuesday, 14 February 2023 at 14:04:31 GMT+2, David Dolan <daithidolan at gmail.com> wrote: I've touched the directory one
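A small sketch of the gfid check described above (brick path and file name are assumptions); the trusted.gfid value should be identical on every brick:
getfattr -d -m . -e hex /data/brick1/brick/path/to/file   # run on each node and compare trusted.gfid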
2023 Feb 14
1
File\Directory not healing
I've touched the directory one level above the directory with the I/O issue, since the one above that is the one showing as dirty. It hasn't healed. Should the self heal daemon automatically kick in here? Is there anything else I can do? Thanks David On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: > You can always mount it locally on any of the
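For reference, a sketch of manually prodding the heal (the volume name 'gv0' is assumed, not from the thread):
gluster volume heal gv0        # trigger a heal of pending entries
gluster volume heal gv0 info   # list entries still waiting to heal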
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't the inodes running out, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server:
/dev/sdd1        15G   12G  3.3G  79%
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including many files with names related to quorum bricks already moved to a different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol that should already have been replaced by cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist). Is there something I should check inside the volfiles? Diego Il
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like:
xfs_growfs -m 80 /path/to/brick
That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
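A rough sketch of the before/after check suggested here, using the brick path from this thread:
xfs_info /data/glusterfs/gv1/brick1 | grep imaxpct   # current inode percentage
df -i /data/glusterfs/gv1/brick1                     # inode usage before
xfs_growfs -m 80 /data/glusterfs/gv1/brick1          # raise imaxpct to 80
df -i /data/glusterfs/gv1/brick1                     # inode headroom after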
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, On 03/19/2018 03:42 PM, TomK wrote: > On 3/19/2018 5:42 AM, Ondrej Valousek wrote: > Removing NFS or NFS Ganesha from the equation, not very impressed on my > own setup either. For the writes it's doing, that's a lot of CPU usage > in top. Seems bottlenecked via a single execution core somewhere trying > to facilitate reads/writes to the other bricks. > >
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile? Best Regards, Strahil Nikolov On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote: In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
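A minimal sketch of that check, using the volume name from the thread and default glusterd paths:
ls /var/lib/glusterd/vols/cluster_data/   # volfiles glusterd generated for the volume
gluster volume info cluster_data          # the bricks gluster currently expects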
2020 Oct 30
3
Multiple IP addresses and using same IP for outbound calls as inbound
Why not use OpenSIPS/Kamailio in between? Where you want 1.1.1.1 you pass it along as-is. Where you want 2.2.2.2 you change the SDP in OpenSIPS/Kamailio. On Thu, Oct 29, 2020 at 20:44 David Cunningham <dcunningham at voisonics.com> wrote: > Hello, > > Does anyone know a way with chan_sip to tell Asterisk to use a specific IP > address for its end of the communication for a specific
2023 Apr 23
1
How to configure?
After a lot of tests and unsuccessful searching, I decided to start from scratch: I'm going to ditch the old volume and create a new one. I have 3 servers with 30 12TB disks each. Since I'm going to start a new volume, could it be better to group disks in 10 3-disk (or 6 5-disk) RAID-0 volumes to reduce the number of bricks? Redundancy would be given by replica 2 (still undecided
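For scale, the arithmetic behind that choice: 3 servers x 30 disks = 90 bricks as-is, versus 3 x 10 = 30 bricks with 3-disk RAID-0 groups, or 3 x 6 = 18 bricks with 5-disk groups.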
2020 Oct 23
2
Multiple IP addresses and using same IP for outbound calls as inbound
OK, thank you George. On Sat, 24 Oct 2020 at 03:16, George Joseph <gjoseph at digium.com> wrote: > > > On Thu, Oct 22, 2020 at 4:13 PM David Cunningham < > dcunningham at voisonics.com> wrote: > >> Hi George, >> >> Thank you for the response. I'm a little unclear on what you mean by a >> transport. We're using chan_sip, not pjsip.
2023 Jul 04
1
remove_me files building up
Hi, Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =
2023 Feb 24
1
Big problems after update to 9.6
Hi David, It seems like a network issue to me, as it's unable to connect to the other node and is getting a timeout. A few things you can check:
* Check the /etc/hosts file on both servers and make sure it has the correct IP of the other node.
* Are you binding gluster to any specific IP that changed after your update?
* Check if you can access port 24007 from the other host. If
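A hedged sketch of those checks (the peer hostname is an assumption):
getent hosts gluster-node2   # does the name resolve to the expected IP?
nc -zv gluster-node2 24007   # is the glusterd management port reachable?
gluster peer status          # does glusterd see the peer as connected?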
2023 Apr 18
1
RTP address learning and timing problem
I don't know what happened in that specific output. Your best course of action is to add further logging or step through the logic with all of the knowledge you have of the RTP streams to understand what is happening. On Mon, Apr 17, 2023 at 8:52 PM David Cunningham <dcunningham at voisonics.com> wrote: > Hi Joshua, > > Thank you for that. From the code it kind of looks like
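One way to get that extra visibility, as a sketch using standard Asterisk CLI commands:
asterisk -rx "rtp set debug on"     # log each RTP packet with its source address/port
asterisk -rx "core set verbose 4"   # verbosity 4 makes the strict-RTP learning messages visible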
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it. Like this : # getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e # file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e trusted.gfid=0x00462be83e6149318bdadae1645c639e trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
2023 Apr 17
1
RTP address learning and timing problem
Hi Joshua, Thank you for that. From the code it kind of looks like STRICT_RTP_LEARN_TIMEOUT is a minimum, not a maximum:
if (!ast_sockaddr_isnull(&rtp->strict_rtp_address)
    && STRICT_RTP_LEARN_TIMEOUT < ast_tvdiff_ms(ast_tvnow(), rtp->rtp_source_learn.start)) {
        ast_verb(4, "%p -- Strict RTP learning complete - Locking on source address %s\n",
Our call shows: #