
Displaying 20 results from an estimated 1000 matches similar to: "Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)"

2018 Mar 18
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/18/2018 6:13 PM, Sam McLeod wrote: Even your NFS transfers are around 12.5 MB per second or less. 1) Did you use fdisk and LVM under that XFS filesystem? 2) Did you benchmark the XFS with something like bonnie++? (There are probably newer benchmark suites now.) 3) Did you benchmark your network transfer speeds? Perhaps your NIC negotiated a lower speed. 4) I've done XFS tuning
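
A minimal sketch of the checks being suggested above (the interface name and brick path are assumptions):

    ethtool eth0 | grep -i speed                                            # negotiated NIC link speed
    xfs_info /bricks/brick1                                                 # brick filesystem geometry
    dd if=/dev/zero of=/bricks/brick1/ddtest bs=1M count=1024 oflag=direct  # raw sequential write to the brick
    bonnie++ -d /bricks/brick1 -u root                                      # broader read/write/seek benchmark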
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, I've done some similar tests and am experiencing similar performance issues (see my 'gluster for home directories?' thread on the list). If I read your mail correctly, you are comparing an NFS mount of the brick disk against a gluster mount (using the FUSE client)? Which options do you have set on the NFS export (sync or async)? From my tests, I concluded that the issue was not
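
For reference, the sync/async behaviour asked about here is set per export in /etc/exports; a minimal sketch, with the export path and client subnet as assumptions:

    # /etc/exports - pick one; sync is safe, async is faster but can lose data on a crash
    /export/brick  192.168.1.0/24(rw,sync,no_subtree_check)
    #/export/brick  192.168.1.0/24(rw,async,no_subtree_check)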
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, As I posted in my previous emails - glusterfs can never match the small-file/latency performance of NFS (especially an async export). That's inherent in the design; there is nothing you can do about it. Ondrej
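
A quick way to see the small-file latency gap being described is to time a batch of file creations on each mount; a minimal sketch (mount points are assumptions):

    time sh -c 'for i in $(seq 1 1000); do echo x > /mnt/gluster/f.$i; done'   # 1000 small files via the gluster mount
    time sh -c 'for i in $(seq 1 1000); do echo x > /mnt/nfs/f.$i; done'       # same workload via the NFS mount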
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 5:42 AM, Ondrej Valousek wrote: Removing NFS or NFS Ganesha from the equation, I'm not very impressed with my own setup either. For the writes it's doing, that's a lot of CPU usage in top. It seems bottlenecked on a single execution core somewhere, trying to facilitate reads/writes to the other bricks. Writes to the gluster FS from within one of the gluster participating bricks:
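
To confirm a single-core bottleneck of the kind described, per-thread CPU usage of the gluster daemons can be inspected; a minimal sketch (glusterfsd is the usual brick process name):

    top -H -p "$(pgrep -d, glusterfsd)"        # per-thread CPU view of the brick daemons
    pidstat -t -p "$(pgrep -d, glusterfsd)" 5  # per-thread stats every 5 seconds (sysstat package)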
2017 Sep 20
4
xfs not getting it right?
Hi, xfs is supposed to detect the layout of an md-RAID device when creating the file system, but it doesn't seem to do that:

# cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sde[1] sdd[0]
      499976512 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

# mkfs.xfs /dev/md10p2
meta-data=/dev/md10p2            isize=512    agcount=4, agsize=30199892 blks
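
A side note, offered as a general observation rather than from the thread: mkfs.xfs only picks up stripe geometry for striped RAID levels, and a RAID1 mirror has no stripe unit to report. For striped arrays the geometry can also be given explicitly; a minimal sketch (device and values are assumptions):

    mkfs.xfs -d su=64k,sw=4 /dev/md0    # stripe unit 64 KiB, stripe width 4 data disks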
2014 Jul 01
3
corruption of in-memory data detected (xfs)
Hi All, I am having an issue with an XFS filesystem shutting down under high load with very many small files. Basically, I have around 3.5-4 million files on this filesystem. New files are being written to the FS all the time, until I get to 9-11 million small files (35 KB on average). At some point I get the following in dmesg: [2870477.695512] Filesystem "sda5": XFS internal error
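
When XFS shuts down with an internal error like this, the usual recovery path is to unmount and run xfs_repair, starting with a dry run; a sketch using the device named in the message:

    umount /dev/sda5
    xfs_repair -n /dev/sda5   # dry run: report problems without modifying anything
    xfs_repair /dev/sda5      # actual repair once the dry run looks sane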
2015 Aug 06
1
xfs quota weirdness
Hi all, I have a quota problem with XFS (xfsprogs 3.1.7+b1 on Debian GNU/Linux 7 -- wheezy) and samba-4.1.19. If I set a user quota to say 10GB, Windows Explorer reports a 20GB quota of which none is used. If I change the quota to x, Windows Explorer reports 2x space of which none is used. So I assume Samba is somehow getting (albeit incomplete and incorrect) XFS quota info from the operating system. disks
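
For comparison with what Windows Explorer shows, the quota can be set and read back directly with xfs_quota; a minimal sketch (mount point and user name are assumptions):

    xfs_quota -x -c 'limit bsoft=10g bhard=10g someuser' /srv/share   # set a 10GB user quota
    xfs_quota -x -c 'report -h' /srv/share                            # what XFS itself reports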
2023 Jul 03
1
remove_me files building up
Hi, you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick? Best Regards, Strahil Nikolov On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the server's
2023 Jul 04
1
remove_me files building up
Hi, Thanks for your response, please find the xfs_info for each brick on the arbiter below:

root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like: xfs_growfs -m 80 /path/to/brick That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
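
A minimal sketch of the before/after check suggested here, using the brick path from this thread and the 80% example value:

    xfs_info /data/glusterfs/gv1/brick1 | grep imaxpct   # current inode percentage
    df -i /data/glusterfs/gv1/brick1/brick               # inode usage before
    xfs_growfs -m 80 /data/glusterfs/gv1/brick1          # raise imaxpct to 80%
    df -i /data/glusterfs/gv1/brick1/brick               # inode usage after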
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't the bricks running out of inodes, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server: /dev/sdd1 15G 12G 3.3G 79%
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird, as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:

du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick

If indeed the shards are taking space - that is a really strange situation. From which version
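
To narrow down whether the .shard directories account for the space, a per-brick summary helps; a minimal sketch using the paths above:

    du -sh /data/glusterfs/gv1/brick{1,2,3}/brick/.shard   # shard usage per brick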
2007 Nov 19
10
Resize domU block device?
Is there a way for a domU to discover size changes of block devices modified by dom0? To make it clear - if I do an lvresize in dom0 of a logical volume given as a physical disk to a domU, is there a way to use the new size of this device within the domU without a reboot? Thanks Ralf
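
One era-appropriate approach, sketched under the assumption of the classic xm toolstack, phy-backed disks, and a device that is not mounted inside the domU (all names are placeholders): resize the LV in dom0, then detach and re-attach it so the domU sees the new size.

    dom0# lvresize -L +10G /dev/vg0/domu-disk
    dom0# xm block-detach mydomu xvdb
    dom0# xm block-attach mydomu phy:/dev/vg0/domu-disk xvdb w
    domU# blockdev --getsize64 /dev/xvdb    # confirm the new size is visible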
2014 Jul 16
1
anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery
I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks. Partitioning is LVM over RAID. If I use "logvol --grow" I get "ValueError: not enough free space in volume group". The only workaround I can find is to add --maxsize=XXX where XXX is at least 640MB less than available (10 extents, or 320MB, per created logical volume). The following snippet is failing with
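
For context, a sketch of an LVM-over-RAID kickstart layout with the --maxsize workaround applied (sizes and names are assumptions, not the poster's actual snippet):

    # two RAID members, one RAID1 PV, one growing logical volume capped below the VG size
    part raid.01 --size=1 --grow --ondisk=sda
    part raid.02 --size=1 --grow --ondisk=sdb
    raid pv.01 --level=1 --device=md0 raid.01 raid.02
    volgroup vg0 pv.01
    logvol / --vgname=vg0 --name=root --size=1024 --grow --maxsize=14000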
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:

root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K     /data/glusterfs/gv1/brick1/brick/mytute
18M     /data/glusterfs/gv1/brick1/brick/.shard
0
2014 Jul 26
2
Concern: rsync failing to find some attributes in a file transfer?
I have a regular script I run to make static "snapshots" of my home file system, with each being all the files that changed in the past 24 hours. I just moved my home partition to a new hard disk with more space. I ran the utility and have gotten odd results each time I ran it. This one bothers me... as I'm not sure why the attrs would be missing. How can the names be transferred but no
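
Daily "changed files" snapshots of this kind are often built with rsync; a minimal sketch that also carries xattrs, ACLs, and hard links across (paths are assumptions, not the poster's script):

    # changed files are copied, unchanged files become hard links to yesterday's snapshot
    rsync -aHAX --link-dest=/backup/home.yesterday /home/ /backup/home.$(date +%F)/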
2012 Jul 09
1
[PATCH] NEW API: add new api xfs_info
Add xfs_info to show the geometry of the XFS filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- Hi Rich, This patch adds xfs_info and starts the XFS support work. I'd like to add XFS support, like xfs_growfs, xfs_io, xfs_db, xfs_repair, etc. Any thoughts? Thanks, Wanlong Gao

 daemon/Makefile.am | 1 +
 daemon/xfs.c       | 69
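
If the API lands, usage from guestfish would presumably look something like the following sketch (the image name, partition, and the xfs-info command spelling are assumptions based on later libguestfs releases):

    guestfish --ro -a disk.img run : mount /dev/sda1 / : xfs-info /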
2010 Apr 13
2
XFS-filesystem corrupted by defragmentation Was: Performance problems with XFS on Centos 5.4
Before I'd try to defragment my whole filesystem (see attached mail for the whole story) I figured "Let's try it on some file". So I did

> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 101 extents and 1 hole]

Then I defragmented the file

> xfs_fsr /raid/Temp/someDiskimage.iso
extents before:101 after:3 DONE

> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 3
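
Before defragmenting the whole filesystem, overall fragmentation can be gauged first; a minimal sketch (the device name is an assumption):

    xfs_db -c frag -r /dev/md0                 # filesystem-wide fragmentation factor
    xfs_bmap -v /raid/Temp/someDiskimage.iso   # per-file extent map, as used above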
2010 Sep 10
10
DO NOT REPLY [Bug 7670] New: rsync --hard-links fails where ditto succeeds
https://bugzilla.samba.org/show_bug.cgi?id=7670
Summary: rsync --hard-links fails where ditto succeeds
Product: rsync
Version: 3.1.0
Platform: Other
OS/Version: Mac OS X
Status: NEW
Severity: blocker
Priority: P3
Component: core
AssignedTo: wayned at samba.org
ReportedBy: Dave at Yost.com
2017 Sep 20
0
xfs not getting it right?
On 20 September 2017 at 10:47, hw <hw at gc-24.de> wrote:
>
> Hi,
>
> xfs is supposed to detect the layout of an md-RAID device when creating the
> file system, but it doesn't seem to do that:
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md10 : active raid1 sde[1] sdd[0]
>       499976512 blocks super 1.2 [2/2] [UU]
>       bitmap: 0/4 pages [0KB],