Messages similar to: "how to enforce sunit and swidth for root device/partition when installing?"

Displaying 20 results from an estimated 20000 matches similar to: "how to enforce sunit and swidth for root device/partition when installing?"

2023 Jul 04
1
remove_me files building up
Hi, Thanks for your response; please find the xfs_info output for each brick on the arbiter below: root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1 meta-data=/dev/sdc1 isize=512 agcount=31, agsize=131007 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=0 =
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like: xfs_growfs -m 80 /path/to/brick That will allow up to 80% of the filesystem to be used for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
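A minimal sketch of the change suggested above, kept generic with the same placeholder brick path (run per brick host; 80 is just the value from the post):

    # inode usage before the change
    df -i /path/to/brick
    # raise the maximum share of space that may be used for inodes from 25% to 80%
    xfs_growfs -m 80 /path/to/brick
    # the inode totals reported afterwards should be higher
    df -i /path/to/brick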
2015 Aug 06
1
xfs quota weirdness
Hi all, I have a quota problem with xfs (xfsprogs 3.1.7+b1 on Debian GNU/Linux 7 -- wheezy) and samba-4.1.19. If I set a user quota to, say, 10GB, Windows Explorer reports a 20GB quota of which none is used. If I change the quota to x, Windows Explorer reports 2x space of which none is used. So I assume Samba is somehow getting (albeit incomplete and incorrect) xfs quota info from the operating system. disks
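For anyone reproducing this, a rough sketch of setting and checking the XFS side of a user quota directly, outside of Samba (mount point and user name are made up; the filesystem must be mounted with the usrquota option):

    # set a 10GB soft and hard block limit for user 'alice'
    xfs_quota -x -c 'limit bsoft=10g bhard=10g alice' /srv/share
    # report current usage and limits in human-readable form
    xfs_quota -x -c 'report -h' /srv/share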
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server: /dev/sdd1 15G 12G 3.3G 79%
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird, as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01: du -h -x -d 1 /data/glusterfs/gv1/brick1/brick; du -h -x -d 1 /data/glusterfs/gv1/brick3/brick; du -h -x -d 1 /data/glusterfs/gv1/brick2/brick If indeed the shards are taking space - that is a really strange situation. From which version
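For context, the same kind of check written out as a small sketch, comparing the shard and internal metadata directories against what the filesystem reports (brick path taken from this thread; purely illustrative):

    # top-level breakdown of one arbiter brick, staying on this filesystem (-x)
    du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
    # size of the shard and gluster metadata directories specifically
    du -sh /data/glusterfs/gv1/brick1/brick/.shard /data/glusterfs/gv1/brick1/brick/.glusterfs
    # compare against the filesystem's own view
    df -h /data/glusterfs/gv1/brick1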
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Strahil and Gluster users, Yes, I had checked, but I checked again and there is only 1% inode usage, 99% free. Same on every node. Example: [root at nybaknode1 ]# df -i /lvbackups/brick Filesystem Inodes IUsed IFree IUse% Mounted on /dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742 1% /lvbackups [root at nybaknode1 ]# I neglected to clarify in
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and 2 weeks ago we upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands: root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick 2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs 24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings 16K /data/glusterfs/gv1/brick1/brick/mytute 18M /data/glusterfs/gv1/brick1/brick/.shard 0
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and recently, 2 weeks ago
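A hedged sketch of additional checks that are often useful when ENOSPC shows up despite apparently free space (device and mount point are the ones quoted in this thread; the xfs_db step is read-only):

    # block and inode usage as reported by the kernel
    df -h /lvbackups/brick
    df -i /lvbackups/brick
    # read-only summary of XFS free space fragmentation; badly fragmented free
    # space can in some cases produce ENOSPC even when df shows space available
    xfs_db -r -c 'freesp -s' /dev/mapper/vgbackups-lvbackups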
2012 Jul 16
2
[PATCH V4] NEW API: add new api xfs_info
Add xfs_info to show the geometry of the xfs filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- Hi Rich, I got an odd error; can you help me with it or suggest how to debug it? Thanks, Wanlong Gao daemon/Makefile.am | 1 + daemon/xfs.c | 278 +++++++++++++++++++++++++++++++ generator/generator_actions.ml
2014 Jul 01
3
corruption of in-memory data detected (xfs)
Hi All, I am having an issue with an XFS filesystem shutting down under high load with very many small files. Basically, I have around 3.5 - 4 million files on this filesystem. New files are being written to the FS all the time, until I get to 9-11 million small files (35k on average). At some point I get the following in dmesg: [2870477.695512] Filesystem "sda5": XFS internal error
2018 Mar 18
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/18/2018 6:13 PM, Sam McLeod wrote: Even your NFS transfers are 12.5 MB per second or so, or less. 1) Did you use fdisk and LVM under that XFS filesystem? 2) Did you benchmark the XFS with something like bonnie++? (There are probably newer benchmark suites now.) 3) Did you benchmark your network transfer speeds? Perhaps your NIC negotiated a lower speed. 4) I've done XFS tuning
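A rough sketch of the kind of baseline measurements suggested above (paths, sizes, and host names are placeholders; bonnie++ and iperf3 are just two common choices):

    # disk baseline on the XFS filesystem itself; test size should exceed RAM
    bonnie++ -d /mnt/xfs-test -s 16384 -u nobody
    # raw network throughput between client and server
    iperf3 -s                        # on the server
    iperf3 -c server.example.com     # on the client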
2014 Jul 26
2
Concern: rsync failing to find some attributes in a file transfer?
I have a regular script I run to make static "snapshots" of my home file system, with each being all the files that changed in the past 24 hours. I just moved my home partition to a new hard disk with more space. I ran the util and have gotten odd results each time I ran it. This one bothers me... as I'm not sure why the attrs would be missing. How can the names be transferred but no
2012 Mar 16
1
NFS Hanging Under Heavy Load
Hello all, I'm currently experiencing an issue with an NFS server I've built (a Dell R710 with a Dell PERC H800/LSI 2108 and four external disk trays). It's a backup target for Solaris 10, CentOS 5.5 and CentOS 6.2 servers that mount its data volume via NFS. It has two 10gig NICs set up in a layer2+3 bond for one network, and two more 10gig NICs set up in the same way in another
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, I've done some similar tests and experience similar performance issues (see my 'gluster for home directories?' thread on the list). If I read your mail correctly, you are comparing an NFS mount of the brick disk against a gluster mount (using the fuse client)? Which options do you have set on the NFS export (sync or async)? From my tests, I concluded that the issue was not
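To illustrate the sync/async question, an /etc/exports entry might look like the lines below (path and network are placeholders; async only changes when the server acknowledges writes and can make throughput look much better than it really is):

    # synchronous export: writes are committed to stable storage before replying
    /data/brick1  192.168.1.0/24(rw,sync,no_subtree_check)
    # asynchronous export: faster in benchmarks, but risks data loss on a server crash
    # /data/brick1  192.168.1.0/24(rw,async,no_subtree_check)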
2017 Sep 20
0
xfs not getting it right?
On 20 September 2017 at 10:47, hw <hw at gc-24.de> wrote: > > Hi, > > xfs is supposed to detect the layout of an md-RAID device when creating the > file system, but it doesn't seem to do that: > > > # cat /proc/mdstat > Personalities : [raid1] > md10 : active raid1 sde[1] sdd[0] > 499976512 blocks super 1.2 [2/2] [UU] > bitmap: 0/4 pages [0KB],
2017 Sep 20
1
xfs not getting it right?
Stephen John Smoogen wrote: > On 20 September 2017 at 10:47, hw <hw at gc-24.de> wrote: >> >> Hi, >> >> xfs is supposed to detect the layout of an md-RAID device when creating the >> file system, but it doesn't seem to do that: >> >> >> # cat /proc/mdstat >> Personalities : [raid1] >> md10 : active raid1 sde[1] sdd[0] >>
2017 Sep 20
4
xfs not getting it right?
Hi, xfs is supposed to detect the layout of an md-RAID device when creating the file system, but it doesn't seem to do that: # cat /proc/mdstat Personalities : [raid1] md10 : active raid1 sde[1] sdd[0] 499976512 blocks super 1.2 [2/2] [UU] bitmap: 0/4 pages [0KB], 65536KB chunk # mkfs.xfs /dev/md10p2 meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 blks
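One note relevant to the 'sunit and swidth' question this page is about: RAID1 has no stripe geometry, so mkfs.xfs leaving sunit/swidth at their defaults on a mirror like md10 is expected. For a striped array the geometry can be given explicitly; a hedged sketch, with an illustrative device name and values that must be matched to the real chunk size and number of data disks:

    # example: a striped array with 512 KiB chunks and 4 data disks
    mkfs.xfs -d su=512k,sw=4 /dev/md0
    # after mounting, xfs_info on the mount point should report matching sunit/swidth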
2018 Mar 18
4
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Howdy all, We're experiencing terrible small-file performance when copying or moving files on gluster clients. In the example below, Gluster is taking ~6 minutes to copy 128MB / 21,000 files sideways on a client; doing the same thing on NFS (which I know is a totally different solution etc. etc.) takes approximately 10-15 seconds(!). Any advice for tuning the volume or XFS settings would be
2012 Jul 09
1
[PATCH] NEW API: add new api xfs_info
Add xfs_info to show the geometry of the xfs filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- Hi Rich, This patch adds xfs_info and starts the xfs support work. I'd like to add xfs support, like xfs_growfs, xfs_io, xfs_db, xfs_repair, etc. Any thoughts? Thanks, Wanlong Gao daemon/Makefile.am | 1 + daemon/xfs.c | 69