similar to: xfs quota weirdness

Displaying 20 results from an estimated 400 matches similar to: "xfs quota weirdness"

2017 Sep 20
4
xfs not getting it right?
Hi, xfs is supposed to detect the layout of an md-RAID device when creating the file system, but it doesn't seem to do that:

# cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sde[1] sdd[0]
      499976512 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

# mkfs.xfs /dev/md10p2
meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 blks
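[Editor's note: mkfs.xfs only picks up stripe geometry when the md device actually has one; a RAID1 mirror has no stripe unit/width, so sunit/swidth staying at 0 is expected here. For striped arrays the geometry can also be set by hand. A minimal sketch, assuming a hypothetical 4-disk RAID10 with a 512k chunk:

    # su = chunk size, sw = number of data-bearing stripes (hypothetical values)
    mkfs.xfs -d su=512k,sw=2 /dev/md10p2
]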
2017 Sep 20
0
xfs not getting it right?
On 20 September 2017 at 10:47, hw <hw at gc-24.de> wrote:
>
> Hi,
>
> xfs is supposed to detect the layout of an md-RAID device when creating the
> file system, but it doesn't seem to do that:
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md10 : active raid1 sde[1] sdd[0]
>       499976512 blocks super 1.2 [2/2] [UU]
>       bitmap: 0/4 pages [0KB],
2017 Sep 20
1
xfs not getting it right?
Stephen John Smoogen wrote:
> On 20 September 2017 at 10:47, hw <hw at gc-24.de> wrote:
>>
>> Hi,
>>
>> xfs is supposed to detect the layout of an md-RAID device when creating the
>> file system, but it doesn't seem to do that:
>>
>> # cat /proc/mdstat
>> Personalities : [raid1]
>> md10 : active raid1 sde[1] sdd[0]
>>
2013 Nov 23
1
windows can not see the content of samba shared folder
Hello, I want to access a shared folder on my Linux machine from a Windows machine. The smb.conf has this entry:

[samba_share]
        comment = QEMU share place
        path = /media/samba_share
        valid users = mahmood vb
        public = no
        writable = yes
        printable = no
        create mask = 0777

Then I added a user to samba with "smbpasswd mahmood". The folder mask
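[Editor's note: a hedged aside: adding a brand-new Samba user normally needs smbpasswd's -a flag (a plain "smbpasswd mahmood" only changes the password of an existing entry), and the user must also exist as a system account. A minimal check sequence, assuming the share definition above:

    useradd -M -s /sbin/nologin mahmood   # only if the system user does not exist yet
    smbpasswd -a mahmood                  # -a adds the user to Samba's password database
    testparm -s                           # sanity-check smb.conf syntax
    smbclient -L localhost -U mahmood     # list shares as that user

Filesystem permissions on /media/samba_share also have to allow the user in.]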
2016 Aug 05
2
How to modify user fields with a command line ?
2016-08-04 17:49 GMT+04:00 Rowland Penny <rpenny at samba.org>:
> On Thu, 4 Aug 2016 16:44:34 +0400
> henri transfert <hb.transfert at gmail.com> wrote:
>
> > Hi,
> >
> > In RSAT, we can see that there are some extra fields for user
> > accounts, like description, office, phone number or email address.
> >
> > I already have hundreds of
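[Editor's note: for scripted edits of those RSAT-visible attributes on an AD DC, one common route is ldbmodify with an LDIF file; a minimal sketch with a hypothetical DN (the sam.ldb path varies by distribution):

    cat > mod.ldif <<'EOF'
    dn: CN=Jane Doe,CN=Users,DC=example,DC=com
    changetype: modify
    replace: telephoneNumber
    telephoneNumber: +1 555 0100
    EOF
    ldbmodify -H /var/lib/samba/private/sam.ldb mod.ldif
]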
2023 Jul 04
1
remove_me files building up
Hi, thanks for your response; please find the xfs_info for each brick on the arbiter below:

root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1    isize=512    agcount=31, agsize=131007 blks
         =             sectsz=512   attr=2, projid32bit=1
         =             crc=1        finobt=1, sparse=1, rmapbt=0
         =
2007 Aug 18
6
Help with backups
I've got a Red Hat 5 server running Samba, and two dual-boot CentOS 5 workstations. Until we get a better backup strategy, I'm backing up the workstations to the server via mounting a shared samba drive to /mnt. Trying "tar cvf /mnt/samba_share/backup.tar /*" eventually yields backing up /mnt itself, which produces an unwanted loop, including /mnt/samba_share. I looked at tar with --exclude /mnt
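[Editor's note: one way to avoid the loop without enumerating excludes is GNU tar's --one-file-system, which refuses to descend into other mounts such as /mnt; a hedged sketch using the backup path from the post:

    # stay on the root filesystem; /mnt (a separate mount) is skipped automatically
    cd / && tar cvf /mnt/samba_share/backup.tar --one-file-system .

If --exclude is used instead, note that tar matches against the stored member names, so with a relative archive the pattern is ./mnt rather than /mnt.]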
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses "imaxpct=25", which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like:

xfs_growfs -m 80 /path/to/brick

That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
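[Editor's note: to see the effect of the imaxpct change, compare the inode ceiling before and after; a minimal sketch using a brick path from this thread:

    df -i /data/glusterfs/gv1/brick1            # note Inodes/IFree before
    xfs_growfs -m 80 /data/glusterfs/gv1/brick1 # raise imaxpct to 80%
    df -i /data/glusterfs/gv1/brick1            # the inode ceiling should rise
]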
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server:

/dev/sdd1 15G 12G 3.3G 79%
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird, as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:

du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick

If indeed the shards are taking space - that is a really strange situation. From which version
2014 Jul 01
3
corruption of in-memory data detected (xfs)
Hi All, I am having an issue with an XFS filesystem shutting down under high load with very many small files. Basically, I have around 3.5-4 million files on this filesystem. New files are being written to the FS all the time, until I get to 9-11 million small files (35k on average). At some point I get the following in dmesg:

[2870477.695512] Filesystem "sda5": XFS internal error
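[Editor's note: when XFS shuts itself down like this, the usual recovery path (not a fix for the underlying cause) is an unmount plus xfs_repair; a hedged sketch for the filesystem named in the error:

    umount /dev/sda5
    xfs_repair -n /dev/sda5   # -n: inspect only, report problems without writing
    xfs_repair /dev/sda5      # actual repair; use -L only as a last resort (it zeroes the log)
]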
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:

root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K     /data/glusterfs/gv1/brick1/brick/mytute
18M     /data/glusterfs/gv1/brick1/brick/.shard
0
2012 Oct 09
2
Mount options for NFS
We're experiencing problems with some legacy software when it comes to NFS access. Even though files are visible in a terminal and can be accessed with standard shell tools and vi, this software typically complains that the files are empty or not syntactically correct. The NFS filesystems in question are 8TB+ XFS filesystems mounted with
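[Editor's note: "files look empty to one application but fine in a shell" on NFS often points at attribute caching; a hedged experiment is to mount with the cache disabled and see if the legacy software behaves (server and export names hypothetical):

    # /etc/fstab - noac disables attribute caching; expect a performance cost
    server:/export  /mnt/data  nfs  vers=3,tcp,hard,noac  0 0
]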
2014 Jul 16
1
anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery
I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks. Partitioning is LVM over RAID. If I use "logvol --grow" I get "ValueError: not enough free space in volume group". The only workaround I can find is to add --maxsize=XXX, where XXX is at least 640MB less than available (10 extents, or 320MB, per created logical volume). The following snippet is failing with
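[Editor's note: the workaround described above, expressed as a kickstart fragment; sizes are hypothetical, with --maxsize set comfortably below the free space in the volume group:

    part raid.01 --size=1 --grow --ondisk=sda
    part raid.02 --size=1 --grow --ondisk=sdb
    raid pv.01 --level=1 --device=md0 raid.01 raid.02
    volgroup vg0 pv.01
    logvol / --vgname=vg0 --name=root --fstype=xfs --grow --maxsize=15000
]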
2013 Oct 09
1
XFS quotas not working at all (seemingly)
Hi All, I have a very strange problem that I'm unable to pinpoint at the moment. For some reason I am simply unable to get xfs_quota to report correctly on a freshly installed, fully patched CentOS 6 box. I have specified all the same options as on another machine which *is* reporting quota:

LABEL=TEST /exports/TEST xfs inode64,nobarrier,delaylog,usrquota,grpquota 0 0

xfs_quota -xc
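[Editor's note: one detail worth checking here: XFS quota mount options cannot be enabled via "mount -o remount"; the filesystem has to be fully unmounted and mounted again before xfs_quota will report anything. A minimal check sequence:

    umount /exports/TEST
    mount /exports/TEST                      # picks up usrquota,grpquota from fstab
    xfs_quota -xc 'report -h' /exports/TEST  # per-user/group usage report
]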
2018 Mar 18
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/18/2018 6:13 PM, Sam McLeod wrote: Even your NFS transfers are 12.5 MB per second or less.
1) Did you use fdisk and LVM under that XFS filesystem?
2) Did you benchmark the XFS with something like bonnie++? (There are probably newer benchmark suites now.)
3) Did you benchmark your network transfer speeds? Perhaps your NIC negotiated a lower speed.
4) I've done XFS tuning
2014 Jul 26
2
Concern: rsync failing to find some attributes in a file transfer?
I have a regular script I run to make static "snapshots" of my home file system, with each being all the files that changed in the past 24 hours. I just moved my home partition to a new hard disk with more space. I ran the util and have gotten odd results each time I ran it. This one bothers me, as I'm not sure why the attrs would be missing. How can the names be transferred but no
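[Editor's note: if the attributes in question are ACLs or extended attributes, rsync's plain -a does not copy them; a hedged sketch with hypothetical paths:

    rsync -aAXv /home/ /backup/home/   # -A preserves ACLs, -X preserves xattrs
]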
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, I've done some similar tests and experience similar performance issues (see my 'gluster for home directories?' thread on the list). If I read your mail correctly, you are comparing an NFS mount of the brick disk against a gluster mount (using the fuse client)? Which options do you have set on the NFS export (sync or async)? From my tests, I concluded that the issue was not
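[Editor's note: the sync/async distinction being asked about lives in the NFS server's export definition; a hypothetical /etc/exports line for illustration:

    /srv/export  client.example.com(rw,sync,no_subtree_check)   # "async" trades safety for speed
]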
2018 Mar 18
4
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Howdy all, We're experiencing terrible small-file performance when copying or moving files on gluster clients. In the example below, Gluster is taking ~6 minutes to copy 128MB / 21,000 files sideways on a client; doing the same thing on NFS (which I know is a totally different solution etc. etc.) takes approximately 10-15 seconds(!). Any advice for tuning the volume or XFS settings would be
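[Editor's note: small-file workloads on Gluster are often helped by metadata-caching volume options; a hedged starting point, with a hypothetical volume name (verify option names against your Gluster version):

    gluster volume set gv0 features.cache-invalidation on
    gluster volume set gv0 performance.cache-invalidation on
    gluster volume set gv0 performance.parallel-readdir on
    gluster volume set gv0 network.inode-lru-limit 200000
]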
2013 May 02
7
XFS vs EXT4 for mail storage
Hello, I'm in the process of finalizing the spec for my new Dovecot VM, and this is the last question I need to address... I've read until I'm just about decided on XFS, but I have no experience with it. I've been using reiserfs on my old box (~8 yrs old now) and never had a problem (knock on wood), but considering its current situation (little to no development support for reasons