Displaying 20 results from an estimated 22 matches for "logbsiz".
Did you mean: logbsize
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
...: Brick nybaknode9.example.net:/lvbackups/brick
TCP Port : 60039
RDMA Port : 0
Online : Y
Pid : 1664
File System : xfs
Device : /dev/mapper/vgbackups-lvbackups
Mount Options :
rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size : 512
Disk Space Free : 6.1TB
Total Disk Space : 29.0TB
Inode Count : 3108974976
Free Inodes : 3108881513
------------------------------------------------------------------------------
Brick : Brick n...
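The brick detail above appears to come from "gluster volume status <volname> detail", and it shows plenty of free space and free inodes, which is what makes the ENOSPC error puzzling. A minimal sketch of checks one could run directly on the brick host to cross-check those numbers (the volume name "backups" is only a placeholder; the brick path is taken from the excerpt):

    # Sketch only: cross-check space, inodes and XFS layout on the brick host.
    gluster volume status backups detail   # "backups" is a placeholder volume name
    df -h /lvbackups/brick                 # free blocks
    df -i /lvbackups/brick                 # free inodes (ENOSPC can also mean no inodes left)
    xfs_info /lvbackups/brick              # log size, sunit/swidth, allocation group layout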
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
...                  : Brick nybaknode9.example.net:/lvbackups/brick
TCP Port             : 60039
RDMA Port            : 0
Online               : Y
Pid                  : 1664
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        :
rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 6.1TB
Total Disk Space     : 29.0TB
Inode Count          : 3108974976
Free Inodes          : 3108881513
------------------------------------------------------------------------------
Brick                : Brick nyb...
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
...Brick nybaknode9.example.net:/lvbackups/brick
TCP Port : 60039
RDMA Port : 0
Online : Y
Pid : 1664
File System : xfs
Device : /dev/mapper/vgbackups-lvbackups
Mount Options :
rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size : 512
Disk Space Free : 6.1TB
Total Disk Space : 29.0TB
Inode Count : 3108974976
Free Inodes : 3108881513
------------------------------------------------------------------------------
Brick :...
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
...58448
RDMA Port : 0
Online : Y
Pid : 1062218
File System : xfs
Device : /dev/mapper/sde1enc
Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size : 512
Disk Space Free : 3.6TB
Total Disk Space : 3.6TB
Inode Count : 390700096
Free Inodes : 390699660
------------------------------------------------------------------...
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
...                  : 58448
RDMA Port            : 0
Online               : Y
Pid                  : 1062218
File System          : xfs
Device               : /dev/mapper/sde1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.6TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699660
-------------------------------------------------------------------------...
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
...: Brick gluster1.linova.de:/glusterfs/sde1enc/brick
TCP Port : 58448
RDMA Port : 0
Online : Y
Pid : 1062218
File System : xfs
Device : /dev/mapper/sde1enc
Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size : 512
Disk Space Free : 3.6TB
Total Disk Space : 3.6TB
Inode Count : 390700096
Free Inodes : 390699660
------------------------------------------------------------------------------
Brick : Brick gluster2.linova.de:/gluster...
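For the qcow2-on-gluster threads above, here is a minimal sketch of how an image is typically placed on a FUSE-mounted volume (the volume name "gv0", mount point, and image size are assumptions; only the host name comes from the excerpt):

    mount -t glusterfs gluster1.linova.de:/gv0 /mnt/gv0    # FUSE mount of the volume
    qemu-img create -f qcow2 /mnt/gv0/images/vm1.qcow2 50G
    qemu-img info /mnt/gv0/images/vm1.qcow2                # verify format and virtual size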
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
...:/glusterfs/sde1enc/brick
> TCP Port : 58448
> RDMA Port : 0
> Online : Y
> Pid : 1062218
> File System : xfs
> Device : /dev/mapper/sde1enc
> Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size : 512
> Disk Space Free : 3.6TB
> Total Disk Space : 3.6TB
> Inode Count : 390700096
> Free Inodes : 390699660
> ------------------------------------------------------------------------------
> Brick...
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
...glusterfs/sde1enc/brick
> TCP Port : 58448
> RDMA Port : 0
> Online : Y
> Pid : 1062218
> File System : xfs
> Device : /dev/mapper/sde1enc
> Mount Options :
> rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> Inode Size : 512
> Disk Space Free : 3.6TB
> Total Disk Space : 3.6TB
> Inode Count : 390700096
> Free Inodes : 390699660
>
> ------------------------------------------------------------------------------
> Brick...
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
...Port : 58448
> > RDMA Port : 0
> > Online : Y
> > Pid : 1062218
> > File System : xfs
> > Device : /dev/mapper/sde1enc
> > Mount Options :
> rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size : 512
> > Disk Space Free : 3.6TB
> > Total Disk Space : 3.6TB
> > Inode Count : 390700096
> > Free Inodes : 390699660
> >
> ----------------------------------------------------------------------...
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
...>> > RDMA Port : 0
>> > Online : Y
>> > Pid : 1062218
>> > File System : xfs
>> > Device : /dev/mapper/sde1enc
>> > Mount Options :
>> rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
>> > Inode Size : 512
>> > Disk Space Free : 3.6TB
>> > Total Disk Space : 3.6TB
>> > Inode Count : 390700096
>> > Free Inodes : 390699660
>> >
>> ------------------------------------------...
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
...> RDMA Port       : 0
> > Online           : Y
> > Pid              : 1062218
> > File System      : xfs
> > Device           : /dev/mapper/sde1enc
> > Mount Options    : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
> > Inode Size       : 512
> > Disk Space Free  : 3.6TB
> > Total Disk Space : 3.6TB
> > Inode Count      : 390700096
> > Free Inodes      : 390699660
> >
> --------...
2012 Oct 09
2
Mount options for NFS
...o NFS
access. Even though files are visible in a terminal and can be accessed with
standard shell tools and vi, this software typically complains that the files
are empty or not syntactically correct.
The NFS filesystems in question are 8TB+ XFS filesystems mounted with
"delaylog,inode64,logbsize=32k,logdev=/dev/sda2,nobarrier,quota" options,
and I suspect that inode64 may have to do with the observed behaviour. The
server is running CentOS 6.3 + all patches.
The clients exhibiting the problem are running CentOS 5.4 and CentOS 5.8
x86_64. Interestingly enough, the application (whic...
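The poster suspects inode64: on a filesystem this large, XFS can hand out inode numbers above 2^32, which older 32-bit NFS clients may not handle even when shell tools appear to work. One commonly suggested workaround, not taken from this thread, is to keep inode numbers in the 32-bit range with the inode32 mount option. A hypothetical fstab line on the server (device and mount point are placeholders; the other options mirror the quoted ones):

    # /etc/fstab sketch on the NFS server; device and mount point are made up.
    /dev/sdb1  /export/data  xfs  delaylog,inode32,nobarrier,logbsize=32k,logdev=/dev/sda2,quota  0 0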
2017 Aug 18
4
Problem with softwareraid
...skipping
Booting from a USB stick to rescue my CentOS, everything works: the
md0 device exists and is mounted (rw).
[root at quad usb-rescue]# cat mount | grep '/data'
/dev/mapper/data-store on /mnt/sysimage/store type xfs
(rw,noatime,seclabel,attr2,largeio,nobarrier,inode64,logbufs=8,logbsize=256k,sunit=256,swidth=768,noquota)
/dev/mapper/data-tm on /mnt/sysimage/var/lib/vdr/video type xfs
(rw,noatime,seclabel,attr2,largeio,nobarrier,inode64,logbufs=8,logbsize=256k,sunit=256,swidth=768,noquota)
3rd option: I am booting the installed rescue kernel from disk:
I am getting a md0 device,...
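The sunit/swidth values in the mount options above are given in 512-byte sectors, so sunit=256 is a 128 KiB stripe unit and swidth=768 is 384 KiB, i.e. three data disks. A quick sketch (device paths from the post) to confirm the filesystem geometry still matches the md array:

    xfs_info /mnt/sysimage/store   # sunit/swidth here are reported in filesystem blocks
    mdadm --detail /dev/md0        # chunk size and number of active data devices
    # 256 * 512 bytes = 128 KiB stripe unit; 768 * 512 bytes = 384 KiB width = 3 data disks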
2015 Sep 22
0
Centos 6.6, apparent xfs corruption
James Peltier wrote:
> Do you have any XFS optimizations enabled in /etc/fstab such as logbsize,
> nobarrier, etc.?
None.
> Is the filesystem full? What percentage of the file system is available?
There are 2 xfs filesystems:
/dev/mapper/vg_gries01-LogVol00  3144200  1000428  2143773  32%  /opt/splunk
/dev/mapper/vg_gries00-LogVol00   307068   267001    40067  87%  /opt/splunk/hot
You...
2012 May 03
1
File size diff on local disk vs NFS share
On May 3, 2012, at 3:04 PM, Glenn Cooper wrote:
>>>> I never really paid attention to this but a file on an NFS mount is
>>>> showing 64M in size, but when copying the file to a local drive, it
>>>> shows 2.5MB in size.
>>>>
>>>> My NFS server is hardware RAIDed with a volume stripe size of 128K
>>>> where the volume size is
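A common explanation for that kind of mismatch, though not confirmed in this excerpt, is a sparse file: the apparent size (64M) counts the holes, while the copy only needs the 2.5MB actually allocated. A small sketch to compare the two views of a file:

    ls -lh somefile                    # apparent size (st_size)
    du -h somefile                     # blocks actually allocated on disk
    du -h --apparent-size somefile     # same number ls reports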
2013 Dec 10
0
Re: gentoo linux, problem starting vm's when cache=none
...d argument" it usually
> >means that your filesystem is bad and does not support O_DIRECT
>
> I use vanilla-kernel 3.10.23 and XFS on the partition where the VM
> .img is lying.
>
> The mount options I use: /raid6 type xfs
> (rw,noatime,nodiratime,largeio,nobarrier,logbsize=256k)
>
> Anything wrong here?
XFS should support O_DIRECT, but possibly one of the mount options is
causing trouble. I don't know enough about XFS to answer for sure.
BTW, please keep your replies on the mailing list...
Regards,
Daniel
--
|: http://berrange.com -o- http://www.fli...
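cache=none makes QEMU open the image with O_DIRECT, which is why the "does not support O_DIRECT" suggestion comes up. A quick way, not from the thread, to test whether the mount accepts direct I/O, using only the path quoted above:

    dd if=/dev/zero of=/raid6/odirect-test bs=4096 count=16 oflag=direct \
        && echo "O_DIRECT writes work"
    rm -f /raid6/odirect-test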
2015 Feb 09
0
Error when removing kernel oplock
...t/fs1
read only = No
force create mode = 0744
force directory mode = 0755
guest ok = Yes
hosts allow = 10.0.2.0/24, 10.0.1.0/24
hosts deny = ALL
Underlying file system is XFS:
/dev/dm-52 /export/fs1 xfs rw,sync,noatime,wsync,attr2,discard,inode64,logbsize=64k,sunit=128,swidth=384,noquota 0 0
Any idea what could be the problem?
Thanks
Lev.
---
This email has been checked for viruses by Avast antivirus software.
http://www.avast.com
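The share definition above does not show the oplock setting being discussed. For reference, "kernel oplocks" is the smb.conf parameter involved; a minimal hypothetical fragment for turning it off (only the parameter name is certain, its placement here is an assumption):

    [global]
        kernel oplocks = no    # disable kernel-level oplocks for all shares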
2017 Oct 10
4
ZFS with SSD ZIL vs XFS
Has anyone made a performance comparison between XFS and ZFS with ZIL
on SSD in a gluster environment?
I've tried to compare both on another SDS (LizardFS) and I haven't
seen any tangible performance improvement.
Is gluster different?
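For context on the question: putting the ZIL on an SSD means adding a separate log (SLOG) device to the pool. A minimal sketch with made-up pool and device names:

    zpool add tank log /dev/disk/by-id/ata-SOME_SSD-part1   # dedicated intent-log device
    zpool status tank                                        # SSD shows up under "logs"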
2013 Dec 10
2
gentoo linux, problem starting vm's when cache=none
Hello mailing list,
on a Gentoo system with qemu-1.6.1, libvirt 1.1.4, libvirt-glib-0.1.7,
virt-manager 0.10.0-r1:
when I set "cache=none" on a virtual machine in the disk menu, the machine
fails to start with:
<<
Error starting domain: internal error: process exited while connecting to
the monitor: qemu-system-x86_64: -drive
2015 Sep 21
2
Centos 6.6, apparent xfs corruption
Hi all -
After several months of worry-free operation, we received the following
kernel messages about an xfs filesystem running under CentOS 6.6. The
proximate causes appear to be "Internal error xfs_trans_cancel" and
"Corruption of in-memory data detected. Shutting down filesystem". The
filesystem is back up, mounted, and appears to be working OK underlying a
Splunk datastore.
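After an xfs_trans_cancel shutdown the usual next step, not stated in this excerpt, is to unmount the affected filesystem and run xfs_repair in no-modify mode to see what it reports. The device name below is only an example taken from the df output quoted earlier; substitute whichever LV logged the error:

    umount /opt/splunk
    xfs_repair -n /dev/mapper/vg_gries01-LogVol00   # -n: report problems, change nothing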