Displaying 20 results from an estimated 61 matches for "nobarrier".
2010 Jan 21
1
/proc/mounts always shows "nobarrier" option for xfs, even when mounted with "barrier"
Ran into a confusing situation today. When I mount an xfs filesystem on
a server running centos 5.4 x86_64 with kernel 2.6.18-164.9.1.el5, the
barrier/nobarrier mount option as displayed in /proc/mounts is always
set to "nobarrier"
Here's an example:
[root at host ~]# mount -o nobarrier /dev/vg1/homexfs /mnt
[root at host ~]# grep xfs /proc/mounts
/dev/vg1/homexfs /mnt xfs rw,attr2,nobarrier,noquota 0 0
[root at host ~]# mount | grep xfs
/d...
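For anyone wanting to reproduce this, a minimal sketch of the same check with barriers requested explicitly (device and mount point reuse the example above):

mount -o barrier /dev/vg1/homexfs /mnt
grep xfs /proc/mounts   # on the affected kernel this still reports "nobarrier"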
2016 Oct 24
3
NFS help
...ulate xfs will not work
> with ext2/3/4, and vice versa.
>
> cat /etc/fstab on the systems, and see what they are. If either is xfs,
> and assuming that the systems are on UPSes, then the fstab which controls
> drive mounting on a system should have, instead of "defaults",
> nobarrier,inode64.
The server is xfs (the client is nfs). The server does have inode64
specified, but not nobarrier.
> Note that the inode64 is relevant if the filesystem is > 2TB.
The file system is 51TB.
> The reason I say this is that when we started rolling out CentOS 7, we tried
> to put o...
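For reference, a hypothetical fstab line matching the advice quoted above (the device name and mount point are made up, not from the thread):

/dev/sdb1  /export/data  xfs  nobarrier,inode64  0 0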
2016 Oct 24
2
NFS help
On 10/24/2016 04:51 AM, mark wrote:
> Absolutely add nobarrier, and see what happens.
Using "nobarrier" might increase overall write throughput, but it
removes an important integrity feature, increasing the risk of
filesystem corruption on power loss. I wouldn't recommend doing that
unless your system is on a UPS, and you've tested and...
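If you do experiment with this, XFS accepts a barrier toggle on a live mount, so reverting to the safe default is a one-liner (a sketch; the mount point is hypothetical):

mount -o remount,barrier /export/data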
2016 Oct 21
3
NFS help
On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote:
> Larry Martell wrote:
>> We have one system running CentOS 7 that is the NFS server. There are 50
>> external machines that FTP files to this server fairly continuously.
>>
>> We have another system running CentOS 6 that mounts the partition the files
>> are FTP-ed to using NFS.
> <snip>
2017 Nov 13
0
how to add mount options for root filesystem inside lxc container
...='block' accessmode='passthrough'>
<source dev='/dev/data/lvm-part1'/>
<target dir='/'/>
</filesystem>
Now we have started using an SSD disk and are wondering how to provide
additional mount options for the root FS in the container: the "discard" and
"nobarrier" options for XFS.
All levels below XFS are configured to support the trim command ("options
raid0 devices_discard_performance=Y" and "issue_discards = 1").
I tried setting the options inside the container in /etc/fstab, but that did not work:
/dev/root / rootfs discard,nobarrier,defaul...
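Before adding "discard", it may be worth confirming that the whole stack under XFS actually advertises TRIM; one way to check from the host (the command is a suggestion, not from the thread):

lsblk --discard /dev/data/lvm-part1   # non-zero DISC-GRAN/DISC-MAX means discard is supported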
2016 Oct 24
0
NFS help
...t;>>> What filesystem?
<snip>
>> cat /etc/fstab on the systems, and see what they are. If either is xfs,
>> and assuming that the systems are on UPSes, then the fstab which controls
>> drive mounting on a system should have, instead of "defaults",
>> nobarrier,inode64.
>
> The server is xfs (the client is nfs). The server does have inode64
> specified, but not nobarrier.
>
>> Note that the inode64 is relevant if the filesystem is > 2TB.
>
> The file system is 51TB.
>
>> The reason I say this is that when we started rolli...
2016 Oct 27
2
NFS help
...> <snip>
>>>
>>> cat /etc/fstab on the systems, and see what they are. If either is xfs,
>>> and assuming that the systems are on UPSes, then the fstab which controls
>>> drive mounting on a system should have, instead of "defaults",
>>> nobarrier,inode64.
>>
>>
>> The server is xfs (the client is nfs). The server does have inode64
>> specified, but not nobarrier.
>>
>>> Note that the inode64 is relevant if the filesystem is > 2TB.
>>
>>
>> The file system is 51TB.
>>
>>...
2015 Apr 29
2
nfs (or tcp or scheduler) changes between centos 5 and 6?
...you using to
> mount the storage? We had *real* performance problems when we went from 5
> to 6 - as in, unzipping a 26M file to 107M, while writing to an
> NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final
> answer was that once we mounted the NFS filesystem with nobarrier in fstab
> instead of default, the time dropped to 35 or 40 sec again.
>
> barrier is on by default in 6, and tries to make writes atomic transactions; its intent
> is to protect in case of things like power failure. Esp. if you're on
> UPSes, nobarrier is the way to go.
The server in this c...
2009 May 05
4
BUG at fs/buffer.c:2933 during umount
Hi,
I could not find this anywhere else reported, so here we go:
creating a new btrfs filesystem (btrfs-progs-unstable from git) and
mounting it succeeds, unmounting however fails with the kernel messages
attached to this mail. After that, I can still read and write to the
btrfs mount, but e.g. /bin/sync never finishes, sysrq-s never reports
"Sync complete".
I'm using a
2016 Oct 24
0
NFS help
Gordon Messmer wrote:
> On 10/24/2016 04:51 AM, mark wrote:
>> Absolutely add nobarrier, and see what happens.
>
> Using "nobarrier" might increase overall write throughput, but it
> removes an important integrity feature, increasing the risk of
> filesystem corruption on power loss. I wouldn't recommend doing that
> unless your system is on a UPS, and yo...
2017 Jun 06
1
Files Missing on Client Side; Still available on bricks
...configured:
server.event-threads: 8
performance.client-io-threads: on
client.event-threads: 8
performance.cache-size: 32MB
performance.readdir-ahead: on
diagnostics.client-log-level: INFO
diagnostics.brick-log-level: INFO
Mount points for the bricks:
/dev/sdb on /data/brick2 type xfs (rw,noatime,nobarrier)
/dev/sda on /data/brick1 type xfs (rw,noatime,nobarrier)
Mount point on the client:
10.xx.xx.xx:/hpcscratch on /scratch type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
My question is what are some of the possibilities for the root cause of this issue and what is the r...
2009 Feb 24
12
How (not) to destroy a PostgreSQL db in domU on powerfail
Now I'm sure that the following configuration can destroy a PostgreSQL
8.3.5 database:
* Linux host (dom0) with XEN, XFS filesystem with "nobarrier", RAID
controller with battery-backed cache.
* XEN vm (domU) with XFS filesystem with "nobarrier" with postgresql
* my 3.5-year-old daughter switching off the power supply of the
server, just behind the UPS
Seems XEN does lie about fsync; otherwise it shouldn't have cra...
2015 Apr 29
5
nfs (or tcp or scheduler) changes between centos 5 and 6?
We have a "compute cluster" of about 100 machines that do a read-only
NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
these boxes are analysis/simulation jobs that constantly read data off
the NAS.
We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5.
We did a "piecemeal" upgrade, usually upgrading five or so machines at
a time, every few
2015 Apr 29
0
nfs (or tcp or scheduler) changes between centos 5 and 6?
...tion: what parms are you using to
mount the storage? We had *real* performance problems when we went from 5
to 6 - as in, unzipping a 26M file to 107M, while writing to an
NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final
answer was that once we mounted the NFS filesystem with nobarrier in fstab
instead of default, the time dropped to 35 or 40 sec again.
barrier is on by default in 6, and tries to make writes atomic transactions; its intent
is to protect in case of things like power failure. Esp. if you're on
UPSes, nobarrier is the way to go.
mark
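A rough way to reproduce the timing comparison described above, with the remount done on the NFS server (paths and the test file are hypothetical):

time unzip test-26M.zip -d /mnt/nfs/scratch   # with the default (barrier) mount
mount -o remount,nobarrier /export/home      # on the NFS server
time unzip test-26M.zip -d /mnt/nfs/scratch   # repeat the test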
2015 Apr 29
0
nfs (or tcp or scheduler) changes between centos 5 and 6?
...gt; to mount the storage? We had *real* performance problems when we went from
>> 5 to 6 - as in, unzipping a 26M file to 107M, while writing to an
>> NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final
>> answer was that once we mounted the NFS filesystem with nobarrier in
>> fstab instead of default, the time dropped to 35 or 40 sec again.
>>
>> barrier is on by default in 6, and tries to make writes atomic transactions; its
>> intent is to protect in case of things like power failure. Esp. if
>> you're on
>> UPSes, nobarrier is the way to go.
>...
2015 Oct 16
1
Debugging Kernel Problems
If you have hardware RAID on this machine, try mounting the xfs partitions with
nobarrier. We had similar freezes, and this helped us.
On Fri, Oct 16, 2015 at 9:04 PM, Akemi Yagi <amyagi at gmail.com> wrote:
> On Fri, Oct 16, 2015 at 7:33 AM, Tod <listacctc at gmail.com> wrote:
> > Not sure if this is the correct subject line but my recently installed
> >...
2016 Oct 21
0
NFS help
..., and now xfs. Tools to manipulate xfs will not work
with ext2/3/4, and vice versa.
cat /etc/fstab on the systems, and see what they are. If either is xfs,
and assuming that the systems are on UPSes, then the fstab which controls
drive mounting on a system should have, instead of "defaults",
nobarrier,inode64.
Note that the inode64 is relevant if the filesystem is > 2TB.
The reason I say this is that when we started rolling out CentOS 7, we tried
to put one of our users' home directories on one, and it was a disaster.
100% repeatedly, untarring a 100M tarfile onto an nfs-mounted drive took...
2010 May 19
3
mail location filesystem noatime, nodiratime?
Will Dovecot be negatively impacted if I change my XFS mount options to
noatime,nodiratime?
Thanks.
--
Stan
2012 Oct 09
2
Mount options for NFS
...es are visible in a terminal and can be accessed with
standard shell tools and vi, this software typically complains that the files
are empty or not syntactically correct.
The NFS filesystems in question are 8TB+ XFS filesystems mounted with
"delaylog,inode64,logbsize=32k,logdev=/dev/sda2,nobarrier,quota" options,
and I suspect that inode64 may have to do with the observed behaviour. The
server is running CentOS 6.3 + all patches.
The clients exhibiting the problem are running CentOS 5.4 and CentOS 5.8
x86_64. Interestingly enough, the application (which is available in 32-bit
only)...
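Since inode64 is the suspect here: on a filesystem that large it hands out inode numbers above 2^32, which legacy 32-bit binaries can choke on. A quick way to spot such inodes (the path is hypothetical):

stat -c '%i %n' /export/data/somefile   # values above 4294967295 are 64-bit inode numbers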
2013 Nov 04
1
extremely slow NFS performance under 6.x [SOLVED]
...5.x even recognizes this option. From
upstream docs,
<https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/writebarrieronoff.html>,
it's enabled by default, and affects *all* journalled filesystems.
Remounting the drive with -o nobarrier, I just NFS mounted an exported
directory... and it took 20 seconds.
Since most of our systems are on UPSes, we're not worried about sudden
power loss... and my manager did a jig, and we're starting to talk about
migrating the rest of our home directory servers....
mark