search for: nobarriers

Displaying 20 results from an estimated 61 matches for "nobarriers".

Did you mean: nobarrier
2010 Jan 21
1
/proc/mounts always shows "nobarrier" option for xfs, even when mounted with "barrier"
Ran into a confusing situation today. When I mount an xfs filesystem on a server running centos 5.4 x86_64 with kernel 2.6.18-164.9.1.el5, the barrier/nobarrier mount option as displayed in /proc/mounts is always set to "nobarrier". Here's an example: [root at host ~]# mount -o nobarrier /dev/vg1/homexfs /mnt [root at host ~]# grep xfs /proc/mounts /dev/vg1/homexfs /mnt xfs
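A quick way to see what the kernel actually recorded for a mount is to pull the options field out of /proc/mounts. A minimal sketch, assuming the device and mount point from the post above; the sample line stands in for reading /proc/mounts on a live system:

```shell
# Parse the options column (field 4) of a /proc/mounts-style line.
# The sample line is illustrative; on a real host you would read
# /proc/mounts directly.
line='/dev/vg1/homexfs /mnt xfs rw,nobarrier 0 0'
echo "$line" | awk '$2 == "/mnt" { print $4 }'
# -> rw,nobarrier
```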
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>
2016 Oct 24
2
NFS help
On 10/24/2016 04:51 AM, mark wrote: > Absolutely add nobarrier, and see what happens. Using "nobarrier" might increase overall write throughput, but it removes an important integrity feature, increasing the risk of filesystem corruption on power loss. I wouldn't recommend doing that unless your system is on a UPS, and you've tested and verified that it will perform an
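Barriers are the default on these XFS kernels, so they only need to be spelled out when documenting intent. A sketch of an fstab entry that keeps them explicit; the device and mount point are assumptions, not from the thread:

```
# /etc/fstab sketch; /dev/vg1/data and /data are illustrative.
# Only consider nobarrier when the controller has a battery-backed
# write cache that survives power loss.
/dev/vg1/data  /data  xfs  defaults,barrier  0 0
```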
2016 Oct 21
3
NFS help
On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> We have 1 system running CentOS 7 that is the NFS server. There are 50 >> external machines that FTP files to this server fairly continuously. >> >> We have another system running Centos6 that mounts the partition the files >> are FTP-ed to using NFS. > <snip>
2017 Nov 13
0
how to add mount options for root filesystem inside lxc container
Hi all, We use libvirt 3.0.0 + lxc; the disk for a container is described with a <filesystem/> tag, for example: <filesystem type='block' accessmode='passthrough'> <source dev='/dev/data/lvm-part1'/> <target dir='/'/> </filesystem> We have now started using SSD disks and are wondering how to pass additional mount options for the container's root FS:
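The thread shows no reply with libvirt-side syntax for this. One hedged workaround, an untested assumption rather than a confirmed libvirt feature, is to remount from inside the container at startup, e.g. from its rc.local; the option names here are examples only:

```
# Startup-script sketch (assumes the container is privileged enough
# to remount its own root); noatime/discard are illustrative options.
mount -o remount,noatime,discard /
```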
2016 Oct 24
0
NFS help
On 10/24/16 03:52, Larry Martell wrote: > On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: >> Larry Martell wrote: >>> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>>> Larry Martell wrote: >>>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>>> external machines that
2016 Oct 27
2
NFS help
On Mon, Oct 24, 2016 at 7:51 AM, mark <m.roth at 5-cent.us> wrote: > On 10/24/16 03:52, Larry Martell wrote: >> >> On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: >>> >>> Larry Martell wrote: >>>> >>>> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>>>> >>>>>
2015 Apr 29
2
nfs (or tcp or scheduler) changes between centos 5 and 6?
m.roth at 5-cent.us wrote: > Matt Garman wrote: > >>We have a "compute cluster" of about 100 machines that do a read-only >>NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on >>these boxes are analysis/simulation jobs that constantly read data off >>the NAS. > > <snip> > *IF* I understand you, I've got one question:
2009 May 05
4
BUG at fs/buffer.c:2933 during umount
Hi, I could not find this anywhere else reported, so here we go: creating a new btrfs filesystem (btrfs-progs-unstable from git) and mounting it succeeds, unmounting however fails with the kernel messages attached to this mail. After that, I can still read and write to the btrfs mount, but e.g. /bin/sync never finishes, sysrq-s never reports "Sync complete". I'm using a
2016 Oct 24
0
NFS help
Gordon Messmer wrote: > On 10/24/2016 04:51 AM, mark wrote: >> Absolutely add nobarrier, and see what happens. > > Using "nobarrier" might increase overall write throughput, but it > removes an important integrity feature, increasing the risk of > filesystem corruption on power loss. I wouldn't recommend doing that > unless your system is on a UPS, and
2017 Jun 06
1
Files Missing on Client Side; Still available on bricks
Hello, I am still working on recovering from a few failed OS hard drives on my gluster storage and have been removing and re-adding bricks quite a bit. I noticed yesterday night that some of the directories are not visible when I access them through the client, but are still on the brick. For example: Client: # ls /scratch/dw Ethiopian_imputation HGDP Rolwaling Tibetan_Alignment Brick: #
2009 Feb 24
12
How (not) to destroy a PostgreSQL db in domU on powerfail
Now I'm sure that the following configuration can destroy a PostgreSQL 8.3.5 database: * Linux host (dom0) with XEN, XFS filesystem with "nobarrier", RAID controller with battery backed cache. * XEN vm (domU) with XFS filesystem with "nobarrier" with postgresql * my 3.5-year-old daughter switching off the power supply of the server, just behind the UPS Seems XEN
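A small check along these lines can flag the risky combination before a database goes live. A sketch that scans /proc/mounts-style input for XFS filesystems mounted nobarrier; the device and path in the sample line are illustrative, not the poster's setup:

```shell
# Warn about any XFS mount whose options include nobarrier.
# The sample input stands in for /proc/mounts.
mounts='/dev/mapper/vg0-pgdata /var/lib/pgsql xfs rw,nobarrier 0 0'
echo "$mounts" | awk '$3 == "xfs" && $4 ~ /nobarrier/ { print "WARNING: " $2 " is mounted nobarrier" }'
# -> WARNING: /var/lib/pgsql is mounted nobarrier
```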
2015 Apr 29
5
nfs (or tcp or scheduler) changes between centos 5 and 6?
We have a "compute cluster" of about 100 machines that do a read-only NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on these boxes are analysis/simulation jobs that constantly read data off the NAS. We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5. We did a "piecemeal" upgrade, usually upgrading five or so machines at a time, every few
2015 Apr 29
0
nfs (or tcp or scheduler) changes between centos 5 and 6?
Matt Garman wrote: > We have a "compute cluster" of about 100 machines that do a read-only > NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on > these boxes are analysis/simulation jobs that constantly read data off > the NAS. > > We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5. > We did a "piecemeal" upgrade,
2015 Apr 29
0
nfs (or tcp or scheduler) changes between centos 5 and 6?
James Pearson wrote: > m.roth at 5-cent.us wrote: >> Matt Garman wrote: >> >>>We have a "compute cluster" of about 100 machines that do a read-only >>>NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on >>>these boxes are analysis/simulation jobs that constantly read data off >>>the NAS. >> >> <snip>
2015 Oct 16
1
Debugging Kernel Problems
If you have hardware raid on this machine, try to mount xfs partitions with nobarrier. We had similar freezes and this worked for us. On Fri, Oct 16, 2015 at 9:04 PM, Akemi Yagi <amyagi at gmail.com> wrote: > On Fri, Oct 16, 2015 at 7:33 AM, Tod <listacctc at gmail.com> wrote: > > Not sure if this is the correct subject line but my recently installed > > Centos build
2016 Oct 21
0
NFS help
Larry Martell wrote: > On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >> Larry Martell wrote: >>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>> external machines that FTP files to this server fairly continuously. >>> >>> We have another system running Centos6 that mounts the partition the >>>
2010 May 19
3
mail location filesystem noatime, nodiratime?
Will Dovecot be negatively impacted if I change my XFS mount options to noatime,nodiratime? Thanks. -- Stan
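For reference, a sketch of the fstab change being asked about; the device and mount point are assumptions. Note that noatime suppresses atime updates for directories too, so adding nodiratime alongside it is redundant (but harmless):

```
# /etc/fstab sketch; device and mount point are illustrative.
/dev/vg1/mail  /var/vmail  xfs  defaults,noatime  0 0
```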
2012 Oct 09
2
Mount options for NFS
We're experiencing problems with some legacy software when it comes to NFS access. Even though files are visible in a terminal and can be accessed with standard shell tools and vi, this software typically complains that the files are empty or not syntactically correct. The NFS filesystems in question are 8TB+ XFS filesystems mounted with
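The snippet is cut off before the actual mount options, but attribute caching is a common culprit when legacy NFS clients see stale or zero file sizes. A hedged sketch; the server name, export path, and chosen options are assumptions, not the thread's real settings:

```
# /etc/fstab sketch; names are illustrative. actimeo=0 disables
# attribute caching so the client re-fetches sizes on each access,
# at the cost of extra GETATTR traffic.
nas:/export/data  /data  nfs  vers=3,actimeo=0  0 0
```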
2013 Nov 04
1
extremely slow NFS performance under 6.x [SOLVED]
I've posted here about this a number of times. The other admin I work with had been playing with it recently, with some real problems we'd been having, and this time, with a year or so's more stuff to google, and newer documentation, found the problem. What we'd been seeing: cd to an NFS-mounted directory, and from an NFS-mounted directory, tar -xzvf a 25M or so tar.gz, which