James Pearson
2015-Apr-29 15:39 UTC
[CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?
m.roth at 5-cent.us wrote:
> Matt Garman wrote:
>
>> We have a "compute cluster" of about 100 machines that do a read-only
>> NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
>> these boxes are analysis/simulation jobs that constantly read data off
>> the NAS.
>
> <snip>
> *IF* I understand you, I've got one question: what parms are you using to
> mount the storage? We had *real* performance problems when we went from 5
> to 6 - as in, unzipping a 26M file to 107M, while writing to an
> NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final
> answer was that once we mounted the NFS filesystem with nobarrier in fstab
> instead of default, the time dropped to 35 or 40 sec again.
>
> barrier is in 6, and tries to make writes atomic transactions; its intent
> is to protect in case of things like power failure. Esp. if you're on
> UPSes, nobarrier is the way to go.

The server in this case isn't a Linux box with an ext4 file system - so
that won't help ...

James Pearson
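For anyone digging this thread out of the archives later: the nobarrier
change mark describes is a one-word fstab tweak, roughly like the entry
below. The device name, mount point, and dump/pass fields are invented
for illustration; they are not from his setup.

    # hypothetical /etc/fstab entry - ext4 with write barriers disabled
    /dev/sdb1   /data   ext4   defaults,nobarrier   0 2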
m.roth at 5-cent.us
2015-Apr-29 15:51 UTC
[CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?
James Pearson wrote:
> m.roth at 5-cent.us wrote:
>> Matt Garman wrote:
>>
>>> We have a "compute cluster" of about 100 machines that do a read-only
>>> NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on
>>> these boxes are analysis/simulation jobs that constantly read data off
>>> the NAS.
>>
>> <snip>
>> *IF* I understand you, I've got one question: what parms are you using
>> to mount the storage? We had *real* performance problems when we went from
>> 5 to 6 - as in, unzipping a 26M file to 107M, while writing to an
>> NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final
>> answer was that once we mounted the NFS filesystem with nobarrier in
>> fstab instead of default, the time dropped to 35 or 40 sec again.
>>
>> barrier is in 6, and tries to make writes atomic transactions; its
>> intent is to protect in case of things like power failure. Esp. if you're on
>> UPSes, nobarrier is the way to go.
>
> The server in this case isn't a Linux box with an ext4 file system - so
> that won't help ...

What kind of filesystem is it? I note that xfs also has barrier as a
mount option.

mark
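Had the server been a Linux box exporting xfs, the equivalent tweak
would have been the same kind of entry (again, device and mount point
below are made up, not taken from anyone's configuration):

    # hypothetical /etc/fstab entry - xfs with write barriers disabled
    /dev/sdc1   /export   xfs   defaults,nobarrier   0 0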
Matt Garman
2015-Apr-29 16:37 UTC
[CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?
On Wed, Apr 29, 2015 at 10:51 AM, <m.roth at 5-cent.us> wrote:
>> The server in this case isn't a Linux box with an ext4 file system - so
>> that won't help ...
>>
> What kind of filesystem is it? I note that xfs also has barrier as a
> mount option.

The server is a NetApp FAS6280. It's using NetApp's filesystem. I'm
almost certain it's none of the common Linux ones. (I think they call
it WAFL IIRC.)

Either way, we do the NFS mount read-only, so write barriers don't even
come into play. E.g., with your original example, if we unzipped
something, we'd have to write to the local disk.

Furthermore, in "low load" situations, the NetApp read latency stays
low, and the 5/6 performance is fairly similar. It's only when the
workload gets high, and in turn this "aggressive" demand is placed on
the NetApp, that we see overall decreased performance.

Thanks for the thoughts!
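For completeness, a read-only NFS mount of the kind Matt describes
would look something like the entry below, and the two commands after
it show the options the client actually negotiated. The server name
and export path are placeholders, not his real filer:

    # hypothetical /etc/fstab entry - read-only NFS mount of the filer
    filer:/vol/data   /mnt/data   nfs   ro,hard,intr   0 0

    # confirm the mount options in effect on a client
    nfsstat -m
    grep nfs /proc/mounts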