similar to: reboot - is there a timeout on filesystem flush?

Displaying 20 results from an estimated 4000 matches similar to: "reboot - is there a timeout on filesystem flush?"

2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote: > I've had a few systems with a lot of RAM and very busy filesystems > come up with filesystem errors that took a manual 'fsck -y' after what > should have been a clean reboot. This is particularly annoying on > remote systems where I have to talk someone else through the recovery. > > Is there some time
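As an aside on the question itself: on a machine with a lot of RAM, the volume of dirty page cache that has to be written out at shutdown is bounded by the vm.dirty_* sysctls, so capping them is one way to shrink that final flush. A minimal sketch, with purely illustrative values:

    # Inspect the current writeback tunables
    sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs

    # Cap how much dirty data may accumulate (illustrative values, not recommendations)
    sysctl -w vm.dirty_background_ratio=2
    sysctl -w vm.dirty_ratio=5

    # Push writeback manually before issuing the reboot
    sync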
2012 Apr 17
1
Help needed with NFS issue
I have four NFS servers running on Dell hardware (PE2900) under CentOS 5.7, x86_64. The number of NFS clients is about 170. A few days ago, one of the four, with no apparent changes, stopped responding to NFS requests for two minutes every half an hour (approx). Let's call this "the hang". It has been doing this for four days now. There are no log messages of any kind pertaining
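A sketch of a first diagnostic pass for a periodic NFS hang like this, assuming a stock kernel nfsd; it checks whether every nfsd thread is busy during the stall, which is one common cause of a server that stops answering for minutes at a time:

    # Last line is "th <threads> <all-busy-count> <histogram>"; a rapidly growing
    # second field roughly means requests are waiting for a free nfsd thread
    tail -1 /proc/net/rpc/nfsd

    # Server-side RPC/NFS counters, taken before and during the hang
    nfsstat -s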
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
On Jan 6, 2015, at 4:28 PM, Fran Garcia <franchu.garcia at gmail.com> wrote: > > On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote: >> I've had a few systems with a lot of RAM and very busy filesystems >> come up with filesystem errors that took a manual 'fsck -y' after what >> should have been a clean reboot. This is particularly annoying on
2013 Dec 30
2
oom situation
I have a continuous oom & panic situation unresolved. I am not sure the system fills up all the RAM (36GB). Why did this system trigger this oom situation? Is it about some other memory? highmem? lowmem? stack size? Best Regards, Kernel 3.10.24 Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0 Dec 27 09:19:05 2013 kernel: :
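For context (an observation about the log excerpt, not a diagnosis): order=3 means the allocation needed 2^3 = 8 physically contiguous pages (32 KB with 4 KB pages), so this kind of OOM is often about fragmentation rather than total free RAM. A quick way to look at that:

    # Columns are counts of free blocks of order 0, 1, 2, ... per zone;
    # near-zero counts at order 3 and above point at fragmentation
    cat /proc/buddyinfo

    # Ask the kernel to compact memory, if compaction is compiled in (2.6.35+)
    echo 1 > /proc/sys/vm/compact_memory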
2008 Dec 04
1
page cache keeps growing untill system runs out of memory on a MIPS platform
Hi, I have samba-3.0.28a cross-compiled and running on a MIPS platform. The development system has about 150MB of free RAM after system bootup and no swap space. The system also has a USB interface, to which an external USB hard disk is connected. When I try to transfer huge files (above 100MB) from a client onto the USB hard disk, I find that the page cache eats up almost 100MB and
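A sketch of one common mitigation for this pattern, assuming the goal is simply to stop a large sequential write from filling RAM with dirty pages; the byte-based dirty limits exist since kernel 2.6.29, and the dd line is only an illustration of bypassing the page cache for a one-off copy:

    # Cap dirty page cache in absolute bytes rather than as a percentage of RAM
    sysctl -w vm.dirty_background_bytes=$((8*1024*1024))
    sysctl -w vm.dirty_bytes=$((16*1024*1024))

    # Or bypass the page cache entirely for a single large copy
    dd if=bigfile of=/mnt/usbdisk/bigfile bs=1M oflag=direct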
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu or (probably) a combination of both. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM LVM cache writeback
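For anyone chasing a similar setup, a sketch of how to watch the cache from stock LVM tooling; the VG/LV names are placeholders and the reporting fields assume a reasonably recent lvm2 (check lvs -o help for what your version actually exposes):

    # How full and how dirty the cache pool currently is
    lvs -a -o name,cache_policy,cache_total_blocks,cache_used_blocks,cache_dirty_blocks vg0

    # Writethrough removes the dirty-block exposure while narrowing the problem down
    lvchange --cachemode writethrough vg0/cached_lv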
2013 Jul 31
11
Is the checkpoint interval adjustable?
I believe 30 sec is the default for the checkpoint interval.  Is this adjustable? -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
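For reference, and only on kernels newer than some of those discussed here: btrfs later gained a commit= mount option (kernel 3.12+), so the 30-second transaction interval can be changed at mount time. The device and mount point below are placeholders:

    # Set the btrfs transaction commit interval to 120 seconds at mount time
    mount -o commit=120 /dev/sdb1 /mnt/data

    # Or change it on an already-mounted filesystem
    mount -o remount,commit=120 /mnt/data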
2015 Aug 19
2
Optimum Block Size to use
On 19.08.2015 at 10:24, John Hodrien <J.H.Hodrien at leeds.ac.uk> wrote: > On Wed, 19 Aug 2015, Jatin Davey wrote: > >> Hi All >> >> We use CentOS 6.6 for our application. I have profiled the application and find that we have a heavy requirement in terms of disk writes. On average, when our application operates at a certain load I can observe that the disk writes
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list, I thought I'd just share my experiences with this 3Ware card, and see if anyone might have any suggestions. System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID 1 plus 2 hot spare config. The array is properly initialized, write cache is on, as is queueing (and supported by the drives). StoreSave
2015 Aug 19
4
Optimum Block Size to use
Hi All We use CentOS 6.6 for our application. I have profiled the application and find that we have a heavy requirement in terms of disk writes. On average, when our application operates at a certain load I can observe that the disk writes / second is around 2 Mbps (average). The block size set is 4k ******************* [root at localhost ~]# blockdev --getbsz /dev/sda3 4096
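When interpreting numbers like the 4096 above, it helps to keep three different sizes apart: the kernel's block size for the device node, the filesystem's block size, and the I/O sizes the workload actually issues. A sketch, using the device name from the post:

    # Kernel block size for the device node (what blockdev --getbsz reports)
    blockdev --getbsz /dev/sda3

    # Filesystem block size for ext2/3/4
    tune2fs -l /dev/sda3 | grep 'Block size'

    # Request sizes and throughput under load (avgrq-sz is in 512-byte sectors)
    iostat -xd sda 5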
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
> On Jan 6, 2015, at 5:50 PM, Les Mikesell <lesmikesell at gmail.com> wrote: > > On Tue, Jan 6, 2015 at 6:37 PM, Gary Greene <ggreene at minervanetworks.com> wrote: >> >> >> Almost every controller and drive out there now lies about what is and isn't flushed to disk, making it nigh on impossible for the Kernel to reliably know 100% of the time that the
2011 Sep 28
3
[PATCH] Btrfs: fix missing clear_extent_bit
We forget to clear inode's dirty_bytes and EXTENT_DIRTY at the end of write. Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com> --- fs/btrfs/file.c | 1 - fs/btrfs/inode.c | 5 ++++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c index e7872e4..3f3b4a8 100644 --- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -1150,7 +1150,6 @@
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
On 01/06/2015 04:37 PM, Gary Greene wrote: > This has been discussed to death on various lists, including the > LKML... > > Almost every controller and drive out there now lies about what is > and isn't flushed to disk, making it nigh on impossible for the > Kernel to reliably know 100% of the time that the data HAS been > flushed to disk. This is part of the reason why it is
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
On Tue, Jan 6, 2015 at 6:37 PM, Gary Greene <ggreene at minervanetworks.com> wrote: > > > Almost every controller and drive out there now lies about what is and isn't flushed to disk, making it nigh on impossible for the Kernel to reliably know 100% of the time that the data HAS been flushed to disk. This is part of the reason why it is always a Good Idea™ to have some sort of pause
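A related check that sometimes helps here, with the caveat that whether the drive honours these commands is exactly the problem being described: hdparm can report and toggle the on-drive volatile write cache and request an explicit cache flush.

    # Show whether the drive's volatile write cache is enabled
    hdparm -W /dev/sda

    # Disable it (slower, but one less layer that can lie about flushes)
    hdparm -W0 /dev/sda

    # Ask the drive to flush its cache now
    hdparm -F /dev/sda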
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>
2010 Jan 21
1
/proc/mounts always shows "nobarrier" option for xfs, even when mounted with "barrier"
Ran into a confusing situation today. When I mount an xfs filesystem on a server running centos 5.4 x86_64 with kernel 2.6.18-164.9.1.el5, the barrier/nobarrier mount option as displayed in /proc/mounts is always set to "nobarrier" Here's an example: [root at host ~]# mount -o nobarrier /dev/vg1/homexfs /mnt [root at host ~]# grep xfs /proc/mounts /dev/vg1/homexfs /mnt xfs
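One way to cross-check what the filesystem is actually doing, independent of what /proc/mounts prints on these older kernels: XFS logs a message when it has to disable barriers (for example because device-mapper on this kernel cannot pass them through), so the kernel log is the more trustworthy source. A sketch using the same device as above:

    # Mount with barriers explicitly requested
    mount -o barrier /dev/vg1/homexfs /mnt

    # If the underlying device cannot honour them, XFS reports it here
    dmesg | grep -i barrier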
2015 Jan 07
2
reboot - is there a timeout on filesystem flush?
> On Jan 6, 2015, at 9:23 PM, Gordon Messmer <gordon.messmer at gmail.com> wrote: > > On 01/06/2015 04:37 PM, Gary Greene wrote: >> This has been discussed to death on various lists, including the >> LKML... >> >> Almost every controller and drive out there now lies about what is >> and isn't flushed to disk, making it nigh on impossible for the
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
On 1/7/2015 11:30 AM, Gary Greene wrote: > During the reboot, most cards' drivers, on init, will invalidate the cache on the card to ensure dirty pages of data don't get flushed to disk, to prevent scribbling junk data to the platters. From what I recall, this is true of both the megaraid and adaptec based cards. Presumably, this cache invalidation is only on cards that don't have battery
2016 Oct 24
2
NFS help
On 10/24/2016 04:51 AM, mark wrote: > Absolutely add nobarrier, and see what happens. Using "nobarrier" might increase overall write throughput, but it removes an important integrity feature, increasing the risk of filesystem corruption on power loss. I wouldn't recommend doing that unless your system is on a UPS, and you've tested and verified that it will perform an
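If someone does want to measure the nobarrier difference, a reversible way to do it (the export path is a placeholder) is a temporary remount rather than an fstab change, so barriers come back on their own at the next boot:

    # Temporarily trade integrity for throughput on an xfs or ext4 export
    mount -o remount,nobarrier /export

    # Put barriers back without rebooting
    mount -o remount,barrier /export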
2011 Jun 09
4
Possible to use multiple disk to bypass I/O wait?
I'm trying to resolve an I/O problem on a CentOS 5.6 server. The process basically scans through Maildirs, checking for space usage and quota. Because there are a hundred-odd user folders and several tens of thousands of small files, this sends the I/O wait % way high. The server hits a very high load level and stops responding to other requests until the crawl is done. I am wondering if I add
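Before adding disks, one low-effort mitigation is to run the crawl in the idle I/O class so that interactive requests win; this assumes the CFQ scheduler that CentOS 5 uses by default, and the du command is only a stand-in for the real quota scan:

    # Run the space/quota crawl at idle I/O priority and low CPU priority
    ionice -c3 nice -n 19 du -sh /home/*/Maildir

    # ionice scheduling classes only take effect with cfq
    cat /sys/block/sda/queue/scheduler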