similar to: fsck hangs in Pass 0a

Displaying 18 results from an estimated 100 matches similar to: "fsck hangs in Pass 0a"

2006 Aug 15
0
[git patches] ocfs2 updates
This set of patches includes a few dlm-related fixes from Kurt and a small, trivial cleanup by Adrian. Also included are three disk allocation patches by me: two fixes and one incremental improvement in our allocation strategy. These have been around since early June, so I think they've had enough testing to go upstream. Please pull from the 'upstream-linus' branch of ...
2010 Apr 05
1
Kernel Panic, Server not coming back up
I have a relatively new test environment that is a little different from your typical scenario. This is my first time using OCFS2, but I believe it should work the way I have it set up. All of this runs on VMware virtual hosts. I have two front-end web servers and one back-end administrative server. They all share 2 virtual hard drives within VMware (independent, persistent, & ...
2009 Feb 27
2
[PATCH 1/1] OCFS2: anti stale inode for nfs (V5)
Changes from v4: 1. let the suballoc lock cover the checking of the group; 2. add/correct some log messages; 3. use ocfs2_read_group_descriptor() instead of dirty-reading the group. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com> -- dlmglue.c | 45 ++++++++++++++++ dlmglue.h | 2 export.c | 77 +++++++++++++++++++++++++-- inode.c | 24 ++++++++ ...
2009 Mar 06
2
[PATCH 1/1] OCFS2: anti stale inode for nfs (for 1.4git)
Backported from mainline. For NFS exporting, ocfs2_get_dentry() returns the dentry for an fh. ocfs2_get_dentry() may read from disk (when the inode is not in memory) without any cross-cluster lock, which can load a stale inode. This patch fixes that problem. The solution: when the inode is not in memory, we take the cluster lock (PR) of the alloc inode from which the inode in question was allocated ...
2009 Feb 17
1
[PATCH 1/1] OCFS2: anti stale inode for nfs (V3)
For NFS exporting, ocfs2_get_dentry() returns the dentry for an fh. ocfs2_get_dentry() may read from disk (when the inode is not in memory) without any cross-cluster lock, which can load a stale inode. This patch fixes that problem. The solution: when the inode is not in memory, we take the cluster lock (PR) of the alloc inode that the inode in question is allocated from (this causes the node on which ...
2009 Jul 30
11
[PATCH 0/9] Quota support for ocfs2-tools (version 2)
Hi, this is the next version of quota support for ocfs2-tools. I've addressed all the comments from Tao, Joel, and others. Disabling the sparse feature also correctly updates quota information now, and that patch is merged into the tunefs support patch. Honza
2009 Feb 20
3
[PATCH 1/1] OCFS2: anti stale inode for nfs (V4)
Changes from v3: 1. move the code that checks the inode allocation bit into the subfunction ocfs2_test_inode_bit(); 2. release the suballoc lock just after we get it; we should release it ASAP, and doing so doesn't affect functionality; 3. add inode alloc slot validation. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com> -- dlmglue.c | 45 +++++++++++++++++ dlmglue.h | 2 ...
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present; this prevents starting arrays in a degraded state. A second mdadm call (after LVM is scanned) scans the as-yet-unused devices and attempts to run all arrays it finds, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark ...
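A minimal sketch of the two-phase assembly the post describes, under the assumption that it boils down to two mdadm invocations; the LVM commands in between are illustrative, not taken from the patch:

    # Phase 1: assemble only complete arrays. --no-degraded refuses to
    # start any array that is missing expected member devices.
    mdadm --assemble --scan --no-degraded

    # LVM activation may expose more member devices (illustrative step).
    vgscan && vgchange -ay

    # Phase 2: retry with the still-unused devices and start whatever
    # can run, even in degraded mode (--run forces the start).
    mdadm --assemble --scan --run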
2009 Jul 27
11
[PATCH 0/8] Quota support for ocfs2-tools
Hi, I'm sending a series of patches implementing quota support into ocfs2-tools. It's the same as the original huge patch I've sent but now it's split as Joel asked. I've also realized that when disabling SPARSE feature, we should update quota information. That piece of code is missing, I'll implement it soon. Comments welcome. Honza
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all, I'm setting up CentOS 4.2 on 2x80GB SATA drives. The partition scheme is like this: /boot = 300MB, / = 9.2GB, /home = 70GB, swap = 500MB. The RAID is RAID 1: md0 = 300MB = /boot, md1 = 9.2GB = LVM, md2 = 70GB = LVM, md3 = 500MB = LVM. Now, the confusing part is: 1. When creating VolGroup00, should I include all PVs (md1, md2, md3) and then create the LVs? 2. When setting up RAID 1, should I ...
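For question 1, a minimal sketch of the usual answer (yes, all three md devices can serve as PVs in one volume group); the names and sizes below mirror the post but are otherwise illustrative:

    # Label each RAID1 device as an LVM physical volume.
    pvcreate /dev/md1 /dev/md2 /dev/md3

    # One volume group spanning all three PVs ...
    vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3

    # ... then carve the logical volumes out of it.
    lvcreate -L 9G   -n root VolGroup00
    lvcreate -L 70G  -n home VolGroup00
    lvcreate -L 500M -n swap VolGroup00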
2008 Feb 25
0
The I/O bandwidth controller: dm-ioband Performance Report
Hi All, I'm reporting new results of the dm-ioband bandwidth-control tests. The previous results were posted on Jan 25. I've got really good results again, as in the last report. dm-ioband works well with Xen virtual disks. I also announce that the dm-ioband website has launched. The patches, the manual, the benchmark results, and other related information are available through this site. Please check it ...
2005 Jan 23
0
e2fsck loops forever with re-allocation
I hope someone is able to tell me whether it is possible to rescue any data from my LVM disk. After adding a second disk, resizing, etc., I made the superblock sparse. Then problems surfaced. I tried both e2fsck v1.34 and v1.35, but they give the same result: Group descriptors look bad... trying backup blocks... Inode table for group 1519 is not in group. (block 7503623) WARNING: SEVERE DATA LOSS ...
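A hedged sketch of the usual first recovery step in this situation: pointing e2fsck at an alternate superblock. The device path is illustrative, and 32768 is only the typical backup location for 4K-block filesystems; mke2fs -n prints the real ones without writing anything:

    # Dry run: print what mke2fs would do, including the backup
    # superblock locations. Nothing is written to the device.
    mke2fs -n /dev/VolGroup00/LogVol00

    # Retry the check against one of the listed backup superblocks.
    e2fsck -b 32768 /dev/VolGroup00/LogVol00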
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and includes various fixes for the other two patches. Rich.
2009 Aug 03
9
[PATCH 0/9] Quota support for ocfs2-tools (version 3)
Hi, below is a new version of the patch series implementing quota support for ocfs2-tools. I've fixed the calls to ocfs2_malloc_blocks(), which were being given a number of bytes instead of a number of blocks. Besides that, the series should be the same. Honza
2007 Nov 16
8
[PATCH 0/6] Add online resize for ocfs2-tools,take 1
Add online resize to tunefs.ocfs2 so that the user can grow the volume while it is mounted.
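A hypothetical usage sketch; the -S/--volume-size flag is an assumption about the interface this series adds, so consult tunefs.ocfs2(8) for the real syntax:

    # Assumed flag: grow the mounted ocfs2 volume to fill the
    # (already enlarged) underlying device.
    tunefs.ocfs2 -S /dev/sdb1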
2012 Jun 12
9
[PATCH v2 0/9]
More comprehensive support for virtio-scsi. Passes all the tests. Rich.
2016 Jun 17
2
[Bug 96562] New: nouveau crashes with SCHED_ERROR 0a [CTXSW_TIMEOUT]
https://bugs.freedesktop.org/show_bug.cgi?id=96562. Bug ID: 96562; Summary: nouveau crashes with SCHED_ERROR 0a [CTXSW_TIMEOUT]; Product: xorg; Version: unspecified; Hardware: x86-64 (AMD64); OS: Linux (All); Status: NEW; Severity: major; Priority: medium; Component: Driver/nouveau
2023 Dec 08
1
fifo: SCHED_ERROR 0a [CTXSW_TIMEOUT]
I've begun to find a way that helps me investigate fifo: SCHED_ERROR 0a [CTXSW_TIMEOUT] errors. See https://gitlab.freedesktop.org/xorg/driver/xf86-video-nouveau/-/issues/339 I believe this mostly affects Fermi, Kepler, and Maxwell1 graphics cards. I'd like first to describe a bit how I proceed, then talk about splitting this issue into several. I am working on GNOME on Debian Testing. This environment ...