Displaying 20 results from an estimated 500 matches similar to: "puzzling md error ?"
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
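One way to watch the count directly while the initial build runs; a minimal
sketch assuming the md11 name from the mdstat output above (both files are
standard md sysfs attributes):
# cat /sys/block/md11/md/mismatch_cnt     # running mismatch count
# cat /sys/block/md11/md/sync_completed   # progress of the current build/sync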
2010 Jan 21
1
/proc/mounts always shows "nobarrier" option for xfs, even when mounted with "barrier"
Ran into a confusing situation today. When I mount an xfs filesystem on
a server running centos 5.4 x86_64 with kernel 2.6.18-164.9.1.el5, the
barrier/nobarrier mount option as displayed in /proc/mounts is always
set to "nobarrier"
Here's an example:
[root@host ~]# mount -o nobarrier /dev/vg1/homexfs /mnt
[root@host ~]# grep xfs /proc/mounts
/dev/vg1/homexfs /mnt xfs
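One way to double-check barrier behaviour on this kernel, a sketch assuming
the same device and mount point: XFS logs a message when it has to disable
barriers, so mount explicitly with barrier and watch dmesg:
# mount -o barrier /dev/vg1/homexfs /mnt
# dmesg | grep -i barrier   # XFS logs here if it disables barriers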
2008 Aug 17
2
mirroring with LVM?
I'm pulling my hair out trying to set up a mirrored logical volume.
lvconvert tells me I don't have enough free space, even though I have
hundreds of gigabytes free on both physical volumes.
Command: lvconvert -m1 /dev/vg1/iscsi_deeds_data
Insufficient suitable allocatable extents for logical volume: 10240 more required
Any ideas?
Thanks!
Gordon
Here's the output from the
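One common cause, sketched under the assumption that the VG holds only the
two mirror-leg PVs: lvconvert -m1 also needs a small extent for the mirror
log, preferably on a third PV. Checking free extents per PV and retrying
with an in-memory log is one way to narrow it down:
# pvs -o pv_name,vg_name,pv_free          # free space per physical volume
# lvconvert -m1 --mirrorlog core /dev/vg1/iscsi_deeds_data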
2012 Sep 06
2
C6 VM text install not recognizing LV
Hi,
I am trying to install a C6 VM on C6 with the text installer, using:
# virt-install -n C6_1 -r 3072 --os-variant=rhel6 -l \
ftp://ftp.nluug.nl/site/centos.org/CentOS/6.3/os/x86_64/ --disk \
path=/dev/VG1/vm_c6_1 -w network:default --nographics \
-x "console=ttyS0" --autostart
/dev/VG1/vm_c6_1 has been successfully created. The installation starts
but once I get to the disk
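A few sanity checks before re-running the installer, a sketch assuming the
VG1 names from the command above; zeroing the start of the LV is a common
step so anaconda offers to initialize what it sees as a blank disk:
# lvs -o lv_name,lv_size VG1              # confirm the LV exists and its size
# ls -l /dev/VG1/vm_c6_1                  # confirm the device node is present
# dd if=/dev/zero of=/dev/VG1/vm_c6_1 bs=1M count=10   # clear stale metadata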
2002 Feb 28
5
Problems with ext3 fs
Hi,
Apologies, this is going to be quite long - I'm going to provide as much
info as possible.
I'm running a system with ext3 fs on software RAID. The RAID set-up is as
shown below:
jlm@nijinsky:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
96256 blocks [2/2] [UU]
md5 : active raid1 hdk1[1] hde1[0]
2010 Jul 19
3
Accessing console for Xen 4.0 with 2.6.31 pvops kernel on Dell Poweredge R610
I am currently using a Dell PowerEdge R610 server with Xen 4.0 installed and
the 2.6.31.13 pvops kernel.
I am accessing the server console using the iDRAC KVM feature of the Dell
management console.
Does anyone know how to configure the console option in the grub menu so
that all the boot messages can be seen on the
mgmt console?
Currently I can view only the Xen bootup messages if I don't specify
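A grub.conf sketch for splitting the two console stages; the image names,
root device, and serial settings are assumptions and must match the iDRAC
SOL configuration (the hypervisor takes com1=/console=, the dom0 kernel
takes console=hvc0):
title Xen 4.0 / 2.6.31.13 pvops
    root (hd0,0)
    kernel /xen.gz com1=115200,8n1 console=com1,vga
    module /vmlinuz-2.6.31.13 root=/dev/vg0/root ro console=hvc0 earlyprintk=xen
    module /initrd-2.6.31.13.img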
2007 Apr 27
9
can't mount vfat fs on lvm created by winxp guest
Greetings,
I've had no success with mounting a vfat file system created by a
Windows XP guest on a lvm volume.
# mount -t vfat /dev/vg1/win1 /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/vg1/win1,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
# dmesg
FAT: invalid media value (0xb9)
VFS:
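One likely explanation: Windows XP partitioned the virtual disk, so the FAT
filesystem starts at an offset inside the LV rather than at block 0. Mapping
the partitions with kpartx is one way to reach it (a sketch; the resulting
mapper name may differ):
# kpartx -av /dev/vg1/win1                # creates /dev/mapper/vg1-win1p1 etc.
# mount -t vfat /dev/mapper/vg1-win1p1 /mnt/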
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present.
This will prevent starting arrays in a degraded state.
The second mdadm call (after LVM is scanned) will scan the devices not yet used and attempt to run all arrays it finds, even if they are in a degraded state.
Two new tests are added.
This fixes rhbz1527852.
Here is boot-benchmark
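A minimal sketch of the two-phase assembly described above (not the patch
itself; the exact options in the appliance init script may differ):
# phase 1: assemble only arrays with all expected members present
mdadm -As --no-degraded
# ... LVM scan runs here, possibly exposing more member devices ...
# phase 2: assemble whatever remains and run it even if degraded
mdadm -As --run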
2013 Mar 21
2
[LLVMdev] How to describe a pointer that points to All memory(include global memory, heap, stack)?
Hi, Daniel, thank you for your advice.
Yes, ALL_MEMORY points to ALL_MEMORY.
We use an MD (memory descriptor) to abstract a memory location.
An MD contains 4 main fields: id, base, offset, size.
For these special MDs (ALL_MEMORY, GLOBAL_MEMORY, STACK_MEMORY, HEAP_MEMORY),
we give them ids 1, 2, 3, 4; that means MD1 is ALL_MEMORY, MD2 is GLOBAL_MEMORY, and the same goes for the rest.
Then we maintain a
2002 Dec 29
1
ext3 external journal and fstab
Hi again!
I would like to add an ext3 partition (e.g. /dev/md11) with its external
journal (on /dev/md21) to /etc/fstab in order to have it mounted at
system startup.
Do I have to specify the external journal device in fstab, or does the
partition find its journal device by itself? If the former, how should I
specify it?
I created the journal with mke2fs /dev/md22 -O journal_dev and the ext3
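For reference, the journal device is recorded in the filesystem's superblock,
so fstab normally lists only the data device. A sketch using the md11/md21
names from above (the /data mount point is an assumption):
# mke2fs -O journal_dev /dev/md21          # create the external journal
# mke2fs -j -J device=/dev/md21 /dev/md11  # create the fs attached to it
and then a normal fstab line:
/dev/md11  /data  ext3  defaults  1 2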
2013 Mar 05
1
ubuntu, libvirt and virtio block devices
Hi. I'm running Ubuntu Precise 12.04 LTS. I created some virtual machines using vmbuilder, and then
migrated those from their .qcow files to lvm. However, those virtual
machines are still using disk type "file":
Code:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source
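For comparison, a disk element switched to the LV and virtio typically looks
like this (a sketch, edited via virsh edit; the LV path and target name are
assumptions):
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/vg0/vm1'/>
  <target dev='vda' bus='virtio'/>
</disk>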
2013 Mar 22
0
[LLVMdev] How to describe a pointer that points to All memory(include global memory, heap, stack)?
On Thu, Mar 21, 2013 at 12:29 AM, Steven Su <steven_known@yahoo.com.cn> wrote:
> Hi, Daniel, thank you for your advice.
> Yes, ALL_MEMORY points to ALL_MEMORY.
>
> We use an MD (memory descriptor) to abstract a memory location.
> An MD contains 4 main fields: id, base, offset, size.
> For these special MDs (ALL_MEMORY, GLOBAL_MEMORY, STACK_MEMORY, HEAP_MEMORY),
> we give them
2017 Sep 20
4
xfs not getting it right?
Hi,
xfs is supposed to detect the layout of an md-RAID device when creating the
file system, but it doesn't seem to do that:
# cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sde[1] sdd[0]
499976512 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
# mkfs.xfs /dev/md10p2
meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 blks
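Worth noting: a two-disk RAID1 has no stripe geometry for mkfs.xfs to pick
up (a mirror has no chunked layout to align to), so default values here are
expected. For striped levels the geometry can be passed explicitly; a sketch
assuming a 512k chunk and 4 data disks:
# mkfs.xfs -d su=512k,sw=4 /dev/md0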
2009 Nov 30
3
/etc/cron.weekly/99-raid-check
hi,
it's been a few weeks since RHEL/CentOS 5.4 was released, and there has been
much discussion about this new "feature", the weekly RAID partition check.
We've got lots of servers with RAID1 systems, and I have already tried to
configure them not to send these messages, but I'm not able to; i.e., I
already added all of my swap partitions to SKIP_DEVS (since I read about it
on the linux-kernel list
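A sketch of /etc/sysconfig/raid-check, the file the weekly cron job reads on
5.4; the device names here are assumptions, and the spelling has to match
what the script expects:
ENABLED=yes
CHECK=check
SKIP_DEVS="md2 md3"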
2011 Jan 18
4
Error bringing up VMs with Xen 4.0.1 + 2.6.18-194.32.1.el5xen CentOS Dom0
I am running Xen 4.0.1 with a 2.6.18-194.32.1.el5xen CentOS Dom0. I built
and installed the Xen binaries from source. The systems boot successfully;
however, I am having issues bringing up DomUs. Here is the config:
name = 'xyz'
memory = 256
vcpus = 1
pae = 1
vnc = 1
vncunused = 1
disk = [ 'phy:/dev/vg1/xyz,hda,w' ]
vif = [
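A few debugging steps, assuming the xm toolstack and the default log
locations of a from-source Xen 4.0 build:
# xm create -c /etc/xen/xyz               # attach to the console at creation
# tail -f /var/log/xen/xend.log           # xend errors land here
# tail -f /var/log/xen/qemu-dm-xyz.log    # device-model errors, if any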
2002 Dec 04
0
[Fwd: [RESEND] 2.4.20: ext3: Assertion failure in journal_forget()/Oops on another system]
Just to make sure somebody reacts (please), I'm forwarding this. Please
cc me on replies as I'm not subscribed to this list.
-------- Original Message --------
Subject: [RESEND] 2.4.20: ext3: Assertion failure in
journal_forget()/Oops on another system
Date: Wed, 04 Dec 2002 21:27:31 +0100
From: Andreas Steinmetz <ast@domdv.de>
To: Linux Kernel Mailing List
2008 Sep 21
3
question about software Raid 1
Does software raid 1 compare checksums or otherwise verify that the same
bits are coming from both disks during reads? What I'm interested in is
whether bit errors that were somehow undetected by the hardware would
be detected by the raid 1 software.
Thanks,
Nataraj
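For what it's worth: md RAID1 normally serves each read from a single mirror
leg and does not compare the legs on the fly; a full comparison only happens
when a check is requested. A sketch, assuming md0:
# echo check > /sys/block/md0/md/sync_action   # read and compare both legs
# cat /sys/block/md0/md/mismatch_cnt           # nonzero means legs differed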
2014 May 31
4
[long] major problems on fs; e2fsck running out of memory
Hello ext3 list,
I am having an odd issue with one of my filesystems, and I am hoping
someone here can help out. Yes, I do have backups. :) But as is often
the case, it's nice to avoid restoring from backup if possible. If
there is a more appropriate place for this question please let me know.
After quite a while between reboots, I saw a report on the console that
the filesystem was
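One documented knob for exactly this situation is pointing e2fsck at on-disk
scratch files via /etc/e2fsck.conf; the directory is an assumption and must
live on a healthy filesystem with free space:
[scratch_files]
	directory = /var/cache/e2fsck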
2009 Oct 25
3
mismatch_cnt after 5.3 -> 5.4 upgrade
Saturday I did an upgrade from 5.3 (original install) to 5.4. Saturday
night, /etc/cron.weekly reported the following:
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md0
md0 holds /boot and resides, mirrored, on sda1 and sdb1. md1 holds
an LVM volume containing the remaining filesystems, including swap.
The underlying hardware is just a few months old,
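One way to clear and re-verify the count, assuming the md0 name from the
warning above; let each pass finish (watch /proc/mdstat) before the next:
# echo repair > /sys/block/md0/md/sync_action   # rewrite mismatched blocks
# echo check > /sys/block/md0/md/sync_action    # re-count; should now be 0
# cat /sys/block/md0/md/mismatch_cnt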