Displaying 20 results from an estimated 581 matches for "md0".
2019 Jan 29
2
C7, mdadm issues
...> devices and the current status from /proc/mdstat?
>
Well, nope. I got to the point of rebooting the system (xfs had the RAID
volume, and wouldn't let go); I also commented out the RAID volume.
It's RAID 5, /dev/sdb *also* appears to have died. If I do
mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdf1 is identified as a member of /d...
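[A hedged sketch, not taken from the thread, of the usual read-only checks before forcing an assembly like the one above; the device names follow the poster's /dev/sd[c-h]1 members and whether to include the suspect disk is situation-dependent:]
# Read-only: print each member's superblock so event counts and slots can be compared
mdadm --examine /dev/sd[cdefgh]1 | grep -E 'Events|Device Role|Array State|^/dev'
# If the event counts are close, a forced, degraded assembly may still start the array
mdadm --assemble --force --run /dev/md0 /dev/sd[cdefgh]1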
2019 Jan 29
2
C7, mdadm issues
...;>
>> Well, nope. I got to the point of rebooting the system (xfs had the
>> RAID
>> volume, and wouldn't let go; I also commented out the RAID volume.
>>
>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm: looking for
>> devices for /dev/md0 mdadm: /dev/sdc1 is identified as a member of
>> /dev/md0, slot 0.
>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>> mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 2.
>> mda...
2019 Jan 30
4
C7, mdadm issues
...point of rebooting the system (xfs had the
>>>> RAID
>>>> volume, and wouldn't let go; I also commented out the RAID volume.
>>>>
>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm: looking for
>>>> devices for /dev/md0 mdadm: /dev/sdc1 is identified as a member of
>>>> /dev/md0, slot 0.
>>>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>>>> mdadm: /dev/sde1 is identified as a member o...
2019 Jan 30
2
C7, mdadm issues
...he RAID
>>>>>> volume, and wouldn't let go; I also commented out the RAID
>>>>>> volume.
>>>>>>
>>>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm:
>>>>>> looking for devices for /dev/md0 mdadm: /dev/sdc1 is identified
>>>>>> as a member of /dev/md0, slot 0.
>>>>>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>>>>>> mdadm: /dev...
2011 Aug 17
1
RAID5 suddenly broken
...n LVM volume group for all my important data,
among them the root of the operating system(s).
It is based on four partitions on four separate disks (the third
partition of each disk: 3 active, one spare).
When booting, I get an error message similar to:
raid5 failed: No md superblock detected on /dev/md0.
and the LVM volume group does not come up.
I then booted using the CentOS 5.6 LiveCD and tried to run a few mdadm
command (see just below).
It seems that there are some data still lying around, but I'm not very
experienced with RAID and I thought that I would ask for advice before
trying com...
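[Before running anything that writes, the standard read-only survey looks like the sketch below; the partition names are hypothetical, based on the poster's description of "the third partition of each disk":]
cat /proc/mdstat                      # does the kernel currently see any md arrays?
mdadm --examine /dev/sd[abcd]3        # hypothetical names; prints any md superblocks found
mdadm --assemble --scan --verbose     # tries to assemble arrays from whatever superblocks exist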
2019 Jan 30
1
C7, mdadm issues
...volume, and wouldn't let go; I also commented
>>>>>>>> out the RAID volume.
>>>>>>>>
>>>>>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>>>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1
>>>>>>>> mdadm:
>>>>>>>> looking for devices for /dev/md0 mdadm: /dev/sdc1 is
>>>>>>>> identified as a member of /dev/md0, slot 0. mdadm: /dev/sdd1
>>>>>>>> is identified as a member of...
2019 Jan 29
0
C7, mdadm issues
...tatus from /proc/mdstat?
>>
> Well, nope. I got to the point of rebooting the system (xfs had the RAID
> volume, and wouldn't let go; I also commented out the RAID volume.
>
> It's RAID 5, /dev/sdb *also* appears to have died. If I do
> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1
> mdadm: looking for devices for /dev/md0
> mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 0.
> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
> mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 2.
> mdadm: /dev/sdf1 is ide...
2020 Feb 03
3
Hard disk activity will not die down
I updated my backup server this weekend from CentOS 7 to CentOS 8.
The OS disk is an SSD; /dev/md0 consists of two 4TB WD mechanical drives.
No hardware was changed.
1. wiped all drives
2. installed new copy of 8 on system SSD
3. re-created the 4TB mirror /dev/md0 with the same WD mechanical drives
4. created the largest single partition possible on /dev/md0 and formatted
it ext4
5. waited several hour...
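[Sustained disk activity right after re-creating a mirror is almost always the initial resync; a sketch of how to confirm it and, if needed, throttle it, using the standard md speed_limit sysctls:]
cat /proc/mdstat                                   # shows "resync = NN.N%" with speed and ETA
mdadm --detail /dev/md0 | grep -iE 'state|resync|rebuild'
# Optional: cap the per-device resync rate (KB/s) so the box stays responsive
echo 20000 > /proc/sys/dev/raid/speed_limit_max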
2019 Jan 30
0
C7, mdadm issues
...nope. I got to the point of rebooting the system (xfs had the
>>> RAID
>>> volume, and wouldn't let go; I also commented out the RAID volume.
>>>
>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm: looking for
>>> devices for /dev/md0 mdadm: /dev/sdc1 is identified as a member of
>>> /dev/md0, slot 0.
>>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>>> mdadm: /dev/sde1 is identified as a member of /dev/md0, slot...
2019 Jan 30
3
C7, mdadm issues
...he
>>>>>> RAID
>>>>>> volume, and wouldn't let go; I also commented out the RAID volume.
>>>>>>
>>>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm: looking
>>>>>> for
>>>>>> devices for /dev/md0 mdadm: /dev/sdc1 is identified as a member of
>>>>>> /dev/md0, slot 0.
>>>>>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>>>...
2019 Jan 30
0
C7, mdadm issues
...he system (xfs had the
>>>>> RAID
>>>>> volume, and wouldn't let go; I also commented out the RAID volume.
>>>>>
>>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm:
>>>>> looking for
>>>>> devices for /dev/md0 mdadm: /dev/sdc1 is identified as a member of
>>>>> /dev/md0, slot 0.
>>>>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>>>>> mdadm:...
2019 Jan 30
0
C7, mdadm issues
...>>>> volume, and wouldn't let go; I also commented out the RAID
>>>>>>> volume.
>>>>>>>
>>>>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm:
>>>>>>> looking for devices for /dev/md0 mdadm: /dev/sdc1 is identified
>>>>>>> as a member of /dev/md0, slot 0.
>>>>>>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>>>>>>&...
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a
*second* bad drive. Actually, I'm starting to wonder about that
particular hot-swap bay.
Anyway, mdadm --detail shows /dev/sdb1 as removed. I've added /dev/sdi1... but
see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable
way to make either one active.
Actually, I would have expected the linux
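[A hedged way, not from the thread itself, to chase the "spare never becomes active" symptom: confirm what the array thinks its geometry is, clear the dead slot if it is still listed, and watch whether a recovery starts:]
mdadm --detail /dev/md0               # compare "Raid Devices" with "Active/Working Devices"
cat /proc/mdstat                      # a rebuilding spare shows a "recovery = ..." line
# Only if /dev/sdb1 is still listed in the array and genuinely dead:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1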
2019 Jan 30
0
C7, mdadm issues
...he system (xfs had the
>>>>> RAID
>>>>> volume, and wouldn't let go; I also commented out the RAID volume.
>>>>>
>>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm: looking
>>>>> for
>>>>> devices for /dev/md0 mdadm: /dev/sdc1 is identified as a member of
>>>>> /dev/md0, slot 0.
>>>>> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot -1.
>>>>> mdadm:...
2006 Jan 19
3
ext3 fs errors 3T fs
...relevant,
if you know otherwise, please point me in the right direction.
I have a ~3T ext3 filesystem on linux software raid that had been behaving
correctly for some time. Not too long ago it gave the following error after
trying to mount it:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
or too many mounted file systems
after a long fsck which I had to do manually I noticed the following in
/var/log/messages after trying to mount again:
Jan 19 09:13:11 terrorbytes kernel: EXT3-fs error (device md0):
ext3_check_descriptors: Block bitmap for group 3584 not in group (block
0...
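[For "wrong fs type / bad superblock" on an ext3 volume, one standard, hedged recovery step is to retry fsck against a backup superblock, once the underlying md array is known to be healthy; 32768 is the usual backup location for a 4 KB-block filesystem:]
mkfs.ext3 -n /dev/md0        # -n is a dry run: it only prints where the backup superblocks live
e2fsck -b 32768 /dev/md0     # repeat the fsck using a backup superblock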
2019 Jan 31
0
C7, mdadm issues
...>>> RAID
>>>>>>> volume, and wouldn't let go; I also commented out the RAID volume.
>>>>>>>
>>>>>>> It's RAID 5, /dev/sdb *also* appears to have died. If I do
>>>>>>> mdadm --assemble --force -v /dev/md0 /dev/sd[cefgdh]1 mdadm:
>>>>>>> looking
>>>>>>> for
>>>>>>> devices for /dev/md0 mdadm: /dev/sdc1 is identified as a member of
>>>>>>> /dev/md0, slot 0.
>>>>>>> mdadm: /dev/sdd1 is identified a...
2006 Jun 29
1
problem with raid assembly
...'s fine and dandy,
except that kinit reports a mess of errors about the devices, as they
are already loaded. It appears these errors are harmless, but I am
not as well versed in low-level raid stuff as I should be, which is
why I am reporting this here.
The following is reported by kinit when md0 is already active (this
was typed by hand so minor textual errors may be present):
md: will configure md0 (super-block) from /dev/sda1,/dev/sdb1, below
md: kinit (pid 1 ) used obsolete MD ioctl, upgrade your software to
use new ictls
md: loading md0: /dev/sda1
md: couldn't update array info. -2...
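[One common way to sidestep the old in-kernel autodetect path that kinit is using is to let mdadm assemble the array from an explicit /etc/mdadm.conf entry instead; a minimal sketch, with placeholder values (the real UUID comes from mdadm --detail /dev/md0). Whether the initrd then needs regenerating depends on the distribution.]
# /etc/mdadm.conf -- placeholder device names and UUID
DEVICE /dev/sda1 /dev/sdb1
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000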
2014 Dec 03
7
DegradedArray message
...ge 257:
From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27 -0400 (EDT)
Status: RO
This is an automatically generated mail message from mdadm
running on desk4
A DegradedArray event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalit...
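[The usual follow-up to a DegradedArray mail, sketched here with a hypothetical /dev/sdb1 as the dropped member:]
cat /proc/mdstat              # the failed member is marked (F) or is simply missing
mdadm --detail /dev/md0       # shows which slot is "removed" or "faulty"
# After replacing or re-seating the disk (device name hypothetical):
mdadm /dev/md0 --add /dev/sdb1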
2010 Nov 14
3
RAID Resynch...??
I'm still coming up to speed with mdadm, and this morning I noticed one of my
servers acting sluggish... so when I looked at the mdadm raid device I saw
this:
mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Mon Sep 27 22:47:44 2010
Raid Level : raid10
Array Size : 976759808 (931.51 GiB 1000.20 GB)
Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is pe...
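[Sluggishness while an array is resyncing or checking is expected; a quick sketch for confirming and watching it, and the /proc/sys/dev/raid/speed_limit_* tunables can throttle it if the load is a problem:]
watch -n 5 cat /proc/mdstat        # live progress, current speed and estimated finish time
mdadm --detail /dev/md0 | grep -iE 'state|resync|rebuild'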
2007 Aug 02
1
kernel: EXT3-fs: Unsupported filesystem blocksize 8192 on md0.
Hi,
I made an ext3 filesystem with 8kB block size:
# mkfs.ext3 -T largefile -v -b 8192 /dev/md0
Warning: blocksize 8192 not usable on most systems.
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=8192 (log=3)
Fragment size=8192 (log=3)
148480 inodes, 18940704 blocks
947035 blocks (5.00%) reserved for the super user
First data block=0
290 block groups
65528 blocks per gro...
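[The error in the subject line is expected: ext2/3 cannot mount a filesystem whose block size exceeds the CPU page size, which is 4 KB on x86/x86_64, so the practical fix is simply to re-create the filesystem with 4 KB blocks. A sketch:]
getconf PAGE_SIZE                            # 4096 on x86/x86_64; the largest mountable block size
mkfs.ext3 -T largefile -v -b 4096 /dev/md0   # same options as before, but a usable block size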