Displaying 20 results from an estimated 1200 matches similar to: "mdadm: failed to write superblock to"
2003 Jun 09
1
unable to read superblock ?
Hi, I have a machine set up as software RAID + ext3 + Red Hat,
but this morning it seems the file system crashed. Here is
the boot message; does anyone have any idea how to fix it?
Thanks!
Donghui
----------------------------------------------------------------------------------
md: adding sda1
md: created md2
md: using <sdb1> <sda1>
md: md2: raid array is not clean -- starting
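A natural first step for an array that refuses to start clean (a sketch; device names taken from the log above) is to inspect each member's superblock and, if both look intact, attempt a forced assembly:

mdadm --examine /dev/sda1   # dump the md superblock of each member
mdadm --examine /dev/sdb1
mdadm --assemble --force /dev/md2 /dev/sda1 /dev/sdb1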
2007 Sep 25
2
mdadm problem.
So I'm trying to RAID-1 this system, which has two identical disks
installed, and it isn't working for some reason.
I started by doing a CentOS-4 install on /dev/sda1 as root, and with
/dev/sda2 as my swap.
I finish the install, yum update, and then I want to make the mirrors.
I copy the partition table from one disk to the other:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
I create
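The usual continuation from here (a sketch, not necessarily the poster's exact steps; device names as above) is to create degraded RAID1 arrays on the new disk with the keyword missing, migrate the data, then add the original disk:

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# ...copy the root filesystem onto /dev/md0, update fstab and the bootloader...
mdadm --add /dev/md0 /dev/sda1   # the old disk joins the mirror and resyncs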
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
> Hi.
>
> CentOS 7.6.1810, fresh install - use this as a base to create/upgrade
> new/old machines.
>
> I was trying to set up two disks as a RAID1 array, using these lines
>
> mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1
> /dev/sdc1
> mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2
> /dev/sdc2
> mdadm
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
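For the record, when a RAID5 array loses two members to cabling rather than media failure, the usual (risky) recovery path is a forced assembly from the members with the freshest event counts; a sketch with illustrative device names:

mdadm --examine /dev/sd[abc]1   # compare event counts and roles first
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1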
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
In article <20190225050144.GA5984 at button.barrett.com.au>,
Jobst Schmalenbach <jobst at barrett.com.au> wrote:
> Hi.
>
> CentOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines.
>
> I was trying to set up two disks as a RAID1 array, using these lines
>
> mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
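Note that the quoted commands pass --level=0, which builds striped RAID0 arrays, not the RAID1 mirrors the post describes; for a mirror, the level flag would need to be 1, along the lines of:

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1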
2022 Apr 24
3
Installing mdadm and C7 on new computer
On 04/23/2022 09:19 PM, H wrote:
> On 04/19/2022 09:57 AM, Roberto Ragusa wrote:
>> On 4/18/22 1:27 PM, H wrote:
>>> I have a new computer with 2 x 2TB SSDs where I wanted to install C7, use mdadm for a RAID1 configuration, and encrypt the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/ which I adapted slightly with
2008 Mar 23
4
md raid1 - no speed improvement
Hi,
I have two 320 GB SATA disks (/dev/sda, /dev/sdb) in a server running
CentOS release 5.
They both have three partitions set up as RAID1 using md (boot, swap,
and an LVM data partition).
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
4192896 blocks [2/2] [UU]
md2 : active raid1 sdb3[1]
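Worth noting: md RAID1 does not stripe a single sequential read across both members, so one dd or hdparm run shows single-disk throughput; the benefit appears with concurrent readers. A quick comparison (illustrative):

hdparm -t /dev/sda   # raw speed of one member
hdparm -t /dev/md2   # roughly the same for a single sequential reader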
2016 Oct 19
0
renaming mdadm name
Hi
I have a disk where two of the partitions are part of a RAID1 setup. I'm
trying to rename the second raided partition
mdadm -E /dev/sdc4
/dev/sdc4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 83d7657b:ebfddcb7:36b0fa14:d29a350c
Name : oldname:2
Creation Time : Tue Aug 30 15:25:10 2016
Raid Level : raid1
Raid Devices : 2
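With version-1 metadata the stored name can be rewritten at assembly time; a sketch, assuming the array is stopped first (the second member's device name is illustrative):

mdadm --stop /dev/md2
mdadm --assemble /dev/md2 --name=newname --update=name /dev/sdc4 /dev/sdd4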
2019 Feb 26
0
Problem with mdadm, raid1 and automatically adds any disk to raid
On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:
> I tried to delete the mdX arrays: I removed the disks by failing them, then removed each array md0, md1 and md2.
> I also did
>
> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
Clearing the initial sectors doesn't do anything to clear the data in
the partitions. They don't become blank
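The more reliable way to keep old members from re-assembling is to erase the md superblock on each former member partition itself, plus any remaining signatures; a sketch, keeping the poster's sdX placeholder:

mdadm --zero-superblock /dev/sdX1   # repeat for each former member partition
wipefs -a /dev/sdX1                 # clear leftover filesystem/raid signatures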
2011 Mar 08
0
Race condition with mdadm at bootup?
Hello folks,
I am experiencing a weird problem at bootup with large RAID-6 arrays.
After Googling around (a lot) I find that others are having the same
issues with CentOS/RHEL/Ubuntu/whatever. In my case it's Scientific
Linux-6 which should behave the same way as CentOS-6. I had the same
problem with the RHEL-6 evaluation version. I'm posting this question
to the SL mailing list
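One commonly suggested mitigation for boot-time assembly races on EL6-family systems (a sketch, not a guaranteed fix) is to pin the arrays in /etc/mdadm.conf and rebuild the initramfs so assembly no longer depends on device-probe timing:

mdadm --detail --scan >> /etc/mdadm.conf   # record ARRAY lines with explicit UUIDs
dracut -f                                  # rebuild the initramfs with the updated config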
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
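When mdadm seems to "cache" stale information, it is usually the superblocks on the member devices themselves (mdadm.conf is only a hint list) or an old copy of mdadm.conf inside the initramfs; comparing the two views often explains the surprise:

mdadm --examine --scan   # what the on-disk superblocks claim
mdadm --detail --scan    # what the currently running arrays report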
2019 Feb 26
2
Problem with mdadm, raid1 and automatically adds any disk to raid
> On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:
>> I tried to delete the mdX arrays: I removed the disks by failing them, then
>> removed each array md0, md1 and md2.
>> I also did
>>
>> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz
>> /dev/sdX)-1024)) count=1024
>
>
> Clearing the initial sectors doesn't do anything to clear the
2022 Apr 18
1
Installing mdadm and C7 on new computer
I have a new computer with 2 x 2TB SSDs where I wanted to install C7, use mdadm for a RAID1 configuration, and encrypt the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/ which I adapted slightly with respect to partition sizes, using RAID1 for /boot and /root as well, and added the /home partition with RAID1 and chose to have /home
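For the encrypted-/home part, the usual layering is LUKS on top of the md device with the filesystem inside it; a minimal sketch, assuming /dev/md2 is the /home mirror (names illustrative):

cryptsetup luksFormat /dev/md2        # initialize LUKS on the mirror
cryptsetup open /dev/md2 home_crypt   # map it as /dev/mapper/home_crypt
mkfs.xfs /dev/mapper/home_crypt       # filesystem goes inside the mapping
mount /dev/mapper/home_crypt /home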
2013 Oct 09
1
mdraid strange surprises...
Hey,
I installed 2 new data servers with a big (12TB) RAID6 mdraid.
I formatted the whole arrays with bad-block checks.
One server is moderately used (NFS on one md), while the other is not.
One week later, after the raid-check from cron, I get on both servers
a block_mismatch count... 1976162368 on the used one and a tiny bit less
on the other... That seems a tiny little bit high...
I do the
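For reference, raid-check reads the per-array mismatch counter from sysfs, and a repair pass can be requested the same way; a sketch, with the md name illustrative:

cat /sys/block/md0/md/mismatch_cnt            # sectors found inconsistent by the last check
echo repair > /sys/block/md0/md/sync_action   # rewrite inconsistent stripes (run as root)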
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stuck
Hello
I have a CentOS 6.6 server with 13 disks in a RAID 6. Some weeks ago, I upgraded it to 17 disks, two of them configured as spares. The reshape proceeded normally at first, but at 69% it stopped.
md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21]
19533803520
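When a reshape stalls at a fixed percentage like this, two things worth checking (a sketch; md2 as in the post) are the sysfs throttle that can pin it in place and the global speed limits:

cat /sys/block/md2/md/sync_max          # a finite value here halts the reshape at that sector
echo max > /sys/block/md2/md/sync_max   # allow it to run to completion
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max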
2010 Nov 18
1
kickstart raid disk partitioning
Hello.
A couple of years ago I installed two file servers
using kickstart. Each server has two 1TB SATA disks
with two software RAID1 partitions as follows:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[1] sda4[0]
933448704 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda2[2](F)
40957568 blocks [2/1] [_U]
Now the drives are starting to fail, and next week
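For reference, the kickstart stanzas for a two-disk RAID1 layout like this look roughly as follows (sizes and mount points illustrative):

part raid.01 --ondisk=sda --size=40000
part raid.02 --ondisk=sdb --size=40000
raid / --device=md0 --level=RAID1 raid.01 raid.02
part raid.11 --ondisk=sda --size=1 --grow
part raid.12 --ondisk=sdb --size=1 --grow
raid /data --device=md1 --level=RAID1 raid.11 raid.12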
2019 Feb 26
2
Problem with mdadm, raid1 and automatically adds any disk to raid
> On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS
> <centos at centos.org>
> wrote:
>
>> >
>> > What makes you think this has *anything* to do with systemd? Bitching
>> > about systemd every time you hit a problem isn't helpful. Don't.
>>
>> If it's not systemd, who else does it? Can you elaborate, please?
>>
>
2005 Nov 08
1
EXT3-fs error (device md2): ext3_journal_start_sb: Detected aborted journal...
Hi,
I'm running a production server (Debian Sarge install) whose root
filesystem (a software RAID 1 array of two IDE drive partitions)
exhibited the following problem:
Oct 28 06:00:06 server2 kernel: attempt to access beyond end of device
Oct 28 06:00:06 server2 kernel: md2: rw=1, want=3050401328, limit=16353920
[...] a few of the above lines snipped; want is different each time
Oct 28
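The want=3050401328 versus limit=16353920 pair means something requested a sector far past the end of the device; comparing the filesystem's recorded size against the block device's is a quick sanity check (a sketch):

blockdev --getsz /dev/md2                     # device size in 512-byte sectors
dumpe2fs -h /dev/md2 | grep -i 'block count'  # the ext3 filesystem's own size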
2011 May 04
2
Cannot resize btrfs volume
Hello,
I added a new disk to our RAID5 array; it looks like this:
md2 : active raid5 sdd4[3] sde4[4] sda4[0] sdc4[2] sdb4[1]
3767274240 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
# btrfs fi sh
Label: none uuid: 5534d2e7-be31-49c7-8ab7-90c5ab8afe18
Total devices 1 FS bytes used 2.24TB
devid 3 size 2.63TB used 2.63TB path /dev/md2
# mount
...
/dev/md2 on /home type btrfs
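Since btrfs is resized against the mounted filesystem rather than the underlying device, the usual step after growing the md array is (a sketch, using the mount point from the post):

btrfs filesystem resize max /home   # grow the fs to fill the enlarged /dev/md2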