Displaying 20 results from an estimated 3000 matches similar to: "problem with raid assembly"
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new with UEFI and GPT.
For several years I've used an MBR partition table. I've installed my
system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos
concerning RAID1 installation, I must put each partition on a different
md device. I asked some time ago whether it is more correct to create the
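For reference, a minimal sketch of the MBR-era layout described above; device names, filesystem choices and mkfs options are assumptions, not taken from the original post:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /home
mkswap /dev/md0
mkfs.ext4 /dev/md1
mkfs.ext4 /dev/md2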
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install - used as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2
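One common way to keep auto-assembly from pulling in other disks is to pin the arrays in /etc/mdadm.conf; a sketch assuming the arrays above already exist (the DEVICE pattern is an assumption):
echo "DEVICE /dev/sdb* /dev/sdc*" > /etc/mdadm.conf   # restrict which devices are scanned (overwrites any existing config)
mdadm --detail --scan >> /etc/mdadm.conf              # record ARRAY lines with their UUIDs
dracut -f                                             # rebuild the initramfs so boot-time assembly matches (CentOS 7)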
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I
noticed one drive is giving errors. Good thing I had RAID. I planned
on upgrading this server in the next month or so. Just wondering if there
was an easy way to fix this to avoid rushing the upgrade? Having a
single drive is slowing down reads as well, I think.
Thanks.
Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
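For reference, the usual single-drive replacement sequence on an md RAID1 looks roughly like this; /dev/md0, /dev/sda and /dev/sdb1 are assumptions based on the smartd message above:
mdadm /dev/md0 --fail /dev/sdb1      # mark the failing member faulty
mdadm /dev/md0 --remove /dev/sdb1    # pull it out of the array
# swap the physical disk, then copy the partition table from the healthy drive (MBR assumed)
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1       # re-add; the resync starts automatically
cat /proc/mdstat                     # watch rebuild progress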
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 15:03, mark wrote:
>
>> I've no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1...
>> but see both /dev/sdh1 and
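When --detail and the visible members disagree, comparing the array's view with each device's own superblock usually shows which one is stale; /dev/md0 is an assumption, the message does not name the array:
mdadm --detail /dev/md0        # array view: active, spare, faulty and removed members
mdadm --examine /dev/sdh1      # superblock as recorded on the device itself
mdadm --examine /dev/sdi1      # compare event counts and array UUIDs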
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
2007 Nov 29
1
RAID, LVM, extra disks...
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10k SCSI drives to it; they are detected as sdc and
sdf.
What should I do if I
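A sketch of one way to fold the new pair into the existing volume group, assuming sdc1/sdf1 are partitioned like the other members and /dev/md3 is free (the names and sizes are illustrative):
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
pvcreate /dev/md3                             # turn the new mirror into an LVM physical volume
vgextend VolGroup00 /dev/md3                  # add it to the existing volume group
lvextend -L +30G /dev/VolGroup00/LogVol00     # example only: grow a logical volume (LogVol00 assumed)
resize2fs /dev/VolGroup00/LogVol00            # grow the filesystem to match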
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
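For what it's worth, mismatch_cnt is only refreshed by a check or repair pass; a quick way to re-read it once the initial build has finished (md11 taken from the mdstat output above):
cat /sys/block/md11/md/sync_action            # should read "idle" once the build is done
echo check > /sys/block/md11/md/sync_action   # start a read-only consistency check
cat /sys/block/md11/md/mismatch_cnt           # re-read the counter after the check completes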
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2020 Feb 03
3
Hard disk activity will not die down
I updated my backup server this weekend from CentOS 7 to CentOS 8.
The OS disk is an SSD; /dev/md0 consists of two 4TB WD mechanical drives.
No hardware was changed.
1. wiped all drives
2. installed new copy of 8 on system SSD
3. re-created the 4TB mirror /dev/md0 with the same WD mechanical drives
4. created the largest single partition possible on /dev/md0 and formatted
it ext4
5. waited several hours for the
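The usual first suspect after re-creating a mirror is the initial resync; a few checks worth running, assuming the array is /dev/md0 as described above (the speed value is just an example):
cat /proc/mdstat                                  # shows "resync = NN%" and an ETA while it runs
cat /proc/sys/dev/raid/speed_limit_min            # current per-disk floor in KB/s
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # optionally raise it to finish the resync sooner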
2006 Oct 28
2
hard drive failing in linux raid
Hello all. I have a server with a Linux software RAID1 setup between
two drives of the same model... one hard drive as primary IDE master
and the second hard drive as secondary master. Now the primary master hard
drive is displaying a lot of SMART errors, so I would like to remove it
and replace it with another drive... different brand but same size.
Partitions are /dev/md0 through /dev/md5. I think I know what
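Since each of md0..md5 has a member on the dying disk, the replacement is the same fail/remove/add cycle repeated per array; a sketch assuming the failing drive is /dev/hda (the post does not name the device nodes):
mdadm --detail /dev/md0                              # identify which /dev/hdaN belongs to each array
mdadm /dev/md0 --fail /dev/hda1 --remove /dev/hda1
# ...repeat for md1 through md5, power down, swap the disk,
# recreate the same partition table on the new drive, then re-add each member:
mdadm /dev/md0 --add /dev/hda1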
2011 Aug 17
1
RAID5 suddenly broken
Hello,
I have a RAID5 array on my CentOS 5.6 x86_64 workstation which
"suddenly" failed to work (actually after the system could not resume
from a suspend).
I had recently issues after moving the workstation to another office,
where one of the disks got accidentally unplugged. But the RAID was
working and it had reconstructed (as far as I can tell) the data.
After I replugged the disk,
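A typical first pass when a RAID5 stops assembling is to compare the members' superblocks before forcing anything; the device names below are assumptions, the post does not list them:
mdadm --examine /dev/sd[bcde]1          # compare event counts and array states per member
mdadm --assemble --scan --verbose       # try a normal assembly first
# last resort: force in a member with a slightly stale event count
# (risks losing the most recent writes)
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1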
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2006 Aug 06
2
File fragmentation
I've been running some tests on files created by rsync and noticing
fragmentation issues. I started the testing because our 5TB array started
performing very slowly and it appears fragmentation was the culprit. The
test I conducted was straightforward:
1. Copy over a 49GB file. Analyzed with contig (from sysinternals), no
fragments.
2. Ran rsync and the file was recreated normally (rsync
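By default rsync builds a temporary copy and renames it over the target, which is where the fresh allocation (and any fragmentation) comes from; one thing commonly tried is updating the file in place, sketched here with assumed paths, alongside the Linux-side counterpart of contig:
rsync -av --inplace /src/bigfile.dat /dest/   # rewrite the existing blocks instead of a temp copy
filefrag /dest/bigfile.dat                    # report the extent count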
2012 Jun 07
1
mdadm: failed to write superblock to
Hello,
I have a little problem. Our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition:
# mdadm --remove
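Based on the mdstat output above (sda1 failed in md0, sda3 failed in md2), the cleanup and replacement would look roughly like this; repeated superblock write failures usually mean the disk itself has to go:
mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md2 --remove /dev/sda3
# after replacing /dev/sda and re-partitioning it to match /dev/sdb:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md2 --add /dev/sda3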
2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed:
md0 is made up of two 250G disks, on which the OS and a very large /var
partition reside for a number of virtual machines.
md1 is made up of two 2T disks on which /home resides.
The challenge is that disk 0 of md0 is the problem, and it has a 524M /boot
partition outside of the RAID partition.
My plan is to back up /home (md1) and at a
2003 Sep 04
1
ext3 + external journal -- Howto..
I am new to ext3 + external journal. Is there any howto I can look at?
This is what I understand:
1. mke2fs -O journal_dev /dev/md5
2. mke2fs -J device=/dev/md5 /dev/md0
3. mount /dev/md0 / -t ext3 (hmm... what do I need to put in fstab?)
/dev/md5 is a two drive RAID 1 partition
/dev/md0 is a 4 drive RAID 5 partition.
questions:
1. I am running Red Hat 9.0. What extra software do I need to
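On the fstab question: the external journal's location is recorded in the filesystem's superblock when it is created with -J device=, so the entry is just a normal ext3 mount; a sketch assuming the layout above:
# /etc/fstab
/dev/md0    /    ext3    defaults    1 1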
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
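These messages come from the md monitor reading /etc/mdadm.conf; a few places to look, assuming stock paths on a systemd-based CentOS:
grep MAILADDR /etc/mdadm.conf    # address the DegradedArray events are mailed to
systemctl status mdmonitor       # the daemon (mdadm --monitor) that sends them
mdadm --detail /dev/md0          # confirm which member of md0 dropped out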
2010 Nov 14
3
RAID Resynch...??
I'm still coming up to speed with mdadm, and I noticed this morning that one of my
servers was acting sluggish... so when I looked at the mdadm RAID device I saw
this:
mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Mon Sep 27 22:47:44 2010
Raid Level : raid10
Array Size : 976759808 (931.51 GiB 1000.20 GB)
Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
Raid
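To tell whether the sluggishness is a resync or recovery in progress, and to throttle it if so, these are the usual places to check; md0 comes from the --detail output above, and the limit value is just an example:
cat /proc/mdstat                                   # an active resync shows a progress bar and ETA
cat /sys/block/md0/md/sync_action                  # idle, check, resync or recover
echo 10000 > /proc/sys/dev/raid/speed_limit_max    # cap the resync rate (KB/s per disk) to ease the load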