Displaying 20 results from an estimated 3000 matches similar to: "Adding a new disk to an existing raid 10"
2009 May 08
3
Software RAID resync
I have configured 2x 500 GB SATA HDDs as software RAID1 with three partitions,
md0, md1 and md2, with md2 being 400+ GB.
Now, almost 36 hours later, the status is
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
104320 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
4096448 blocks [2/2] [UU]
resync=DELAYED
md2 : active raid1
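The resync=DELAYED on md0 and md1 is normal here: md resyncs only one array per set of physical disks at a time, so the small arrays wait until the large md2 finishes. To watch progress and, if the big array is crawling, raise the throttle (values are in KB/s; the numbers below are only suggestions):
  cat /proc/mdstat
  sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
  sysctl -w dev.raid.speed_limit_min=10000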
2005 Feb 03
2
RAID 1 sync
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to
sync!!???
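For scale: 18936 minutes is about 13 days, and 300 GB over that time works out to roughly 260 KB/s, far below what the disks can sustain, so the resync is almost certainly being throttled or competing with other I/O rather than running at disk speed. Raising md's rebuild limits (KB/s) usually pulls the estimate down dramatically:
  echo 50000 > /proc/sys/dev/raid/speed_limit_min
  echo 200000 > /proc/sys/dev/raid/speed_limit_max
  cat /proc/mdstat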
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck
everything. But I cannot.
The setup consists of two hard disks, carrying three RAID1 (ext3) file systems (/boot,
/, swap). The OS is an up-to-date CentOS 5.
So I boot from CentOS 5.3 dvd in rescue mode, do not mount the file
systems, and try to run
fsck -y /dev/md0
fsck -y /dev/md1
fsck -y /dev/md2
For each try I get an error message:
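The actual error text is cut off above, but the usual cause in rescue mode is that the md arrays are not assembled yet, so the /dev/mdX nodes don't exist. Assembling them first (and skipping whichever array is swap, since swap has no filesystem to check) normally lets fsck run; a sketch:
  mdadm --assemble --scan
  cat /proc/mdstat
  fsck -y /dev/md0
  fsck -y /dev/md1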
2011 Feb 07
2
iSCSI disk preparation
I am currently going through the process of installing/configuring an
iSCSI target and cannot find a good write up on how to prepare the disks
on the server. I would like to mirror the two disks and present them to
the client. Mirroring isn't the question; it's how I go about it that's the
problem. When I partitioned the two drives and mirrored them together,
then presented them to the client,
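One common approach is to keep the mirroring entirely on the target host and present the client a single block device: build an md RAID1 from one partition on each disk and export the md device as the LUN, so the client never sees the individual drives. A minimal sketch, assuming the disks are /dev/sdb and /dev/sdc and an LIO/targetcli-based target (both the device names and the target software are assumptions, not from the original message):
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  targetcli /backstores/block create name=mirror0 dev=/dev/md0
On older CentOS releases the same idea applies with tgtd, again using /dev/md0 as the backing store.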
2011 Jan 12
3
variable raid1 rebuild speed?
I have a 750GB 3-member software RAID1 where 2 partitions are always
present and the third is regularly rotated and re-synced (SATA disks in
hot-swap bays). The timing of the resync seems to be extremely variable
recently, taking anywhere from 3 to 10 hours even if the partition is
unmounted and the drives aren't doing anything else and regardless of
what I echo into
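The truncated sentence presumably refers to the global /proc/sys/dev/raid/speed_limit_* files; each array also has its own knobs under sysfs, and sync_speed there reports the current rebuild rate in KB/s, which makes it easier to see whether the throttle or the disks are the bottleneck. A sketch, assuming the array is /dev/md0 (the actual md name isn't shown):
  cat /sys/block/md0/md/sync_speed
  echo 50000 > /sys/block/md0/md/sync_speed_min
  echo 200000 > /sys/block/md0/md/sync_speed_max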
2017 Jan 25
2
CentOS 7 install on one RAID 1 [not-so-SOLVED]
Let me see if I can, um, reboot this thread....
I made a RAID 1 of two raw disks, /dev/sda and /dev/sdb, *not* /dev/sdax
/dev/sdbx. Then I installed CentOS 7 on the RAID, with /boot, /, and swap
being partitions on the RAID. My problem is that grub2-install absolutely
and resolutely refuses to install on /dev/sda or /dev/sdb.
I've currently got it up in a half-assed rescue mode, and have
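grub2-install needs somewhere to embed its core image: on a BIOS machine that means the gap after the MBR or a BIOS boot partition, and with the RAID built directly on the raw disks there is no partition table at all, so it refuses. The usual layout is to partition both disks first and build the md arrays on matching partitions; a sketch of the partitioning on one disk (disk names from the thread, sizes assumed, and this of course destroys the current contents):
  parted -s /dev/sda mklabel gpt
  parted -s /dev/sda mkpart biosboot 1MiB 2MiB
  parted -s /dev/sda set 1 bios_grub on
  parted -s /dev/sda mkpart raid 2MiB 100%
  parted -s /dev/sda set 2 raid on
With /boot on a partition-backed array, grub2-install /dev/sda then has a place to go.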
2011 Apr 01
5
question on software raid
dmesg is not reporting any issues.
The /proc/mdstat looks fine.
md0 : active raid1 sdb1[1] sda1[0]
X blocks [2/2] [UU]
however /var/log/messages says:
smartd[3392] Device /dev/sda 20 offline uncorrectable sectors
The machine is running fine, and the RAID array looks good - what
is up with smartd?
Thanks,
Jerry
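Nothing is necessarily wrong with md here: smartd reads the drive's own SMART counters, while md only notices a bad sector when a read or write of it actually fails. A scrub forces md to read every sector and rewrite any bad ones from the mirror, which usually either clears or confirms the smartd warning (recent CentOS releases ship a weekly raid-check cron job that does the same thing):
  smartctl -A /dev/sda
  echo check > /sys/block/md0/md/sync_action
  cat /sys/block/md0/md/mismatch_cnt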
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a
*second* bad drive. Actually, I'm starting to wonder about that
particular hot-swap bay.
Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... but
see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable
way to make either one active.
Actually, I would have expected the linux
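When an added device stays listed as a spare and never starts rebuilding, the usual reasons are that the array has no free slot to rebuild into (a failed member was never removed), or that recovery starts and immediately aborts on a read error from the surviving disk, which dmesg will show. Comparing the slot list from --detail with /proc/mdstat normally narrows it down; a sketch, assuming the array is /dev/md0 (the real array name isn't shown above):
  mdadm --detail /dev/md0
  dmesg | tail
  cat /proc/mdstat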
2019 Apr 23
2
Intel VROC experiences?
Hi,
Has anyone had any experience with Intel VROC[1]? I may have to deal with a new server with this technology and can't find much (real-world) information about it.
Looking at the specs, it's basically glorified fake RAID, which usually sets off my alarm bells. Has anyone done any testing? How does it compare with "real" RAID or software RAID?
Cheers,
Lucian
[1]
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 15:03, mark wrote:
>
>> I've no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 remove. I've added /dev/sdi1...
>> but see both /dev/sdh1 and
2007 Apr 25
2
Raid 1 newbie question
Hi
I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:
[root at server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
What is happening with md1?
My dmesg output is:
[root at
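The [2/1] [U_] on md1 means the array wants two members but only hda3 is active, so that mirror is currently running on a single disk; its partner has failed or been dropped, and dmesg/smartctl should say why. If the other disk is actually healthy, re-adding the missing partition starts a rebuild. A sketch, assuming the partner is /dev/hdc3 (inferred from the hda/hdc pairing of md0 and md2, not stated in the message):
  smartctl -a /dev/hdc
  mdadm /dev/md1 --add /dev/hdc3
  cat /proc/mdstat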
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
>From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
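The mail itself only says that md0 has lost a member; before re-adding or replacing anything, the useful follow-up is to see which device dropped out and why:
  mdadm --detail /dev/md0
  cat /proc/mdstat
  grep md0 /var/log/messages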
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2006 Mar 14
2
Help. Failed event on md1
Hi all,
This morning I received this notification from mdadm:
This is an automatically generated mail message from mdadm
running on server-mail.mydomain.kom
A Fail event had been detected on md device /dev/md1.
Faithfully yours, etc.
In /proc/mdstat I see this:
Personalities : [raid1]
md1 : active raid1 sdb2[2](F) sda2[0]
77842880 blocks [2/1] [U_]
md0 : active raid1 sdb1[1] sda1[0]
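The (F) next to sdb2 means md has kicked that member out after an I/O error, and md1 is now running on sda2 alone. The usual path is to check whether sdb really is dying, remove the failed member, and either re-add it or replace the disk; note that md0 also has a member on sdb (sdb1), which would need failing and removing as well before physically pulling the drive. A sketch (MBR-era disks assumed):
  smartctl -a /dev/sdb
  mdadm /dev/md1 --remove /dev/sdb2
and, once the disk has been replaced:
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  mdadm /dev/md1 --add /dev/sdb2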
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
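mismatch_cnt is really only meaningful for a check or repair run over an array whose parity is already supposed to be consistent; during the initial build the parity is being written for the first time, so a climbing count there is not, by itself, a sign of trouble. The number worth acting on is the one from a scrub after the initial sync has finished:
  echo check > /sys/block/md11/md/sync_action
  cat /proc/mdstat
  cat /sys/block/md11/md/mismatch_cnt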
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2012 Jun 07
1
mdadm: failed to write superblock to
Hello,
I have a little problem. Our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition:
# mdadm --remove
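Since sda members have already been failed out of md0 and md2, and a superblock write failure points at the drive itself, the usual next step is to fail sda out of md1 too, remove all three sda members, and replace the disk, then follow the same replace-and-re-add sequence as for any failed mirror member (copy the partition table across from sdb with sfdisk, then mdadm --add each partition back). A sketch of the removal side:
  mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
  mdadm /dev/md0 --remove /dev/sda1
  mdadm /dev/md2 --remove /dev/sda3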
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I
noticed one drive is giving errors. Good thing I had RAID. I planned
on upgrading this server in the next month or so. Just wondering if there
was an easy way to fix this to avoid rushing the upgrade? Having a
single drive is slowing down reads as well, I think.
Thanks.
Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
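If smartd is already logging errors for the drive, the gentlest interim fix is to fail the bad member out of the mirror so reads stop hitting it, and replace the disk when convenient; the array keeps running on the good drive in the meantime. A sketch, assuming a single array /dev/md0 with the failing member on /dev/sdb1 (the actual layout isn't shown above):
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  cat /proc/mdstat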
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
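Note that the quoted create lines build RAID0 stripes, not mirrors: --level=0 is striping with no redundancy. If the goal really is RAID1, the first command would presumably be
  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
and likewise for md1 and md2. The "automatically adds any disk" part of the subject is usually tamed separately, with an AUTO line and explicit ARRAY entries in /etc/mdadm.conf.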