2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I
noticed one drive is giving errors. Good thing I had RAID. I planned
on upgrading this server in the next month or so. Just wondering if there
was an easy way to fix this to avoid rushing the upgrade? Having a
single drive is slowing down reads as well, I think.
Thanks.
Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
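The usual way out, without rushing the upgrade, is to hot-replace the failing member. A sketch, assuming the failing disk is /dev/sdb, its partition in the mirror is /dev/sdb1, and the array is /dev/md0 (all names illustrative, since the post only shows the smartd line):
# mdadm /dev/md0 --fail /dev/sdb1
# mdadm /dev/md0 --remove /dev/sdb1
(swap in the replacement disk, clone sda's partition table onto it)
# sfdisk -d /dev/sda | sfdisk /dev/sdb
# mdadm /dev/md0 --add /dev/sdb1
# cat /proc/mdstat
The slow reads fit a degraded mirror: with one member gone, every read is served by the surviving disk.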
2008 Oct 05
3
Software Raid Expert Needed
Hello all,
I have 2 x 250GB sata disks (sda and sdb).
# fdisk -l /dev/sda
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 14939 119997486 fd Linux raid autodetect
/dev/sda2 14940 29878
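When the second disk must carry an identical raid-autodetect layout, the partition table can be cloned rather than retyped. A minimal sketch for these MBR disks, assuming /dev/sdb is the blank target:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
# fdisk -l /dev/sdb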
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
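When mdadm seems to remember stale arrays, the cached copy usually lives in two places: /etc/mdadm.conf and the initrd the system boots from. A sketch for checking and refreshing both (the initrd command depends on the release: mkinitrd on CentOS 5, dracut -f on CentOS 6):
# mdadm --examine --scan
(compare the ARRAY lines this prints against /etc/mdadm.conf, fix the file, then rebuild the initrd)
# dracut -f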
2007 Mar 12
1
raid array device partition missing
From looking at mdstat it appears I have a RAID partition missing.
There are NO errors in /var/log/messages about disk errors or anything.
What happened to my sda1??? I had a power outage last week.
Any ideas??? Jerry
more /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb3[1] sda3[0]
236717696 blocks [2/2] [UU]
md0 : active raid1 sdb1[1]
51199040 blocks [2/1] [_U]
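If the kernel log and SMART really are clean, a power outage can leave a member marked failed without any media fault, and it can simply be re-attached. A sketch, assuming sda1 still carries its md superblock:
# mdadm --examine /dev/sda1
# mdadm /dev/md0 --add /dev/sda1
# cat /proc/mdstat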
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is
running CentOS 5.9 w/XEN. I have two 160 GB 2.5" laptop SATA drives
in two slots of a 4-slot hot swap bay configured like this:
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End
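The usual pattern for growing a RAID1-plus-LVM stack like this is to replace the members one at a time, then grow bottom-up. A sketch under assumed names (mirror /dev/md1 as the LVM physical volume, volume group vg0, logical volume lv_data; all illustrative):
# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
(swap in the larger disk, partition it, then:)
# mdadm /dev/md1 --add /dev/sda2
(wait for the resync, repeat for the second disk, then:)
# mdadm --grow /dev/md1 --size=max
# pvresize /dev/md1
# lvextend -l +100%FREE /dev/vg0/lv_data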
2006 Mar 14
2
Help. Failed event on md1
Hi all,
This morning I received this notification from mdadm:
This is an automatically generated mail message from mdadm
running on server-mail.mydomain.kom
A Fail event had been detected on md device /dev/md1.
Faithfully yours, etc.
In /proc/mdstat I see this:
Personalities : [raid1]
md1 : active raid1 sdb2[2](F) sda2[0]
77842880 blocks [2/1] [U_]
md0 : active raid1 sdb1[1] sda1[0]
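Before re-adding a member that md has flagged (F), it is worth checking whether the disk is actually dying. A sketch (smartctl is in the smartmontools package):
# smartctl -a /dev/sdb
# mdadm /dev/md1 --remove /dev/sdb2
# mdadm /dev/md1 --add /dev/sdb2
Only re-add if SMART looks clean; reallocated or pending sectors usually mean the disk should be replaced instead.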
2011 Feb 14
2
rescheduling sector linux raid ?
Hi List,
What does this mean?
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 2096384 blocks.
md: md0: sync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:sda2
disk 1, wo:0, o:1, dev:sdb2
sd 0:0:0:0:
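The md: lines are normal resync chatter: md announces the speed floor and ceiling it will honor, then reports completion. The truncated sd 0:0:0:0: line that follows is the start of a kernel device-error report, which is where messages like "rescheduling sector" come from, so the disk's SMART data deserves a look. A sketch (the two sysctl values are the defaults quoted in the log above):
# smartctl -a /dev/sda
# cat /proc/sys/dev/raid/speed_limit_min
1000
# cat /proc/sys/dev/raid/speed_limit_max
200000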
2012 Jun 07
1
mdadm: failed to write superblock to
Hello,
I have a little problem: our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition:
# mdadm --remove
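With sda flagged (F) in two arrays, the usual sequence is to evict it everywhere, replace the disk, and re-add each partition. A sketch using the names from the mdstat above (note md1 still lists sda2 as active, so it must be failed first):
# mdadm /dev/md0 --remove /dev/sda1
# mdadm /dev/md2 --remove /dev/sda3
# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
(replace the disk, clone the partition table from sdb, then:)
# sfdisk -d /dev/sdb | sfdisk /dev/sda
# mdadm /dev/md0 --add /dev/sda1
# mdadm /dev/md1 --add /dev/sda2
# mdadm /dev/md2 --add /dev/sda3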
2010 Oct 19
3
more software raid questions
hi all!
back in Aug several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the raid1 array.
something vaguely similar appears to have happened just a few mins ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray
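md125 and md126 are typically auto-assembly leftovers: udev found superblocks that no ARRAY line in /etc/mdadm.conf describes and assembled them under free minor numbers. A sketch for investigating (stop the extras only if they turn out to duplicate the real arrays):
# mdadm --detail /dev/md125
# mdadm --detail /dev/md126
# mdadm --examine --scan
(compare against /etc/mdadm.conf; if md125/md126 are duplicates, stop them)
# mdadm --stop /dev/md125 /dev/md126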
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has
done this for pitfalls, errors, or if I am just wrong.
Centos 5.x, software raid, 250gb drives.
2 drives in mirror, one spare. All same size.
2 devices in the mirror, one boot (about 100MB), one that fills the rest of
disk and contains LVM partitions.
I was thinking of taking out the spare and adding a 500gb drive.
I
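The plan is sound; one refinement is to let md do the data moves. A sketch, assuming md1 is the LVM-bearing mirror and the new 500 GB disk's large partition is /dev/sdc2 (names illustrative):
# mdadm /dev/md1 --add /dev/sdc2
# mdadm /dev/md1 --fail /dev/sda2
(the rebuild lands on the new disk; repeat with the second 500 GB drive, then:)
# mdadm --grow /dev/md1 --size=max
The array keeps its old 250 GB size until both members are large and --grow is run.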
2009 May 08
3
Software RAID resync
I have configured two 500 GB SATA HDDs as software RAID1 with three partitions
(md0, md1 and md2), md2 being 400+ GB.
Now, almost 36 hours in, the status is
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
104320 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
4096448 blocks [2/2] [UU]
resync=DELAYED
md2 : active raid1
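resync=DELAYED is not a hang: md resyncs only one array on a given set of disks at a time, so md0 and md1 wait in line while the 400+ GB md2 syncs. Progress can be watched with:
# cat /proc/mdstat
# cat /sys/block/md2/md/sync_action
resync
Once md2 finishes, the delayed arrays start automatically.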
2007 Apr 25
2
Raid 1 newbie question
Hi
I have a Raid 1 centos 4.4 setup and now have this /proc/mdstat output:
[root@server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
What happened with md1?
My dmesg output is:
[root@
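[U_] on md1 means its second slot is empty; the hdc member has dropped out. If dmesg and SMART show no media errors on hdc, re-attaching is one command. A sketch, assuming hdc3 was the missing half:
# mdadm /dev/md1 --add /dev/hdc3
# cat /proc/mdstat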
2010 Nov 18
1
kickstart raid disk partitioning
Hello.
A couple of years ago I installed two file-servers
using kickstart. The server has two 1TB sata disks
with two software raid1 partitions as follows:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[1] sda4[0]
933448704 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda2[2](F)
40957568 blocks [2/1] [_U]
Now the drives are starting to fail, and next week
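For the rebuild, the same layout can be reproduced in the kickstart file itself. A minimal sketch of the relevant directives (sizes and mount points illustrative, syntax per RHEL/CentOS 5-era kickstart):
part raid.01 --size=40000 --ondisk=sda --asprimary
part raid.02 --size=40000 --ondisk=sdb --asprimary
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
raid / --level=RAID1 --device=md0 raid.01 raid.02
raid /home --level=RAID1 --device=md1 raid.11 raid.12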
2014 Jan 24
4
Booting Software RAID
I installed Centos 6.x 64 bit with the minimal ISO and used two disks
in RAID 1 array.
Filesystem Size Used Avail Use% Mounted on
/dev/md2 97G 918M 91G 1% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/md1 485M 54M 407M 12% /boot
/dev/md3 3.4T 198M 3.2T 1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0
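The super 1.0 on md1 matters: metadata at the end of the partition lets the bootloader read /boot as a plain filesystem. The other half of booting from RAID1 is installing GRUB on both disks so the box still boots if sda dies. A sketch for CentOS 6's grub-legacy (the hd0 mapping is an assumption; check /boot/grub/device.map):
# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit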
2013 Mar 05
8
Software RAID complete drives or individual partitions
I have been reading about software raid. I configured my first software raid system about a month ago.
I have 4 x 500 GB drives configured in RAID 5, with a total of 1.5 TB.
Currently the complete individual drives are configured as software raid, combined into /dev/md0.
I then created a /file_storage partition on /dev/md0.
I created my /boot / and swap partitions on
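Both layouts work, but the common recommendation is to build arrays from partitions rather than bare drives: every disk then carries a partition table that rescue tools recognize, and a slightly smaller replacement disk can still be accommodated. A sketch of the partition-based equivalent (device names illustrative):
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1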
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
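Two things stand out. First, --level=0 builds a stripe, not a mirror; a RAID1 array needs --level=1. Second, unwanted auto-assembly can be restrained from /etc/mdadm.conf with an AUTO line (see man mdadm.conf; treat the exact behavior as version-dependent). A sketch:
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
and in /etc/mdadm.conf:
AUTO -all
ARRAY /dev/md0 UUID=...
With AUTO -all, only arrays named in ARRAY lines are assembled automatically.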
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a
*second* bad drive. Actually, I'm starting to wonder about that
particular hot-swap bay.
Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... but
see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable
way to make either one active.
Actually, I would have expected the linux
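A spare only turns active once md starts a rebuild into it, so two permanent spares on a degraded array mean the recovery is not being triggered. A sketch for digging in (array name illustrative):
# mdadm --detail /dev/md0
# cat /sys/block/md0/md/sync_action
# mdadm /dev/md0 --remove failed
The literal keyword failed removes every faulty member at once; with the failed slot cleared, recovery onto a spare normally starts by itself.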
2013 Jan 04
2
Syslinux 5.00 - Doesn't boot my system / Not passing the kernel options to the kernel?
Hi,
I am encountering a problem with Syslinux 5.00 that I cannot really describe, so I
created two small videos:
Booting with Syslinux 5.00 (1.3 MB):
<https://www.dropbox.com/s/b6g8cdf2t9v48c6/boot-syslinux5-fail.mp4>
How I fixed the problem by downgrading to Syslinux 4.06 and how booting
should look like (6.5 MB):
<https://www.dropbox.com/s/lt7cpgfm0qvqtba/boot-syslinux5-how-i-fixed-it.mp4>
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
From root@desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root@desk4.localdomain>
X-Original-To: root
Delivered-To: root@desk4.localdomain
From: mdadm monitoring <root@desk4.localdomain>
To: root@desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
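The mail only says that md0 is degraded; which member dropped out is in the array detail and the kernel log. A sketch for the follow-up:
# mdadm --detail /dev/md0
# grep md0 /var/log/messages
The detail output's Failed Devices count and any removed slot identify the partition to investigate or re-add.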
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week