
Displaying 20 results from an estimated 3000 matches similar to: "DegradedArray message"

2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed: md0 is made up of two 250G disks on which the OS and a very large /var partition reside for a number of virtual machines. md1 is made up of two 2T disks on which /home resides. Challenge is that disk 0 of md0 is the problem and it has a 524M /boot partition outside of the raid partition. My plan is to back up /home (md1) and at a
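
A rough sketch of the swap being planned there, assuming the failing disk is /dev/sda with its md0 member on /dev/sda2 and /boot on /dev/sda1 outside the array, the healthy mirror on /dev/sdb, and the replacement arriving as /dev/sdc (all device names hypothetical):

  # Back up /home (md1) first, e.g. to an external disk at /mnt/backup
  rsync -aHAX /home/ /mnt/backup/home/

  # Mark the failing member faulty and pull it out of md0
  mdadm /dev/md0 --fail /dev/sda2
  mdadm /dev/md0 --remove /dev/sda2

  # Copy the partition layout from the healthy disk to the replacement
  sfdisk -d /dev/sdb | sfdisk /dev/sdc

  # Re-add and let the mirror rebuild; /boot lived outside the array
  # on the old disk, so it must be copied over and GRUB reinstalled by hand
  mdadm /dev/md0 --add /dev/sdc2
  cat /proc/mdstat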
2014 Dec 03
0
DegradedArray message
On Tue, Dec 02, 2014 at 08:14:19PM -0500, David McGuffey wrote:
> Received the following message in mail to root:
>
> Message 257:
> >From root at desk4.localdomain Tue Oct 28 07:25:37 2014
> Return-Path: <root at desk4.localdomain>
> X-Original-To: root
> Delivered-To: root at desk4.localdomain
> From: mdadm monitoring <root at desk4.localdomain>
> To:
2014 Dec 03
0
DegradedArray message
Hi David,

On 03.12.2014 at 02:14, David McGuffey <davidmcguffey at verizion.net> wrote:
> This is an automatically generated mail message from mdadm
> running on desk4
>
> A DegradedArray event had been detected on md device /dev/md0.
>
> Faithfully yours, etc.
>
> P.S. The /proc/mdstat file currently contains the following:
>
> Personalities : [raid1]
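
When one of these mdadm monitor mails arrives, the usual first steps look something like this (md0 as in the message, the member disk name generic):

  # [_U] or [U_] in the output means one mirror half is missing
  cat /proc/mdstat

  # Which member failed, and when
  mdadm --detail /dev/md0

  # Check the underlying disk's health before deciding to re-add or replace
  smartctl -a /dev/sda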
2014 Dec 09
2
DegradedArray message
On Thu, 2014-12-04 at 16:46 -0800, Gordon Messmer wrote:
> On 12/04/2014 05:45 AM, David McGuffey wrote:
>
> In practice, however, there's a bunch of information you didn't provide,
> so some of those steps are wrong.
>
> I'm not sure what dm-0, dm-2 and dm-3 are, but they're indicated in your
> mdstat. I'm guessing that you made partitions, and then made
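
The dm-0/dm-2/dm-3 names in question are kernel device-mapper nodes, typically LVM logical volumes layered on top of the md arrays; they can be mapped back to readable names with standard tools, for example:

  # Tree view: disks -> md arrays -> LVM volumes, with mount points
  lsblk

  # Device-mapper targets with their dm-N minor numbers
  dmsetup ls
  ls -l /dev/mapper/

  # LVM's own view of which devices back which logical volumes
  lvs -o +devices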
2010 Oct 19
3
more software raid questions
Hi all! Back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the raid1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126: Subject: DegradedArray
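
When a member has merely been kicked out (cable glitch, controller timeout) rather than actually died, re-attaching it is usually one command per array; a sketch, with /dev/sdb1 standing in for whichever partition dropped out of md0:

  # Confirm the member really is missing
  mdadm --detail /dev/md0

  # Re-add it; with a write-intent bitmap only stale blocks resync
  mdadm /dev/md0 --re-add /dev/sdb1

  # If --re-add is refused, fall back to a plain add (full resync)
  mdadm /dev/md0 --add /dev/sdb1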
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64 bit with the minimal ISO and used two disks in a RAID 1 array.

Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         97G  918M   91G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        485M   54M  407M  12% /boot
/dev/md3        3.4T  198M  3.2T   1% /vz

Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      511936 blocks super 1.0
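
Since the thread is about booting this layout: with /boot on a metadata-1.0 RAID1 as shown, each disk carries a directly readable copy of /boot, but the boot loader still has to be written to both MBRs. On CentOS 6 with legacy GRUB that is typically done from the grub shell, roughly like this (disk numbering assumed):

  # In the grub shell, install stage1 on both disks so either can boot
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> root (hd1,0)
  grub> setup (hd1)
  grub> quit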
2014 Dec 09
0
DegradedArray message
On Mon, 2014-12-08 at 21:11 -0500, David McGuffey wrote:
> On Thu, 2014-12-04 at 16:46 -0800, Gordon Messmer wrote:
> > On 12/04/2014 05:45 AM, David McGuffey wrote:
> >
> > In practice, however, there's a bunch of information you didn't provide,
> > so some of those steps are wrong.
> >
> > I'm not sure what dm-0, dm-2 and dm-3 are, but they're
2014 Dec 05
0
DegradedArray message
On 12/04/2014 05:45 AM, David McGuffey wrote:
> md0 is made up of two 250G disks on which the OS and a very large /var
> partition reside for a number of virtual machines.
...
> Challenge is that disk 0 of md0 is the problem and it has a 524M /boot
> partition outside of the raid partition.

Assuming that you have an unused drive port, you can fix that pretty easily. Attach a new
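
The fix being started there presumably continues along these lines, assuming the new disk appears as /dev/sdc and the surviving md0 member is on /dev/sdb (names hypothetical):

  # Give the new disk the same layout as the surviving member,
  # this time with /boot inside a small RAID1 of its own
  sfdisk -d /dev/sdb | sfdisk /dev/sdc

  # Create the /boot mirror degraded, on the new disk only;
  # metadata 1.0 keeps the superblock at the end so GRUB can read it
  mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=1.0 \
        /dev/sdc1 missing

  # Copy /boot into it now; the second half gets added once the
  # failing drive has been replaced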
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID.

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
      1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      524224 blocks [2/2] [UU]
unused devices: <none>

I have removed the partition:
# mdadm --remove
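
The usual sequence for swapping out the failed sda in a layout like that, continuing from the --remove the poster started (the replacement is assumed to come back as /dev/sda):

  # Pull the failed member out of every array it belongs to
  mdadm /dev/md0 --remove /dev/sda1
  mdadm /dev/md2 --remove /dev/sda3

  # After physically replacing the disk, mirror the partition table
  sfdisk -d /dev/sdb | sfdisk /dev/sda

  # Add the fresh partitions back; the arrays resync one after another
  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md2 --add /dev/sda3
  cat /proc/mdstat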
2010 Nov 18
1
kickstart raid disk partitioning
Hello. A couple of years ago I installed two file-servers using kickstart. The server has two 1TB sata disks with two software raid1 partitions as follows:

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[1] sda4[0]
      933448704 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda2[2](F)
      40957568 blocks [2/1] [_U]

Now the drives are starting to fail, and next week
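
For reference, the kickstart directives that produce a two-array raid1 layout like the one shown are roughly the following (sizes illustrative, not the poster's originals):

  # kickstart excerpt: member partitions, then the raid1 arrays
  part raid.01 --size=40000 --ondisk=sda
  part raid.02 --size=40000 --ondisk=sdb
  part raid.11 --size=1 --grow --ondisk=sda
  part raid.12 --size=1 --grow --ondisk=sdb

  raid /     --device=md0 --level=RAID1 raid.01 raid.02
  raid /home --device=md1 --level=RAID1 raid.11 raid.12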
2007 Mar 12
1
raid array device partition missing
From looking at mdstat it appears I have a RAID partition missing. There are NO errors in /var/log/messages about disk errors or anything. What happened to my sda1??? I had a power outage last week. Any ideas??? Jerry

more /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb3[1] sda3[0]
      236717696 blocks [2/2] [UU]
md0 : active raid1 sdb1[1]
      51199040 blocks [2/1] [_U]
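
A quick way to find out what became of a member that silently vanished from an array, before trying to put it back:

  # Does the partition still exist, and does it carry an md superblock?
  fdisk -l /dev/sda
  mdadm --examine /dev/sda1

  # If the superblock looks intact, re-attach it and let md0 resync
  mdadm /dev/md0 --add /dev/sda1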
2006 Feb 14
4
ChanIsAvail
Hi, So I've done my research on ChanIsAvail, read the wiki, checked the archive, but can't seem to find anything to suit my scenario. I've played around with it a lot, but I'm still scratching my head on what I need to do. What I need is to be able to accept a call by SIP and ring all telephones that are not in use (which just so happen to be on Zap interfaces, but might be SIP
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used MBR partition tables. I've installed my system on software raid1 (mdadm) using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, md2 (sda3, sdb3) for /home. From several how-tos concerning raid1 installation, I must put each partition on a different md device. I asked some time ago whether it's more correct to create the
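
One point that trips people up here: on UEFI the firmware reads the EFI System Partition directly, so the ESP is usually kept outside the md arrays (or mirrored only with metadata 1.0). A sketch of the per-disk GPT layout with sgdisk, sizes and type codes illustrative:

  # Per disk: ESP first, then one partition per md array
  sgdisk -n1:0:+512M -t1:EF00 /dev/sda   # EFI System Partition
  sgdisk -n2:0:+8G   -t2:FD00 /dev/sda   # swap  -> md0
  sgdisk -n3:0:+50G  -t3:FD00 /dev/sda   # /     -> md1
  sgdisk -n4:0:0     -t4:FD00 /dev/sda   # /home -> md2

  # Repeat for /dev/sdb, then build the three mirrors as usual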
2009 Apr 28
2
new install and software raid
Is there a reason why, after a software raid install (from kickstart), md1 is always unclean? md0 seems fine. The boot screen says md1 is dirty and cat /proc/mdstat shows md1 as being rebuilt. Any ideas? Jerry

--------------- my kickstart --------------
echo "bootloader --location=mbr --driveorder=$HD1SHORT --append=\"rhgb quiet\" " >
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
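
For what it's worth, mismatch_cnt is only meaningful after a completed check; while the initial build is still resyncing it can legitimately climb. Reading it and running an explicit scrub once the build finishes looks like this (array name as above):

  # Count of inconsistencies found by the last check
  cat /sys/block/md11/md/mismatch_cnt

  # Kick off a read-only consistency check and watch its progress
  echo check > /sys/block/md11/md/sync_action
  cat /proc/mdstat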
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all, I'm setting up CentOS 4.2 on 2x80GB SATA drives. The partition scheme is like this:

/boot = 300MB
/     = 9.2GB
/home = 70GB
swap  = 500MB

The RAID is RAID 1:

md0 = 300MB = /boot
md1 = 9.2GB = LVM
md2 = 70GB  = LVM
md3 = 500MB = LVM

Now, the confusing part is:
1. When creating VolGroup00, should I include all PVs (md1, md2, md3)? Then create the LVs.
2. When setting up RAID 1, should I
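
On question 1: all three md devices can indeed go into a single VolGroup00 as PVs; a minimal sketch of that variant (LV names and sizes illustrative):

  # Turn each mirror into an LVM physical volume
  pvcreate /dev/md1 /dev/md2 /dev/md3

  # One volume group spanning all of them
  vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3

  # Carve logical volumes out of the pooled space
  lvcreate -L 9G  -n LogVolRoot VolGroup00
  lvcreate -L 70G -n LogVolHome VolGroup00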
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has done this, for pitfalls, errors, or if I am just wrong. CentOS 5.x, software raid, 250GB drives. Two drives in the mirror, one spare, all the same size. Two devices in the mirror: one boot (about 100MB), one that fills the rest of the disk and contains LVM partitions. I was thinking of taking out the spare and adding a 500GB drive. I
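
The usual drive-upgrade dance for a raid1 like that goes one disk at a time, waiting for each resync in /proc/mdstat to finish (device names assumed):

  # Swap one 250GB member for the 500GB drive and let it resync
  mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
  # ...physically replace the disk, partition it larger...
  mdadm /dev/md1 --add /dev/sdb2

  # Repeat for the other member, then let the array claim the new space
  mdadm --grow /dev/md1 --size=max

  # Finally grow the LVM physical volume sitting on top
  pvresize /dev/md1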
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - used as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines:

mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
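
Two things stand out in the quoted commands: --level=0 builds a stripe, not a mirror, so for RAID1 the flag would be --level=1; and if previously used disks keep being auto-assembled at boot, their stale superblocks need wiping first. A sketch:

  # Erase leftover md metadata from a previously used partition
  # (/dev/sdd1 is a hypothetical stale member)
  mdadm --zero-superblock /dev/sdd1

  # RAID1 (mirror) rather than RAID0 (stripe)
  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 \
        /dev/sdb1 /dev/sdc1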
2018 Aug 29
3
Kickstart file for software raid
I am using a kickstart file for CentOS 7:

raid /     --device=md0 --fstype="xfs" --level=1 --useexisting
raid /home --noformat --device=md1 --level=1 --useexisting

It is erroring out on the --useexisting. The exact text is:

RAID volume "0" specified with "--useexisting" does not exist.

What did I do wrong? Jerry
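
--useexisting only works if anaconda can already see the arrays it is asked to reuse; when it cannot find them, it fails exactly like this. A variant that creates the arrays instead of reusing them might look as follows (disk names and sizes assumed):

  # kickstart excerpt: define the members, then build the arrays
  part raid.01 --size=50000 --ondisk=sda
  part raid.02 --size=50000 --ondisk=sdb
  raid / --device=md0 --fstype=xfs --level=1 raid.01 raid.02

  part raid.11 --size=1 --grow --ondisk=sda
  part raid.12 --size=1 --grow --ondisk=sdb
  raid /home --device=md1 --fstype=xfs --level=1 raid.11 raid.12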
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:

more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
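
When the ARRAY lines in /etc/mdadm.conf no longer match reality, the file can be regenerated from the kernel's current view, which clears up most of these "cached" surprises:

  # What md currently knows, with UUIDs
  mdadm --detail --scan

  # Append to a scratch copy, review, then replace /etc/mdadm.conf
  # (keep the DEVICE and MAILADDR lines)
  mdadm --detail --scan >> /etc/mdadm.conf.new

  # The initramfs carries its own copy, so rebuild it afterwards
  dracut -f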