similar to: Software RAID1 Failure Help

Displaying 20 results from an estimated 1100 matches similar to: "Software RAID1 Failure Help"

2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has 2 Western Digital 1.5TB SATA2 drives in RAID1.
[root at server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root at server ~]#
It's barebones
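A minimal check for this scenario, assuming the sluggish disk I/O follows the fresh install, where the initial RAID1 resync of the 1.5TB mirror may still be running (device names follow the df output above):
# see whether md2 is still resynchronising; a fresh 1.5TB mirror can take hours
cat /proc/mdstat
# per-array detail, including sync status and any failed members
mdadm --detail /dev/md2
# live per-disk utilisation (requires the sysstat package)
iostat -x 5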
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything, but I cannot. The setup consists of two hard disks carrying three RAID1 (ext3) file systems (boot, /, swap). The OS is up-to-date CentOS 5. So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file systems, and try to run:
fsck -y /dev/md0
fsck -y /dev/md1
fsck -y /dev/md2
For each try I get an error message:
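A hedged sketch of one common cause: the rescue environment has not assembled the arrays, so the /dev/md* nodes do not exist yet. Assuming that is the case here:
# assemble every array described by the md superblocks on the member disks
mdadm --assemble --scan
# confirm the md devices are now active
cat /proc/mdstat
# then fsck the unmounted ext3 arrays (skip whichever md is swap)
fsck -y /dev/md0
fsck -y /dev/md2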
2006 Aug 10
3
MD raid tools ... did I miss something?
Hi, I have a degraded array, /dev/md2:
$ mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Thu Oct 6 20:31:57 2005
     Raid Level : raid5
     Array Size : 221953536 (211.67 GiB 227.28 GB)
    Device Size : 110976768 (105.84 GiB 113.64 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 2
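A minimal sketch for bringing the third member back into this raid5, assuming the missing device is /dev/sdc1 (a placeholder; the real name has to come from the array's own device listing):
# identify which slot is empty and which device dropped out
mdadm --detail /dev/md2
# re-add the repaired or re-attached partition; the raid5 rebuild starts automatically
mdadm /dev/md2 --add /dev/sdc1
# watch reconstruction progress
cat /proc/mdstat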
2009 May 08
3
Software RAID resync
I have configured 2x 500GB SATA HDDs as software RAID1 with three partitions, md0, md1 and md2, with md2 at 400+ GB. Now, almost 36 hours in, the status is:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
        resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
      4096448 blocks [2/2] [UU]
        resync=DELAYED
md2 : active raid1
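md only resyncs one array per set of shared disks at a time, so md0 and md1 stay DELAYED until the large md2 finishes. A sketch for checking and, if desired, raising the resync throttle (the value shown is an example, not a recommendation):
# current resync limits in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# raise the floor so the resync is not starved by normal I/O
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# progress and estimated finish time
cat /proc/mdstat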
2007 Apr 25
2
Raid 1 newbie question
Hi, I have a RAID1 CentOS 4.4 setup and now have this /proc/mdstat output:
[root at server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
      1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
      77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]
What is happening with md1? My dmesg output is: [root at
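md1 is running on one mirror half only (hda3). A minimal sketch of re-adding the other half, assuming the missing partition is /dev/hdc3 by analogy with md0/md2 above:
# confirm which member is missing and why it was dropped
mdadm --detail /dev/md1
# re-add the absent partition; the mirror resynchronises automatically
mdadm /dev/md1 --add /dev/hdc3
cat /proc/mdstat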
2011 Aug 17
1
RAID5 suddenly broken
Hello, I have a RAID5 array on my CentOS 5.6 x86_64 workstation which "suddenly" failed to work (actually after the system could not resume from a suspend). I recently had issues after moving the workstation to another office, where one of the disks got accidentally unplugged. But the RAID was working, and it had reconstructed the data (as far as I can tell). After I replugged the disk,
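A cautious sketch for diagnosing this kind of failure, assuming three members named /dev/sdb1, /dev/sdc1 and /dev/sdd1 purely for illustration; --force re-assembles from slightly stale superblocks and should only follow a careful reading of the --examine output:
# compare the event counters and array state recorded on each member
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
# stop any half-assembled array, then force assembly from the members that agree
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1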
2009 May 20
2
help with rebuilding md0 (Raid5)
Sorry, this is going to be a rather long post... Here's the situation: I have 4 IDE disks from an old SNAP server which fails to mount the RAID array. We believe there is a controller error on the SNAP, so we've put the disks in another box running CentOS 5 and can see them OK. hda through hdd look like this:
Disk /dev/hdd: 185.2 GB, 185283624960 bytes
255 heads, 63 sectors/track, 22526
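A minimal, non-destructive sketch for probing disks pulled from another box, assuming the SNAP array used plain md superblocks on the first partition of each disk (partition names are guesses from the fdisk output):
# look for md superblocks; note the array UUID, level and member order on each
mdadm --examine /dev/hda1 /dev/hdb1 /dev/hdc1 /dev/hdd1
# try assembling from whatever superblocks are found (no rebuild is started)
mdadm --assemble --scan
cat /proc/mdstat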
2006 Apr 23
1
RAID question
Hello. I have several systems (CentOS 3 and CentOS 4) with software RAID, and I've observed a difference in the RAID state. On CentOS 3 systems we find that mdadm --detail /dev/md* shows:
        Version : 00.90.00
  Creation Time : Fri Apr 16 14:59:43 2004
     Raid Level : raid1
     Array Size : 20289984 (19.35 GiB 20.78 GB)
    Device Size : 20289984 (19.35 GiB 20.78 GB)
2010 Nov 14
3
RAID Resynch...??
So, still coming up to speed with mdadm, I noticed this morning that one of my servers was acting sluggish. When I looked at the mdadm RAID device I saw this:
mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Sep 27 22:47:44 2010
     Raid Level : raid10
     Array Size : 976759808 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
Raid
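A small sketch for confirming whether a resync or scheduled check is what is making the box sluggish, and for throttling it if so (the ceiling value is only an example):
# shows "resync", "check" or "recovery" progress if one is running
cat /proc/mdstat
# lower the per-device ceiling (KB/s) so normal I/O gets priority over the rebuild
echo 10000 > /proc/sys/dev/raid/speed_limit_max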
2006 Jun 24
3
recover data from linear raid
Hello, I had a Scientific Linux 3.0.4 system (RHEL-compatible) with 3 IDE disks, one for / and two others in a linear RAID (250 GB and 300 GB). This system was obsoleted, so I moved the RAID disks to a new Scientific Linux 3.0.7 installation. However, the RAID array was not detected (I put the disks on the same channels and the same master/slave setup as in the previous installation). In fact
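A hedged sketch for re-detecting a moved linear array, assuming the two members are /dev/hdb and /dev/hdd (substitute the real devices or partitions; nothing here rewrites data):
# check that the old md superblocks survived the move, and in what order
mdadm --examine /dev/hdb /dev/hdd
# assemble the linear array explicitly from its members
mdadm --assemble /dev/md0 /dev/hdb /dev/hdd
# or print config lines for everything found, for /etc/mdadm.conf
mdadm --examine --scan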
2010 Feb 28
3
puzzling md error ?
This has never happened to me before, and I'm somewhat at a loss. I got an email from the weekly cron job:
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md10
WARNING: mismatch_cnt is not 0 on /dev/md11
OK, md10 and md11 are each RAID1s made from 2 x 72GB SCSI drives, on a Dell 2850 or similar dual single-core 3GHz server. These two MDs are in
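For context, a sketch of how the weekly script's check can be re-run by hand and, if the mismatches persist, repaired. On RAID1 a non-zero count is often harmless (swap or in-flight writes at the time of the check), so treat this as a diagnostic:
# re-run the consistency check on one array
echo check > /sys/block/md10/md/sync_action
cat /proc/mdstat                      # wait for the check to finish
cat /sys/block/md10/md/mismatch_cnt   # sectors found inconsistent
# rewrite mismatched blocks from the first good copy
echo repair > /sys/block/md10/md/sync_action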
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
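If the installer will not produce that layout, a rough sketch of creating it by hand from a shell, assuming the partitions already exist as listed above (metadata versions and the RAID5 choice for / are illustrative, not taken from the thread):
# 4-way RAID1 for /boot (older 1.0 metadata keeps the partition directly bootable)
mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0 /dev/sd[abcd]1
# RAID1 across the four swap partitions
mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2
# RAID5 with no spares for / across the third partitions
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3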
2008 Feb 06
4
Installation problems with large mirrored drives
I am trying to install CentOS 4.6 to a pair of 750GB hard drives. I can successfully install to either of the drives as a single drive, but when I try to use both drives and mirror the partitions, I start having problems. Anaconda crashes as it is trying to format the drives. This is what I'm trying to create:
/dev/md0: 200MB, /boot
/dev/md1: 2GB, swap
/dev/md2: rest of the
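One possible workaround, offered only as a sketch: build the mirrors ahead of time from a shell (for example the installer's second console or a rescue boot), assuming the two disks are /dev/sda and /dev/sdb and are already partitioned identically, and then let the installer reuse the existing md devices rather than create them:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3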
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
      1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition: # mdadm --remove
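A sketch of the usual replacement sequence when one disk (here sda) is failing, assuming the partition layout can simply be copied from the surviving disk; double-check device names before running anything like this:
# mark and remove the failing members (md1 still lists sda2 as active)
mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md2 --remove /dev/sda3
# after physically swapping the disk, clone the partition table from sdb
sfdisk -d /dev/sdb | sfdisk /dev/sda
# re-add the new partitions and let the mirrors rebuild
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda3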
2008 Mar 23
4
md raid1 - no speed improvement
Hi, I have two 320 GB SATA disks (/dev/sda, /dev/sdb) in a server running CentOS release 5. They both have three partitions set up as RAID1 using md (boot, swap, and an LVM data partition).
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
      4192896 blocks [2/2] [UU]
md2 : active raid1 sdb3[1]
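Worth noting: md RAID1 does not stripe a single sequential read across both disks; it balances separate requests, so a lone benchmark runs at single-disk speed. A rough way to see the difference, assuming nothing else needs the array while testing (reads only, nothing is written):
# one sequential reader: roughly single-disk throughput
dd if=/dev/md2 of=/dev/null bs=1M count=2048
# two concurrent readers at different offsets: each can be served by a different disk
dd if=/dev/md2 of=/dev/null bs=1M count=2048 &
dd if=/dev/md2 of=/dev/null bs=1M count=2048 skip=4096 &
wait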
2010 Aug 18
3
Wrong disk size problem.
Hi, we have a CentOS 5.4 server and, as far as I can tell, we have a strange problem. The disk sizes and other information are below. Normally the md2 partition should have 46GB of free disk space, but the available value is zero. Why does it show zero? If you can help me, I will be happy.
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1               19G  2.1G   16G  12% /
/dev/md2              880G  834G     0
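The missing ~46GB is consistent with ext3's default 5% of blocks reserved for root, which df reports as unavailable to ordinary users. A hedged sketch for confirming and shrinking that reservation (verify the device name first; tune2fs can do this on a mounted ext3 file system):
# show how many blocks are reserved for root
tune2fs -l /dev/md2 | grep -i 'reserved block'
# reduce the reservation to 1% to free space for ordinary users
tune2fs -m 1 /dev/md2
df -h /dev/md2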
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
>From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
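A small sketch of the usual follow-up to a DegradedArray mail: find out which member dropped out of /dev/md0 and re-add or replace it (the partition name below is a placeholder):
# which device is missing or marked faulty, and the array's current state
mdadm --detail /dev/md0
cat /proc/mdstat
# once the disk is known to be healthy (or has been replaced), re-add it
mdadm /dev/md0 --add /dev/sdb1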
2006 May 10
1
pop3 problem with small messages with no Subject: and no To: headers
Hello, I'm seeing a strange thing with Dovecot 1.0 b7 POP3. If a user gets a particular _very_ short spam message (not sure what virus produces these), with NO Subject: and NO To: headers and NO body (example available upon request), the user is unable to download the mail. The log entry is:
May 10 10:51:35 mail dovecot: POP3(user): Disconnected top=0/0, retr=1/1344, del=0/75, size=16383381
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :)
On 30/03/2023 11:26, Hu Bert wrote:
> Just an observation: is there a performance difference between a sw
> raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
> with
> the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
>
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has done this, for pitfalls, errors, or if I am just wrong. CentOS 5.x, software RAID, 250GB drives: two drives in a mirror, one spare, all the same size. There are two md devices in the mirror, one for boot (about 100MB) and one that fills the rest of the disk and contains LVM partitions. I was thinking of taking out the spare and adding a 500GB drive. I
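A hedged outline of that plan with mdadm, assuming the new 500GB disk shows up as /dev/sdc and the large mirror is /dev/md1; the extra capacity only becomes usable after both mirror halves are 500GB and the array, the LVM PV and the file systems are grown in turn:
# partition the new disk like the old ones, then swap it in for one member
mdadm /dev/md1 --add /dev/sdc2
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
# repeat for the second 250GB member once the first rebuild completes, then grow
mdadm --grow /dev/md1 --size=max
pvresize /dev/md1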