similar to: CentOS's way of handling mdadm

Displaying 20 results from an estimated 10000 matches similar to: "CentOS's way of handling mdadm"

2018 Jan 24
0
CentOS's way of handling mdadm
> On 24.01.2018 at 19:17, hw <hw at adminart.net> wrote:
>
> Hi,
>
> what's the proposed way of handling mdadm in CentOS 7? I did not get
> any notification when a disk in a RAID1 failed, and now that the
> configuration has changed after resolving the problem, I might be
> supposed to somehow update /etc/mdadm.conf.
>
> Am I not supposed to be notified by
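For reference, failure notifications come from the mdmonitor service, which reads /etc/mdadm.conf; a minimal sketch of the usual CentOS 7 setup (the mail address is an assumption):

  mdadm --detail --scan >> /etc/mdadm.conf   # regenerate ARRAY lines after the layout changes
  echo "MAILADDR root" >> /etc/mdadm.conf    # where DegradedArray mails should go
  systemctl enable --now mdmonitor           # the service that actually sends the mail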
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:

more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
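The "cache" in cases like this is usually not mdadm.conf at all but the MD superblock stored on each member device; a sketch of how to make mdadm forget an array for good (device names are placeholders):

  mdadm --stop /dev/md2                         # the array must be stopped first
  mdadm --zero-superblock /dev/sda3 /dev/sdb3   # wipe the on-disk metadata on each member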
2005 May 21
1
Software RAID CentOS4
Hi, I have a system with two IDE controllers running RAID1. As a test I powered down, removed one drive (hdc), and powered back up. The system came up fine, so I powered down, installed a new drive (hdc), and powered back up. /proc/mdstat indicated RAID1 active with hda only. I thought it would auto-add the new hdc drive... Also, when I removed the new drive and added the original hdc, the swap partitions
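A blank replacement disk is not added automatically: it first needs a matching partition table, then a manual add. A minimal sketch, using the IDE device names from the post:

  sfdisk -d /dev/hda | sfdisk /dev/hdc   # copy the partition layout from the surviving disk
  mdadm /dev/md0 --add /dev/hdc1         # attach the new partition; the mirror rebuilds online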
2010 Oct 19
3
more software raid questions
Hi all! Back in August several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one each for /dev/md0, /dev/md1, /dev/md125 and /dev/md126: Subject: DegradedArray
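When a DegradedArray mail arrives, the usual first steps are to see which member dropped and, if the disk itself is healthy, reattach it; a sketch with assumed device names:

  cat /proc/mdstat                    # [_U] marks the degraded side of a mirror
  mdadm --detail /dev/md0             # shows which member is missing or faulty
  mdadm /dev/md0 --re-add /dev/sdb1   # reattach a kicked member if the disk checks out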
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID.

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
      1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      524224 blocks [2/2] [UU]
unused devices: <none>

I have removed the partition:

# mdadm --remove
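The (F) flags show sda failing in both md0 and md2, so the whole sda disk is suspect. The usual replacement sequence, sketched with the device names above:

  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1   # detach the faulty member (repeat for md2/sda3)
  # ...replace the disk and recreate its partition table, then:
  mdadm /dev/md0 --add /dev/sda1                       # the rebuild starts automatically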
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines:

mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
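Note that --level=0 creates a striped RAID0 set with no redundancy; for the RAID1 mirror the subject describes, the equivalent commands would be (same partition layout assumed):

  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # mirrored, not striped
  mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2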
2011 Nov 16
1
[PATCH] New API: mdadm-detail
Return a hash containing metadata about a specific Linux MD device, based on the output of 'mdadm -DY'.
---
 daemon/md.c                    | 78 ++++++++++++++++++++++++++++++++++++++++
 generator/generator_actions.ml | 31 ++++++++++++++++
 regressions/test-mdadm.sh      | 60 ++++++++++++++++++++++++++++++-
 src/MAX_PROC_NR                |  2 +-
 4 files changed, 169
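'mdadm -DY' is shorthand for 'mdadm --detail --export', which prints array metadata as KEY=VALUE pairs that parse naturally into a hash; illustrative output (the exact keys and values here are assumptions):

  MD_LEVEL=raid1
  MD_DEVICES=2
  MD_METADATA=1.2
  MD_UUID=55ff58b2:0abb5bad:42911890:5950dfce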
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
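When a RAID 5 array stops because members disappeared (rather than actually failed), recovery usually hinges on the per-member event counters; a sketch with assumed device names:

  mdadm --examine /dev/sd[bcde]1 | grep -i events   # members with close counts can still be assembled
  mdadm --assemble --force /dev/md0 /dev/sd[bcde]1  # force assembly from the freshest members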
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the latest libvirt, and now my RAID array with my VM storage is missing. It seems that the upgrade to mdadm-3.2.2 is the culprit. This is the output from mdadm when scanning that array:

# mdadm --detail --scan
ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
ARRAY /dev/md126 metadata=imsm
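With Intel BIOS RAID (imsm metadata) the first ARRAY line is the container and the actual volume shows up under a separate node such as /dev/md126; a sketch of useful checks when such an array goes missing (device names assumed):

  mdadm --detail-platform    # what the Intel RAID option ROM supports
  mdadm --examine /dev/sda   # shows the imsm container metadata on a member disk
  mdadm --assemble --scan    # retry assembly from /etc/mdadm.conf entries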
2008 Jun 11
3
mdmonitor not triggering program on fail events
Hi, can anyone help me with this, since I want to get it done "the correct way"? I'm trying to make mdmonitor execute a program when it detects a fail event automatically. Currently, from what I see, init is calling mdmonitor with these options:

mdadm --monitor --scan -f

(note that the --program is not there) and this is in my /etc/mdadm.conf:

MAILADDR root
PROGRAM
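A PROGRAM line in mdadm.conf is the standard hook for running a script on monitor events, and mdadm can fire a test event to verify the wiring; a sketch (the handler path is an assumption):

  # /etc/mdadm.conf
  MAILADDR root
  PROGRAM /usr/local/sbin/raid-event-handler   # invoked as: handler EVENT MD_DEVICE [COMPONENT]

  mdadm --monitor --scan --test --oneshot      # sends a TestMessage event for every array, then exits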
2022 Apr 18
1
Installing mdadm and C7 on new computer
I have a new computer with 2 x 2TB SSDs where I wanted to install C7 and use mdadm for a RAID1 configuration, encrypting the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/, which I adapted slightly with respect to partition sizes, using RAID1 for /boot and /root as well, and added the /home partition with RAID1 and chose to have /home
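The usual stacking order for this setup is mirror first, encryption on top; a minimal sketch (partition names and filesystem choice are assumptions):

  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  cryptsetup luksFormat /dev/md2    # encrypt the mirrored device, not the raw partitions
  cryptsetup luksOpen /dev/md2 home
  mkfs.xfs /dev/mapper/home         # xfs being the C7 default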
2014 Jul 25
2
Convert "bare partition" to RAID1 / mdadm?
I have a large disk full of data that I'd like to upgrade to SW RAID 1 with a minimum of downtime. Taking it offline for a day or more to rsync all the files over is a non-starter. Since I've mounted SW RAID1 drives directly with "mount -t ext3 /dev/sdX" it would seem possible to flip the process around, perhaps change the partition type with fdisk or parted, and remount as
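One commonly suggested path is to build the mirror degraded with the "missing" keyword, copy while the original stays online, and attach the old disk last, which keeps downtime to the final switch-over; a sketch with assumed device names:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # one-sided mirror on the new disk
  mkfs.ext3 /dev/md0 && mount /dev/md0 /mnt/new
  rsync -aHAX /data/ /mnt/new/   # initial copy runs while /data stays mounted
  mdadm /dev/md0 --add /dev/sda1 # after cutover: the old disk joins and resyncs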
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello, I have a weird problem after adding a new PV to an LVM volume group. It seems the error comes up only during boot time. Please read the story. I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens SAS 2.5" disks, which are bound into RAID1 pairs with Linux mdadm. The first pair of disks always has two arrays (md0, md1). Small md0 is used for booting and the rest - md1
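For an md-backed PV to survive reboots cleanly, the array must be assembled before LVM scans for volumes, so the new array belongs in /etc/mdadm.conf (and in the initrd, if the VG is needed early in boot); a sketch with assumed names:

  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  pvcreate /dev/md2                          # put LVM on the array, not on the raw disks
  vgextend vg_data /dev/md2                  # vg_data is a placeholder VG name
  mdadm --detail --scan >> /etc/mdadm.conf   # so the array assembles at boot, before LVM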
2022 Apr 24
3
Installing mdadm and C7 on new computer
On 04/23/2022 09:19 PM, H wrote:
> On 04/19/2022 09:57 AM, Roberto Ragusa wrote:
>> On 4/18/22 1:27 PM, H wrote:
>>> I have a new computer with 2 x 2TB SSDs where I wanted to install C7 and use mdadm for RAID1 configuration and encrypting the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/ which I adapted slightly with
2009 Nov 11
2
Lost raid when server reboots
Hi all, I have set up a RAID1 between two iSCSI disks, and the mdadm command goes well. Problems start when I reboot the server (CentOS 5.4, fully updated): the RAID is lost, and I don't understand why. "mdadm --detail --scan" doesn't return any output. "mdadm --examine --scan" returns:

ARRAY /dev/md0 level=raid1 num-devices=2
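That --examine still finds the superblocks means the metadata is intact and only boot-time assembly is failing, which with iSCSI members is usually an ordering problem: the disks appear only after md auto-assembly has already run. A hedged sketch of the usual workaround:

  mdadm --examine --scan >> /etc/mdadm.conf   # persist the ARRAY definition
  mdadm --assemble --scan                     # re-run after iSCSI login has made the disks visible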
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:

* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /

There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
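Creating that layout by hand, rather than through the installer, makes the "no spares" intent explicit; a sketch assuming the partitioning above:

  mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # /boot mirrored across all 4 disks
  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]3   # / as a 4-disk RAID5, no spare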
2007 Aug 29
2
Setting up RAID using mdadm on a proliant DL320 G4
Hi, I have a new server, an HP Proliant DL320 G4, with two 160 GB SATA HDDs. I have installed CentOS 4.5 with mdadm without any problem, but when I disconnect one disk the server does not boot, or I receive a kernel panic when booting... I have disabled the SATA embedded RAID (BIOS) and nothing. I've also downloaded the driver from the HP site (Embedded SATA RAID Controller Driver Diskette for Red
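A mirror only makes the data redundant; the boot loader still has to be present on both disks. With GRUB legacy on CentOS 4.x, the classic fix is to install it to the second disk's MBR as well (device names assumed):

  grub
  grub> device (hd0) /dev/sdb   # temporarily address the second disk as hd0
  grub> root (hd0,0)            # its /boot partition
  grub> setup (hd0)             # write GRUB to its MBR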
2019 Feb 26
2
Problem with mdadm, raid1 and automatically adds any disk to raid
> On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS
> <centos at centos.org> wrote:
>
>> >
>> > What makes you think this has *anything* to do with systemd? Bitching
>> > about systemd every time you hit a problem isn't helpful. Don't.
>>
>> If it's not systemd, who else does it? Can you elaborate, please?
>>