similar to: mdadm on reboot

Displaying 20 results from an estimated 7000 matches similar to: "mdadm on reboot"

2008 Jun 07
1
Software raid tutorial and hardware raid questions.
I remember seeing one with an example migrating from an old-fashioned filesystem on a partition to a new filesystem on a mirrored LVM logical volume, but only one side of the mirror is set up at this time. First I need to copy stuff from what will become the second side of the mirror to the filesystem on the first side of the mirror. Then I will be ready to follow the rest of the tutorial and
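The general pattern being described might look roughly like the sketch below. The device names (/dev/sdb1 for the disk the new LVM side lives on, /dev/sda1 for the disk that still holds the old data, /olddata for its mount point) are hypothetical, and this is only a sketch of the approach, not the tutorial the poster remembers.

    # build the volume group and logical volume on the new disk only
    pvcreate /dev/sdb1
    vgcreate vg_data /dev/sdb1
    lvcreate -n lv_data -l 100%FREE vg_data
    mkfs.ext3 /dev/vg_data/lv_data

    # copy the data off the old plain partition
    mkdir -p /mnt/new
    mount /dev/vg_data/lv_data /mnt/new
    rsync -aHAX /olddata/ /mnt/new/

    # once the old partition is free, turn it into the second mirror leg
    pvcreate /dev/sda1
    vgextend vg_data /dev/sda1
    lvconvert --type raid1 -m 1 vg_data/lv_data   # older LVM: -m 1 --mirrorlog core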
2007 Oct 05
1
How to enable my RAID again
Hi, I set up software RAID two years ago on FC3 using the graphical installer (RAID1). In the meantime I installed CentOS 4.5 and everything is running fine. At least, that was my perception. It now turns out that the RAID is not working and that LVM just finds 4 partitions from four disks and that's it:
/dev/sda2  VolGroup00  lvm2  a-  94.88G  0
/dev/sdb1  VolGroup00  lvm2  a-
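A first-pass diagnosis for a situation like this (partition names taken from the excerpt, everything else an assumption) is to see whether the md superblocks still exist and whether the arrays can simply be reassembled:

    # is any md array running at all?
    cat /proc/mdstat

    # do the partitions LVM is using directly still carry RAID superblocks?
    mdadm --examine /dev/sda2 /dev/sdb1

    # try to assemble whatever the superblocks describe
    mdadm --assemble --scan

    # if that brings the arrays up, record them so they survive a reboot
    mdadm --examine --scan >> /etc/mdadm.conf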
2017 Sep 20
3
xfs not getting it right?
Chris Adams wrote: > Once upon a time, hw <hw at gc-24.de> said: >> xfs is supposed to detect the layout of an md-RAID device when creating the >> file system, but it doesn't seem to do that: >> >> >> # cat /proc/mdstat >> Personalities : [raid1] >> md10 : active raid1 sde[1] sdd[0] >> 499976512 blocks super 1.2 [2/2] [UU] >>
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - we use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
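Note that the commands in the excerpt pass --level=0 (striping) even though the subject line says RAID1; whether that mismatch is connected to the auto-add behaviour is not clear from the snippet. For reference, a mirror would be created and pinned down roughly as below (a sketch; device names taken from the excerpt):

    # RAID1 = --level=1
    mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2

    # record exactly these arrays so only the named members are assembled
    mdadm --detail --scan >> /etc/mdadm.conf
    dracut -f          # rebuild the initramfs so the config is seen at boot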
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID.

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
      1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      524224 blocks [2/2] [UU]
unused devices: <none>

I have removed the partition: # mdadm --remove
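The (F) flags show sda1 and sda3 already marked failed. The usual replacement cycle for a two-disk mirror in this state looks roughly like the following sketch (array and partition names taken from the excerpt; the truncated command at the end of the snippet is not reproduced here):

    # drop the failed members from the degraded arrays
    mdadm /dev/md0 --remove /dev/sda1
    mdadm /dev/md2 --remove /dev/sda3

    # after swapping the disk and recreating the partition layout
    # (e.g. sfdisk -d /dev/sdb | sfdisk /dev/sda), add the partitions back
    mdadm /dev/md0 --add /dev/sda1
    mdadm /dev/md2 --add /dev/sda3

    # watch the resync
    cat /proc/mdstat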
2017 Sep 20
4
xfs not getting it right?
Hi, xfs is supposed to detect the layout of an md-RAID device when creating the file system, but it doesn't seem to do that:

# cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sde[1] sdd[0]
      499976512 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

# mkfs.xfs /dev/md10p2
meta-data=/dev/md10p2    isize=512    agcount=4, agsize=30199892 blks
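Worth noting: md10 is a RAID1 mirror, and a plain mirror has no stripe geometry for mkfs.xfs to pick up, so nothing may be missing here at all. On striped levels the hints can also be given explicitly; the values below are placeholders, not a recommendation for this array:

    # su = RAID chunk size, sw = number of data disks in the stripe
    mkfs.xfs -d su=512k,sw=4 /dev/md10p2

    # after mounting, sunit/swidth show what geometry was recorded
    mount /dev/md10p2 /mnt/test
    xfs_info /mnt/test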
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks into the top bits of the type bitmask (and I hope we have), then we're fairly much there. Current code is at: git://, http://git.infradead.org/users/dwmw2/btrfs-raid56.git git://, http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git We have recovery working, as well as both full-stripe writes
2020 Nov 16
1
mdadm raid-check
On Sat, 2020-11-14 at 21:55 -0600, Valeri Galtsev wrote: > > On Nov 14, 2020, at 8:20 PM, hw <hw at gc-24.de> wrote: > > > > > > Hi, > > > > is it required to run /usr/sbin/raid-check once per week? CentOS 7 does > > this. Maybe it's sufficient to run it monthly? IIRC Debian did it monthly. > > On hardware RAIDs I do RAID
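On CentOS 7 the weekly run is just a cron job shipped with the mdadm package, so the interval is easy to change. The paths below are those of a stock CentOS 7 install (verify locally), and the alternative schedule shown is only an example:

    # the schedule lives here; e.g. change "0 1 * * Sun" to "0 1 1 * *"
    # to check on the first of every month instead of every Sunday
    cat /etc/cron.d/raid-check

    # which arrays are checked, niceness etc. are configured here
    cat /etc/sysconfig/raid-check

    # a check can also be started (or stopped) by hand for one array
    echo check > /sys/block/md0/md/sync_action
    echo idle  > /sys/block/md0/md/sync_action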
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a *second* bad drive. Actually, I'm starting to wonder about that particular hot-swap bay. Anyway, mdadm --detail shows /dev/sdb1 as removed. I've added /dev/sdi1... but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable way to make either one active. Actually, I would have expected the Linux
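When an added member sits as a spare instead of rebuilding, the usual things to look at are whether the array still thinks a failed member occupies the slot and whether its raid-devices count matches reality. A hedged sketch, with /dev/md0 standing in for whatever array is involved here:

    # which slots exist, which are active, which are spare?
    mdadm --detail /dev/md0

    # clear out any member still listed as failed or detached
    mdadm /dev/md0 --remove failed
    mdadm /dev/md0 --remove detached

    # a spare normally starts resyncing on its own once a slot is free;
    # if the device count itself is off, it can be corrected
    mdadm --grow /dev/md0 --raid-devices=2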
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote: >>>>>>>
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10-drive RAID6 array. OS is CentOS 5.3 (64-bit). All 10 drives are 2T in size. Devices sd{a,b,c,d,e,f} are on my motherboard; devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below).

# lspci
06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller

The controller is set to JBOD the drives. All
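Without the rest of the thread it is a guess what the "size issues" are, but two things are worth checking on a CentOS 5-era box with 2T drives: what size the kernel really reports for each disk, and which superblock format mdadm uses, since the old default 0.90 metadata caps each component at 2TB. A sketch:

    # what the kernel thinks each drive is (bytes)
    blockdev --getsize64 /dev/sd[abcdef] /dev/sd[ijkl]

    # use a 1.x superblock so the 2TB-per-component limit of 0.90 does not apply
    mdadm --create /dev/md0 --metadata=1.2 --level=6 --raid-devices=10 \
          /dev/sd[abcdef] /dev/sd[ijkl]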
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 as removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
2010 Mar 01
1
Fwd: Erika DeBenedictis-Recommendation
---------- Forwarded message ---------- From: Celia Einhorn <celia.einhorn at gmail.com> Date: Wed, Feb 17, 2010 at 8:15 PM Subject: Fwd: Erika DeBenedictis-Recommendation To: drew einhorn <drew.einhorn at gmail.com> ---------- Forwarded message ---------- From: David H. Kratzer <dhk at lanl.gov> Date: Tue, Feb 16, 2010 at 9:24 AM Subject: Fwd: Erika
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 18:47, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 15:03, mark wrote: >>> >>>> I've no idea what happened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particular hot-swap bay. >>>>
2014 Jul 25
2
Convert "bare partition" to RAID1 / mdadm?
I have a large disk full of data that I'd like to upgrade to SW RAID 1 with a minimum of downtime. Taking it offline for a day or more to rsync all the files over is a non-starter. Since I've mounted SW RAID1 drives directly with "mount -t ext3 /dev/sdX" it would seem possible to flip the process around, perhaps change the partition type with fdisk or parted, and remount as
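For reference, mounting an array member directly works because 0.90 and 1.0 superblocks live at the end of the device, so the filesystem still starts at block 0; the reverse trick (declaring an existing partition to be an md member in place) needs room for that superblock and is easy to get wrong. The more commonly described route is a degraded mirror, which keeps the data online during the copy and needs only a short cutover; a sketch with hypothetical names (/dev/sda1 old data, /dev/sdb1 new disk, /data the live mount point):

    # one-legged mirror on the new disk
    mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 /dev/sdb1 missing
    mkfs.ext3 /dev/md0

    # bulk copy while the old filesystem stays mounted and in use
    mkdir -p /mnt/md0
    mount /dev/md0 /mnt/md0
    rsync -aHAX /data/ /mnt/md0/

    # short downtime: final delta pass, swap the mounts, then absorb the old disk
    rsync -aHAX --delete /data/ /mnt/md0/
    mdadm /dev/md0 --add /dev/sda1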
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2009 Mar 09
2
LSI Logic MegaRAID 8480 Storage controller
I have an LSI Logic MegaRAID 8480 storage controller and I am having trouble reconfiguring one of the hardware RAID devices. It is configured with 4 hardware RAID logical volumes on /dev/sda /dev/sdb /dev/sdc /dev/sdd. I am in the middle of rebuilding the system, and at this point I am only using one of the volumes, /dev/sda. /dev/sda, /dev/sdb, and /dev/sdd are all 2-drive RAID1 mirrors. I will be
2013 Feb 11
1
mdadm: hot remove failed for /dev/sdg: Device or resource busy
Hello all, I have run into a sticky problem with a failed device in an md array, and I asked about it on the Linux RAID mailing list, but since the problem may not be md-specific, I am hoping to find some insight here. (If you are on the MD list and are seeing this twice, I humbly apologize.) The summary is that during a reshape of a raid6 on an up-to-date CentOS 6.3 box, one disk failed, and
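"Device or resource busy" on a hot remove usually means something still holds the disk open: the array itself, a partition, an LVM or multipath mapping, or a process. A hedged checklist (the disk name comes from the subject line; the array name /dev/md0 is an assumption):

    # make sure the array really no longer references the disk
    mdadm --detail /dev/md0
    mdadm /dev/md0 --fail /dev/sdg --remove /dev/sdg

    # who is still holding it open?
    lsof /dev/sdg
    ls /sys/block/sdg/holders/     # md*, dm-* entries mean another layer has it

    # once nothing holds it, the disk can be dropped at the SCSI layer
    echo 1 > /sys/block/sdg/device/delete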
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote: > On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote: > >> 3 - Can additional drive(s) be added later with a change in RAID level >> without current data loss? > > Only some systems support that sort of restriping, and it's a dangerous > activity (if the power fails or system crashes midway through
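With Linux md, adding a disk and even changing the level is done with --grow, and it is exactly the kind of long-running reshape the quoted reply warns about, so a fresh backup is the sane precondition. A sketch with hypothetical names, starting from a 3-disk RAID5:

    # add the new disk as a spare first
    mdadm /dev/md0 --add /dev/sde1

    # either restripe onto it as a 4-disk RAID5 ...
    mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup

    # ... or use it to change level instead, e.g. 3-disk RAID5 -> 4-disk RAID6
    mdadm --grow /dev/md0 --level=6 --raid-devices=4 --backup-file=/root/md0-grow.backup

    # the filesystem still has to be grown separately afterwards (RAID5 case)
    resize2fs /dev/md0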