similar to: Lost mdadm.conf

Displaying 20 results from an estimated 4000 matches similar to: "Lost mdadm.conf"

2009 Dec 20
1
mdadm help
Hey List, So I had a 4 drive software RAID 5 set up consisting of /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1. I reinstalled my OS and after the reinstall I made the mistake of re-assembling the array incorrectly by typing "sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde" in a moment of stupidity. Obviously this didn't work and the array wouldn't mount and
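The mistake described here is assembling from whole-disk names when the RAID superblocks live on the partitions. A minimal recovery sketch, assuming the partition names from the excerpt and that nothing was written to the disks since:
  mdadm --stop /dev/md0
  mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  # or let mdadm locate the members by their superblocks
  mdadm --assemble --scan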
2007 Jun 10
1
mdadm Linux Raid 10: is it 0+1 or 1+0?
The relevance of this question can be found here: http://aput.net/~jheiss/raid10/ I read the mdadm documents but I could not find a positive answer. I even read the raid10 module source but I didn't find the answer there either. Does someone here know it? Thank you!
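For background, the md raid10 personality is a single driver with its own near/far/offset layouts rather than literal nested RAID 0+1 or 1+0. An illustrative sketch (device names and layout assumed, not taken from the thread):
  mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  mdadm --detail /dev/md0    # the Layout field reports near/far/offset and the number of copies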
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and includes various fixes for the other two patches. Rich.
2018 Sep 11
1
[PATCH] daemon: consider /etc/mdadm/mdadm.conf while inspecting mountpoints.
From: Nikolay Ivanets <stenavin@gmail.com> Inspection code checks /etc/mdadm.conf to map MD device paths listed in mdadm.conf to MD device paths in the guestfs appliance. However, on some operating systems (e.g. Ubuntu) mdadm.conf has an alternative location: /etc/mdadm/mdadm.conf. This patch considers the alternative location of mdadm.conf as well. --- daemon/inspect_fs_unix_fstab.ml | 13
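A minimal shell sketch of the lookup order the patch describes, using only the two paths named in the excerpt:
  for conf in /etc/mdadm.conf /etc/mdadm/mdadm.conf; do
      [ -f "$conf" ] && { echo "using $conf"; break; }
  done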
2018 Jan 24
2
CentOS's way of handling mdadm
Hi, what's the proposed way of handling mdadm in CentOS 7? I did not get any notification when a disk in a RAID1 failed, and now that the configuration has changed after resolving the problem, I might be supposed to somehow update /etc/mdadm.conf. Am I not supposed to be notified by default when something goes wrong with an array? How do I update /etc/mdadm.conf? I'm used to all this working
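Two steps commonly suggested for this situation, shown here as a sketch rather than the thread's confirmed answer: regenerate the ARRAY lines, and make sure the monitoring daemon can reach you.
  mdadm --detail --scan > /etc/mdadm.conf    # then add a MAILADDR line, e.g. MAILADDR root
  systemctl enable mdmonitor                 # CentOS 7 service that mails failure events
  systemctl start mdmonitor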
2011 Nov 24
1
mdadm / Ubuntu 10.04 error
md_create: mdadm: boot: mdadm: boot is not a block device. at /home/rjones/d/libguestfs/images/guest-aux/make-fedora-img.pl line 95. Looking into this, it appears the old version of mdadm shipped in Ubuntu (mdadm 2.6.7) doesn't support the notion of giving arbitrary names to devices. Thus you have to do: mdadm --create /dev/md0 [devices] We do: mdadm --create boot [devices] which it
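For comparison, a hedged illustration of the two invocation styles (device names assumed): old mdadm such as 2.6.7 only accepts a numbered node, while newer releases also take a name.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1                    # works everywhere
  mdadm --create /dev/md/boot --name=boot --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1    # newer mdadm only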
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a *second* bad drive. Actually, I'm starting to wonder about that particular hot-swap bay. Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable way to make either one active. Actually, I would have expected the linux
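A hedged checklist for this kind of stuck-spare situation, not a confirmed fix; /dev/mdX stands in for the array name the excerpt does not give:
  cat /proc/mdstat                    # is a rebuild actually running?
  mdadm --detail /dev/mdX             # how many slots are active vs. spare?
  mdadm /dev/mdX --remove failed      # drop slots already marked failed
  mdadm /dev/mdX --re-add /dev/sdi1   # or --add; a healthy array should start rebuilding onto it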
2011 Nov 16
1
[PATCH] New API: mdadm-detail
Return a hash containing metadata about a specific Linux MD device, based on the output of 'mdadm -DY'. --- daemon/md.c | 78 ++++++++++++++++++++++++++++++++++++++++ generator/generator_actions.ml | 31 ++++++++++++++++ regressions/test-mdadm.sh | 60 ++++++++++++++++++++++++++++++- src/MAX_PROC_NR | 2 +- 4 files changed, 169
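For context, 'mdadm -DY' is shorthand for --detail --export, which prints key=value pairs that are easy to parse; an illustrative (not verbatim) fragment of that output:
  # mdadm -DY /dev/md0
  MD_LEVEL=raid1
  MD_DEVICES=2
  MD_METADATA=1.2
  MD_UUID=53729ebb:2b70d6c6:3d0b5d9d:7e0e3e2a
  MD_NAME=host:0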
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the latest libvirt and now my RAID array with my VM storage is missing. It seems that the upgrade to mdadm-3.2.2 is the culprit. This is the output from mdadm when scanning that array, # mdadm --detail --scan ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b ARRAY /dev/md126 metadata=imsm
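A hedged set of first checks for an imsm container that stopped assembling after an mdadm upgrade; none of this is the thread's confirmed resolution:
  cat /proc/mdstat            # is the container present without its member volume?
  mdadm --examine --scan      # what the on-disk superblocks still describe
  mdadm --assemble --scan     # try to assemble everything mdadm can find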
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
2007 Oct 15
2
mdadm exim mysql
I installed a CentOS-5 core OS (using --nobase in my kickstart). For some reason, it included mysql-5.0.22. When I do "yum remove mysql", it says it will also remove exim and mdadm for dependencies. I don't care that exim will be removed, but I need mdadm as I'm doing software RAID. But why are these even related? When I do: rpm -q --requires mysql neither exim nor mdadm is
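A quick way to see what is actually dragging the packages together, sketched with stock tools; the excerpt does not say which dependency is responsible:
  rpm -q --whatrequires mysql        # installed packages that require the mysql capability
  repoquery --whatrequires mysql     # from yum-utils; also resolves library-level provides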
2007 Aug 27
3
mdadm --create on Centos5?
Is there some new trick to making raid devices on Centos5? # mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdc1 mdadm: error opening /dev/md3: No such file or directory I thought that worked on earlier versions. Do I have to do something udev related first? -- Les Mikesell lesmikesell at gmail.com
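On CentOS 5 the usual workaround is to let mdadm create the missing node itself; a hedged variant of the command from the excerpt (member names assumed, since the original lists /dev/sdc1 twice):
  mdadm --create /dev/md3 --auto=yes --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1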
2019 Feb 26
2
Problem with mdadm, raid1 and automatically adds any disk to raid
> On 2/24/19 9:01 PM, Jobst Schmalenbach wrote: >> I tried to delete the MDX, I removed the disks by failing them, then >> removing each array md0, md1 and md2. >> I also did >> >> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz >> /dev/sdX)-1024)) count=1024 > > > Clearing the initial sectors doesn't do anything to clear the
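The quoted reply is pointing at the md metadata rather than the partition table; the tool's own eraser is the commonly suggested way to clear it (a sketch, with the member device name assumed):
  mdadm --stop /dev/mdX                 # the member must not be part of a running array
  mdadm --zero-superblock /dev/sdX1     # removes the md superblock wherever the metadata version placed it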
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID. # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda1[2](F) sdb1[1] 2096064 blocks [2/1] [_U] md2 : active raid1 sda3[2](F) sdb3[1] 1462516672 blocks [2/1] [_U] md1 : active raid1 sda2[0] sdb2[1] 524224 blocks [2/2] [UU] unused devices: <none> I have remove the partition: # mdadm --remove
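A hedged outline of handling the failed member shown in that mdstat; the excerpt's own --remove command is cut off, so the exact steps are assumed:
  mdadm /dev/md0 --remove /dev/sda1
  mdadm /dev/md2 --remove /dev/sda3
  # md1 still lists sda2 as active, so it would need --fail and --remove before the disk is pulled
  # after replacing the disk and recreating its partitions:
  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md2 --add /dev/sda3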
2011 Nov 23
2
[PATCH] New API: mdadm-stop for stopping MD devices.
This API is used to stop an MD device. When we want to move a device to another MD array, we should first stop the MD device that contains it. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- daemon/md.c | 16 ++++++++++++++++ generator/generator_actions.ml | 9 +++++++++ regressions/test-mdadm.sh | 14 ++++++++++++++ src/MAX_PROC_NR
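The operation the patch wraps corresponds to the plain command below (array name assumed):
  mdadm --stop /dev/md0    # releases the member devices so they can be added to another array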
2008 Jun 16
2
mdadm on reboot
Hi, I'm in the process of trying mdadm for the first time. I've been trying stuff out of tutorials, etc. At this point I know how to create stripes and mirrors. My stripe is automatically restarting on reboot, but the degraded mirror isn't. -- Drew Einhorn
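A common, hedged explanation for that symptom is a missing ARRAY entry for the newer array; a sketch of recording it, not necessarily the poster's eventual fix:
  mdadm --detail --scan >> /etc/mdadm.conf   # append ARRAY lines for both the stripe and the mirror
  # some distributions also need the initramfs rebuilt so the arrays assemble at boot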
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 18:47, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 15:03, mark wrote: >>> >>>> I've no idea what happened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particular hot-swap bay. >>>>
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10 drive raid6 array. OS is Centos 5.3 (64 Bit) All 10 drives are 2T in size. device sd{a,b,c,d,e,f} are on my motherboard device sd{i,j,k,l} are on a pci express areca card (relevant lspci info below) #lspci 06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller The controller is set to JBOD the drives. All
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2007 Nov 02
1
mdadm syntax
Hi All, I am trying to create an MD device. I am using the command: /sbin/mdadm --create --a /dev/md12 --level=1 --run --raid-devices=2 /dev/sda12 /dev/sdb12 to create the device, and to dynamically create the device file if needed. What I want is the device file to be created as /dev/md12, but with the -a flag it creates it as /dev/md<first unused minor number>. I have tried various
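For comparison, a hedged form of the command in which --auto is given its argument explicitly and the desired node name is the create target, which should yield /dev/md12 as written:
  /sbin/mdadm --create /dev/md12 --auto=yes --level=1 --run --raid-devices=2 /dev/sda12 /dev/sdb12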