
Displaying 20 results from an estimated 10000 matches similar to: "mdraid Q on c6..."

2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT; for several years I've used an MBR partition table. I've installed my system on software RAID1 (mdadm), using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos concerning RAID1 installation, I must put each partition on a different md device. I asked some time ago whether it's more correct to create the
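A minimal sketch of the MBR-era layout described above, assuming two identically partitioned disks; under UEFI/GPT each disk would additionally need an EFI System Partition, which this sketch does not cover:

  # Sketch only: md0=swap, md1=/, md2=/home, partition numbers follow the post
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # swap
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /home
  mkswap /dev/md0
  mkfs.ext4 /dev/md1
  mkfs.ext4 /dev/md2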
2020 Nov 15
5
(C8) root on mdraid
Hello everyone. I'm trying to install CentOS 8 with the root and swap partitions on software RAID. The plan is: create md0 as a RAID level 1 array from two hard drives, /dev/sda and /dev/sdb, using a Linux rescue CD; install CentOS 8 in VirtualBox on my laptop; rsync the CentOS 8 root partition onto /dev/md0p1; chroot into the CentOS 8 root partition; configure /etc/mdadm.conf, grub.cfg, initramfs, install
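A hedged sketch of the chroot stage of that plan; the mount point /mnt/sysimage is only illustrative, and the grub2 steps assume BIOS boot rather than EFI:

  mount /dev/md0p1 /mnt/sysimage
  mount --bind /dev  /mnt/sysimage/dev
  mount --bind /proc /mnt/sysimage/proc
  mount --bind /sys  /mnt/sysimage/sys
  chroot /mnt/sysimage
  # inside the chroot:
  mdadm --detail --scan >> /etc/mdadm.conf        # record the array
  dracut -f --regenerate-all                      # rebuild initramfs with md support
  grub2-mkconfig -o /boot/grub2/grub.cfg
  grub2-install /dev/sda
  grub2-install /dev/sdb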
2010 Oct 19
3
more software raid questions
Hi all! Back in August several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one each for /dev/md0, /dev/md1, /dev/md125 and /dev/md126: Subject: DegradedArray
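One way to triage a DegradedArray report like that, sketched with placeholder names (/dev/md0 and /dev/sdb1 stand in for whichever array and member actually dropped out):

  cat /proc/mdstat                      # which arrays show [U_] ?
  mdadm --detail /dev/md0               # which member is missing or faulty?
  smartctl -a /dev/sdb                  # check the disk before trusting it again
  mdadm /dev/md0 --re-add /dev/sdb1     # or --add if re-add is refused
  watch cat /proc/mdstat                # follow the resync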
2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
The system is CentOS 6, all up to date; it previously had two drives in an MD RAID configuration: md0: sda1/sdb1, 20 GB, OS / partition; md1: sda2/sdb2, 1 TB, data mounted as /home. Installed kmod ZFS via yum, rebooted, zpool works fine. Backed up the /home data twice, then stopped the sd[ab]2 partition with: mdadm --stop /dev/md1; mdadm --zero-superblock /dev/sd[ab]1; Removed /home in /etc/fstab. Used
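Worth noting: the snippet stops md1 but then zeroes the superblocks on sd[ab]1, which it earlier describes as md0's members. A hedged sketch of a safer order, verifying membership before zeroing anything:

  mdadm --detail /dev/md1               # lists the member devices of md1
  mdadm --examine /dev/sda2 /dev/sdb2   # superblocks should name md1, not md0
  mdadm --stop /dev/md1
  mdadm --zero-superblock /dev/sda2 /dev/sdb2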
2011 Mar 21
4
mdraid on top of mdraid
Is it possible, or will there be any problems with, using mdraid on top of mdraid? Specifically, say, mdraid 1/5 on top of mdraid multipath: e.g. four storage machines exporting iSCSI targets via two different physical network switches, then using multipath to create md block devices, then using mdraid on top of these md block devices. The purpose being that the storage array survives a physical network switch
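A sketch of the stacking being asked about; device names and the iSCSI portal address are hypothetical, and it uses the legacy md "multipath" personality mentioned in the question (dm-multipath is the more common choice today):

  iscsiadm -m discovery -t sendtargets -p 192.168.0.10
  iscsiadm -m node --login
  # two paths to the same remote LUN appear as /dev/sdc and /dev/sdd:
  mdadm --create /dev/md10 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd
  # then the redundant layer across the storage machines:
  mdadm --create /dev/md20 --level=1 --raid-devices=2 /dev/md10 /dev/md11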
2006 Mar 14
2
Help. Failed event on md1
Hi all, this morning I received this notification from mdadm:

  This is an automatically generated mail message from mdadm running on server-mail.mydomain.kom
  A Fail event had been detected on md device /dev/md1.
  Faithfully yours, etc.

In /proc/mdstat I see this:

  Personalities : [raid1]
  md1 : active raid1 sdb2[2](F) sda2[0]
        77842880 blocks [2/1] [U_]
  md0 : active raid1 sdb1[1] sda1[0]
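A common recovery sequence for that state (sdb2 marked faulty in md1), assuming the disk really is failing and will be swapped; note that md0 still holds sdb1, so it needs the same treatment before the physical disk is pulled:

  mdadm /dev/md1 --remove /dev/sdb2                    # drop the faulty member
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  # ...replace the disk, recreate the partition table on the new sdb...
  mdadm /dev/md0 --add /dev/sdb1
  mdadm /dev/md1 --add /dev/sdb2                       # resync starts automatically
  cat /proc/mdstat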
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has done this, regarding pitfalls, errors, or whether I am just wrong. CentOS 5.x, software RAID, 250 GB drives: two drives in a mirror, one spare, all the same size. Two devices in the mirror: one boot (about 100 MB), one that fills the rest of the disk and contains LVM partitions. I was thinking of taking out the spare and adding a 500 GB drive. I
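A rough sketch of that drive-swap idea under the stated layout (a mirror plus a spare, LVM on the big md device); all partition names are placeholders:

  mdadm /dev/md1 --remove /dev/sdc2          # the current spare
  mdadm /dev/md1 --add /dev/sdd2             # partition on the new 500 GB disk
  # once both active members sit on 500 GB partitions:
  mdadm --grow /dev/md1 --size=max
  pvresize /dev/md1                          # then grow the LVM PV on top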
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem: our server has a broken RAID.

  # cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sda1[2](F) sdb1[1]
        2096064 blocks [2/1] [_U]
  md2 : active raid1 sda3[2](F) sdb3[1]
        1462516672 blocks [2/1] [_U]
  md1 : active raid1 sda2[0] sdb2[1]
        524224 blocks [2/2] [UU]
  unused devices: <none>

I have removed the partition: # mdadm --remove
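Assuming sda is the disk on its way out (it is marked (F) in md0 and md2 and still active in md1), one hedged sketch of retiring and replacing it:

  mdadm /dev/md0 --remove /dev/sda1
  mdadm /dev/md2 --remove /dev/sda3
  mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
  # replace the disk, copy the partition layout from the good drive, then re-add:
  sfdisk -d /dev/sdb | sfdisk /dev/sda
  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md1 --add /dev/sda2
  mdadm /dev/md2 --add /dev/sda3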
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything. But I cannot. The setup consists of two hard disks carrying three RAID1 (ext3) file systems (/boot, /, swap). The OS is an up-to-date CentOS 5. So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file systems, and try to run fsck -y /dev/md0, fsck -y /dev/md1 and fsck -y /dev/md2. For each try I get an error message:
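Just a guess from the symptoms: in rescue mode, with nothing mounted, the arrays may never have been assembled, so /dev/md0 through /dev/md2 do not exist yet. A sketch of what could be tried first:

  mdadm --assemble --scan      # assemble all arrays found in member superblocks
  cat /proc/mdstat             # verify md0/md1/md2 exist and are active
  fsck -y /dev/md0
  fsck -y /dev/md1             # the swap array does not need fsck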
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
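When two members of a RAID 5 set drop out but the disks themselves come back, the usual (risky) recovery is a forced assemble from the freshest superblocks; a sketch with placeholder device names:

  mdadm --examine /dev/sd[bcde]1                     # compare event counts of the members
  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/sd[bcde]1
  # if it assembles degraded, back the data up before re-adding the last disk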
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors. Good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there is an easy way to fix this and avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks. Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
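If /dev/sdb really is the failing disk reported by smartd above, one hedged way to buy time is to drop it from the mirror now and replace it during the planned upgrade; array and partition names are placeholders:

  smartctl -H /dev/sdb                                 # confirm the SMART health status
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  # repeat for any other md arrays using sdb partitions, then swap the drive
  # later and --add the new partitions back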
2022 Apr 24
3
Installing mdadm and C7 on new computer
On 04/23/2022 09:19 PM, H wrote:
> On 04/19/2022 09:57 AM, Roberto Ragusa wrote:
>> On 4/18/22 1:27 PM, H wrote:
>>> I have a new computer with 2 x 2 TB SSDs where I wanted to install C7, use mdadm for a RAID1 configuration, and encrypt the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/ which I adapted slightly with
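Not the tuxfixer recipe itself, just a generic sketch of the combination being described (an md mirror with LUKS on top for /home); all device and mapper names are illustrative:

  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  cryptsetup luksFormat /dev/md2
  cryptsetup luksOpen /dev/md2 home_crypt
  mkfs.xfs /dev/mapper/home_crypt
  # /etc/crypttab and /etc/fstab then reference /dev/mapper/home_crypt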
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is running CentOS 5.9 w/Xen. I have two 160 GB 2.5" laptop SATA drives in two slots of a 4-slot hot-swap bay, configured like this:

  Disk /dev/sda: 160.0 GB, 160041885696 bytes
  255 heads, 63 sectors/track, 19457 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End
2011 Nov 23
8
[PATCH 0/8] Add MD inspection support to libguestfs
This series fixes inspection in the case that fstab contains references to md devices. I've made a few changes since the previous posting, which I've summarised below.

[PATCH 1/8] build: Create an MD variant of the dummy Fedora image
I've double checked that no timestamp is required in the Makefile. The script will not run a second time to build fedora-md2.img.

[PATCH 2/8] build:
2010 Nov 14
3
RAID Resynch...??
So, still coming up to speed with mdadm, I noticed this morning one of my servers acting sluggish... so when I looked at the mdadm RAID device I saw this:

  mdadm --detail /dev/md0
  /dev/md0:
          Version : 0.90
    Creation Time : Mon Sep 27 22:47:44 2010
       Raid Level : raid10
       Array Size : 976759808 (931.51 GiB 1000.20 GB)
    Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
             Raid
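If the sluggishness is a resync or check in progress, it will show up in /proc/mdstat, and the rebuild bandwidth can be capped while the server is busy; the values below are only examples:

  cat /proc/mdstat
  cat /sys/block/md0/md/sync_action                  # idle / resync / recover / check
  echo 10000 > /proc/sys/dev/raid/speed_limit_max    # per-device ceiling in KB/s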
2020 Nov 16
3
(C8) root on mdraid
Sun, 15 Nov 2020 14:16:48 -0800, Gordon Messmer <gordon.messmer at gmail.com>:

> On 11/15/20 3:32 AM, Łukasz Posadowski wrote:
> > Can anyone suggest what else I forgot to do?
>
> Use metadata version 1.2 instead of 0.9.
>
> You need the filesystem to not be visible until after the RAID is assembled, and the easiest way to do that is to put the
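Following that advice: version 1.2 metadata sits near the start of each member, so the bare member device no longer looks like a filesystem before the array is assembled. A minimal sketch, reusing the /dev/sda and /dev/sdb names from the original post:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --metadata=1.2 /dev/sda /dev/sdb
  mdadm --examine /dev/sda | grep -i version    # should report 1.2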
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, this is my current config:

  /dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
  /dev/md1 -> 36 GB  -> sda2 + sdd2 -> forms VolGroup00 with md2
  /dev/md2 -> 18 GB  -> sdb1 + sde1 -> forms VolGroup00 with md1

  sda, sdd -> 36 GB 10k SCSI HDDs
  sdb, sde -> 18 GB 10k SCSI HDDs

I have added two 36 GB 10k SCSI drives; they are detected as sdc and sdf. What should I do if I
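One plausible continuation of that question: mirror the two new 36 GB disks and add the resulting md device to VolGroup00 (the volume group named above); the partition numbers are illustrative:

  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
  pvcreate /dev/md3
  vgextend VolGroup00 /dev/md3
  vgdisplay VolGroup00                  # free PE should grow by roughly 36 GB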
2014 Dec 09
2
DegradedArray message
On Thu, 2014-12-04 at 16:46 -0800, Gordon Messmer wrote:
> On 12/04/2014 05:45 AM, David McGuffey wrote:
> In practice, however, there's a bunch of information you didn't provide, so some of those steps are wrong.
>
> I'm not sure what dm-0, dm-2 and dm-3 are, but they're indicated in your mdstat. I'm guessing that you made partitions, and then made
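For anyone in the same spot, a quick, generic way to find out what dm-0, dm-2 and dm-3 actually map to (LVM logical volumes, LUKS mappings, multipath maps, ...); nothing here is specific to the original poster's system:

  lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
  dmsetup ls --tree
  ls -l /dev/mapper/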
2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of RAM. It has two Western Digital 1.5 TB SATA2 drives in RAID1.

  [root at server ~]# df -h
  Filesystem            Size  Used Avail Use% Mounted on
  /dev/md2              1.4T  1.4G  1.3T   1% /
  /dev/md0               99M   19M   76M  20% /boot
  tmpfs                 4.0G     0  4.0G   0% /dev/shm
  [root at server ~]#

It's barebones
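On a freshly installed RAID1 pair, sluggish disk I/O is often just the initial resync still running; a couple of quick, generic checks (nothing here is specific to this machine):

  cat /proc/mdstat               # an ongoing resync shows a progress indicator
  hdparm -t /dev/md2             # rough sequential read test on the array
  iostat -x 5                    # per-disk utilisation (from the sysstat package)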
2006 Apr 12
2
Building software RAID with mdadm: adding a second disk
I'm running CentOS 4.3 on an Intel system with one SATA disk (/dev/sda). The output of fdisk -l:

  Disk /dev/sda: 81.9 GB, 81964302336 bytes
  255 heads, 63 sectors/track, 9964 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End      Blocks   Id  System
  /dev/sda1   *           1          19      152586   83  Linux
  /dev/sda2              20        2569
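The classic single-disk-to-RAID1 conversion suggested by the subject, very much as a sketch: create degraded mirrors on the assumed new disk (/dev/sdb here), migrate the data, then add the original disk back in.

  sfdisk -d /dev/sda | sfdisk /dev/sdb                               # clone the partition layout
  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
  # mkfs the md devices, copy / and /boot onto them, fix fstab and the
  # bootloader, reboot onto the arrays, then:
  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md1 --add /dev/sda2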