search for: mdraids

Displaying 20 results from an estimated 174 matches for "mdraids".

2011 Mar 21
4
mdraid on top of mdraid
Is it possible, or will there be any problems, with using mdraid on top of mdraid? Specifically, say, mdraid 1/5 on top of mdraid multipath. E.g., 4 storage machines export iSCSI targets via two different physical network switches; then use multipath to create md block devices, then use mdraid on those md block devices. The purpose being that the storage array survives a physical network switch
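A layout like the one described can be sketched with mdadm on top of dm-multipath devices. Everything below is an assumption for illustration: the portal address, the mpatha–mpathd device names, and the four-member RAID5 all depend on the local setup.

```shell
# Log in to the iSCSI targets over both switch paths (portal IP is an example)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node --login

# dm-multipath collapses the duplicate paths into one device per target;
# verify both paths show up for each map
multipath -ll

# Build a RAID5 array across the four multipath devices
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
```

With this stacking, losing one switch degrades paths but not the array; losing one whole storage machine degrades the RAID5 but keeps it running.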
2013 Oct 09
1
mdraid strange surprises...
Hey, I installed 2 new data servers with a big (12TB) RAID6 mdraid. I formatted the whole arrays with bad-block checks. One server is moderately used (nfs on one md), while the other is not. One week later, after the raid-check from cron, I get on both servers a few block_mismatch... 1976162368 on the used one and a tiny bit less on the other...? That seems a tiny little bit high... I do the
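The mismatch counter that raid-check reports is exposed through sysfs; a minimal sketch, assuming the array is /dev/md0:

```shell
# The scrub kicked off by raid-check leaves its result here
cat /sys/block/md0/md/mismatch_cnt

# Re-run a read-only consistency check manually
echo check > /sys/block/md0/md/sync_action

# Or rewrite mismatched stripes from the redundant data
echo repair > /sys/block/md0/md/sync_action

# Watch progress
cat /proc/mdstat
```

Note that mismatch_cnt is reported in 512-byte sectors, so a large number can correspond to far fewer actual stripes.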
2011 Mar 29
4
VMware vSphere Hypervisor (free ESXi) and mdraid
Can I combine VMware ESXi (free version) virtualization and CentOS mdraid level 1? Any pointers on how to do it? I have never used VMware before. - Jussi -- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi at greenspot.fi * http://www.greenspot.fi
2015 Aug 17
1
fsck mdraid root partition
There are some errors on my root filesystem, so I need to fsck it. In order to do this while the filesystem is unmounted, I'm booting from the install disk. However, since the filesystem is on an mdraid device, I'm not sure of the right way to get it assembled so I can check it. If I do, mdadm --examine --scan, then I get this: ARRAY /dev/md/2 metadata=... (and others, but I'm
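From the install disk, the usual sequence is roughly the following; /dev/md2 is taken from the excerpt and the exact device name may differ on a given system:

```shell
# Scan component superblocks and print the ARRAY lines
mdadm --examine --scan

# Assemble every array mdadm can find, without needing an mdadm.conf
mdadm --assemble --scan

# Confirm the arrays came up
cat /proc/mdstat

# Check the still-unmounted root filesystem
fsck -y /dev/md2
```

If `--assemble --scan` refuses, the ARRAY lines from `--examine --scan` can be appended to /etc/mdadm.conf in the rescue environment and the assemble retried.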
2011 Aug 15
1
SAS storage arrays, C6, and SES lights
So I'm curious how SAS JBOD arrays, Linux mdraid as implemented in CentOS 6, and SES (SCSI/SAS Enclosure Services) backplane controllers 'get along', and how much configuration is needed to get the warning lights to work properly. Scenario: whitebox server with a SAS backplane or two, daisy-chained on a SAS HBA (like an LSI Logic 2008), and disks organized as several raid5/6
2011 Sep 20
0
Kickstart mdraid on two disks, from usb key detected as sda instead of sdc...
Hi, I am trying to adapt my kickstart usb key to optionally auto-setup mdraid on two disks...? But I have one server that keeps attaching the usb key to sda instead of sdc... My kickstart creates the raid devices on sdb and sdc partitions; but then I expect it not to work once the key is unplugged and the disks fall back to sda and sdb... Can I just modify mdadm.conf at the end, just
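One way to make the kickstart independent of where the USB key lands is to restrict the installer to the two data disks. This fragment is a sketch; the sdb/sdc names and mount point are assumptions taken from the excerpt:

```shell
# Kickstart fragment: ignore everything (including the USB key) except
# the two RAID members, so it no longer matters whether the key is sda or sdc
ignoredisk --only-use=sdb,sdc

# One RAID partition per disk, then a RAID1 across them
part raid.01 --size=1 --grow --ondisk=sdb
part raid.02 --size=1 --grow --ondisk=sdc
raid / --device=md0 --level=RAID1 raid.01 raid.02
```

After the first boot without the key, the kernel renames the disks to sda/sdb, but mdraid assembles by superblock UUID, not device name, so the array itself is unaffected.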
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used an MBR partition table. I've installed my system on software raid1 (mdadm) using md0 (sda1,sdb1) for swap, md1 (sda2,sdb2) for /, md2 (sda3,sdb3) for /home. From several how-tos concerning raid1 installation, I must put each partition on a different md device. I asked some time ago if it's more correct to create the
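One possible GPT layout for this kind of machine can be sketched as follows. The sizes, disk names, and the choice to keep a plain (unmirrored) EFI System Partition are all assumptions for illustration:

```shell
# GPT on the first disk: ESP, then one partition per future md device
sgdisk -n1:0:+200M -t1:ef00 \
       -n2:0:+4G   -t2:fd00 \
       -n3:0:+50G  -t3:fd00 \
       -n4:0:0     -t4:fd00 /dev/sda

# Replicate the table onto the second disk and give it fresh GUIDs
sgdisk -R=/dev/sdb /dev/sda
sgdisk -G /dev/sdb

# Mirror each data partition, matching the md0/md1/md2 scheme above
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4  # /home
```

The ESP is the awkward part under UEFI: firmware reads it directly, so if it is mirrored at all, metadata 1.0 (superblock at the end) is the usual workaround so the firmware still sees a plain FAT filesystem.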
2023 Jan 12
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 01:33 PM, H wrote: > On 01/11/2023 02:09 AM, Simon Matter wrote: >> What I usually do is this: "cut" the large disk into several pieces of >> equal size and create individual RAID1 arrays. Then add them as LVM PVs to >> one large VG. The advantage is that with one error on one disk, you won't >> lose redundancy on the whole RAID mirror but only on
2015 Mar 01
0
mdraid vs hardware raid, was: Looking for a life-save LVM Guru
On Sat, Feb 28, 2015 at 5:14 PM, Chris Murphy <lists at colorremedies.com> wrote: > "Drives, and hardware RAID cards are subject to firmware bugs, just as > we have software bugs in the kernel." makes no assessment of how > common such bugs are relative to each other. I don't want to underestimate the value of good hardware RAID with BBWC implementations when it comes
2020 Jul 01
1
Not getting bootloader installed with CentOS 8 + mdraid
I am trying to use a kickstart to install CentOS 8.2 on a server with a pair of drives with Linux software RAID 1. The install completes, but the resulting system will not boot - I get "Booting from Hard drive C:" from the BIOS (Dell in legacy BIOS mode, not UEFI) and it stops. If I then start the installer in rescue mode and run grub2-install on the two drives, it boots okay. If I
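The manual fix described here, done from the installer's rescue mode, looks roughly like this on a legacy-BIOS system; the disk names are assumptions:

```shell
# Rescue mode mounts the installed system under /mnt/sysimage
chroot /mnt/sysimage

# Install the boot loader into the MBR of both RAID members,
# so the machine still boots if either disk dies
grub2-install /dev/sda
grub2-install /dev/sdb
```

To avoid the rescue step entirely, the same two grub2-install commands can be run from the kickstart's %post section.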
2020 Nov 15
0
(C8) root on mdraid
On 11/15/20 3:32 AM, Łukasz Posadowski wrote: > Can anyone suggest what else I forgot to do? Use metadata version 1.2 instead of 0.9. You need the filesystem to be invisible until after the RAID is assembled, and the easiest way to do that is to put the metadata at the beginning of the drive and the partition table inside the RAID volume. With metadata version 0.9, the partition
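The metadata version is chosen at creation time; a minimal sketch, with device names as examples:

```shell
# Metadata 1.2 sits 4K from the start of each member, so the raw member
# does not look like a bare filesystem before the array is assembled
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 \
    /dev/sda /dev/sdb

# Inspect which metadata version an existing array uses
mdadm --detail /dev/md0
```

Metadata 0.9 lives at the end of the member, which is why tools scanning the start of the device can mistake an unassembled member for a plain filesystem.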
2020 Nov 16
0
(C8) root on mdraid
In article <20201115123245.db62b8248e1f248afe02844a at lukaszposadowski.pl>, Lukasz Posadowski <mail at lukaszposadowski.pl> wrote: > > Hello everyone. > > I'm trying to install CentOS 8 with root and swap partitions on > software raid. The plan is: > - create md0 raid level 1 with 2 hard drives: /dev/sda and /dev/sdb, > using Linux Rescue CD, > - install
2020 Nov 16
0
(C8) root on mdraid
On 11/15/20 10:40 PM, Łukasz Posadowski wrote: > Sun, 15 Nov 2020 14:16:48 -0800 Gordon Messmer <gordon.messmer at gmail.com>: > > >> Use metadata version 1.2 instead of 0.9. >> > Thanks, I'll try that. I'm used to metadata 0.9, because GRUB has > (had?) some issues with the newer ones. If that doesn't work, and you need to use metadata 0.9, then
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything. But I cannot. The setup consists of 2 hds, carrying 3 raid1 (ext3) file systems (boot, /, swap). OS is up-to-date CentOS 5. So I boot from CentOS 5.3 dvd in rescue mode, do not mount the file systems, and try to run: fsck -y /dev/md0, fsck -y /dev/md1, fsck -y /dev/md2. For each try I get an error message:
2011 Apr 26
0
mdraid woes (missing superblock?)
I have a raid1 array which is somehow faulty. There is 1.5 TB of stuff, and I would not want to lose it (though I have a full backup). The array cannot be mounted on startup (error message was "missing superblock"). I had to boot from DVD with linux rescue and remove the array from fstab. Here is some info - I am a little dumbfounded. [root at a134-224 log]# cat /proc/mdstat (...) md5 :
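A cautious first step in a case like this is to inspect the members directly before touching anything; /dev/md5 is from the excerpt, the partition names are examples:

```shell
# Look for md superblocks on the individual members
mdadm --examine /dev/sda5 /dev/sdb5

# If one member's superblock is intact, try assembling degraded
# from that member alone
mdadm --assemble --run /dev/md5 /dev/sda5

# Only then check the filesystem, read-only first
fsck -n /dev/md5
```

On a RAID1, each member also holds a full copy of the data, so if assembly fails entirely, a read-only loop mount of a single member (with an offset matching the metadata version) is sometimes enough to recover files.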
2020 Nov 16
3
(C8) root on mdraid
Sun, 15 Nov 2020 14:16:48 -0800 Gordon Messmer <gordon.messmer at gmail.com>: > On 11/15/20 3:32 AM, Łukasz Posadowski wrote: > > Can anyone suggest what else I forgot to do? > > > Use metadata version 1.2 instead of 0.9. > > You need the filesystem to be invisible until after the RAID is > assembled, and the easiest way to do that is to put the
2016 Oct 12
5
Backup Suggestion on C7
Hi list, I'm building a backup server for 3 hosts (1 workstation, 2 servers). I will use bacula to perform backups. The backup is performed on disks (2 x 3TB on mdraid mirror), and for each host I've created a logical volume of a related size. These 3 hosts have different data sizes with different disk change rates. Each host must have a limited-size resource and a reserved space. If a
2017 Feb 15
3
RAID questions
Hello, Just a couple questions regarding RAID. Here's the situation. I bought a 4TB drive before I upgraded from 6.8 to 7.3. I'm not too far into this that I can't start over. I wanted disk space to backup 3 other machines. I way overestimated what I needed for full, incremental and image backups with UrBackup. I've used less than 1TB so far. I would like to add an additional drive
2017 Jun 30
2
mdraid doesn't allow creation: device or resource busy
Dear fellow CentOS users, I have never experienced this problem with hard disk management before and cannot explain it to myself on any rational basis. The setup: I have a workstation for testing, running latest CentOS 7.3 AMD64. I am evaluating oVirt and a storage-ha as part of my bachelor's thesis. I have already been running a RAID1 (mdraid, lvm2) for the system and some oVirt 4.1 testing.
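Typical first checks for a "device or resource busy" during array creation, with example device names; the usual culprits are a stale md array auto-assembled at boot or a device-mapper/LVM mapping still holding the disk:

```shell
# Who is holding the device?
cat /proc/mdstat          # stale arrays often appear as md127
lsblk                     # shows any dm/LVM layers on top of the disk
dmsetup table             # lists active device-mapper mappings

# Release the device before recreating the array
mdadm --stop /dev/md127   # stop a leftover auto-assembled array
vgchange -an              # deactivate LVM volume groups if they hold it

# Destructive: wipe old md/LVM/filesystem signatures from the member
wipefs -a /dev/sdc1
```

The wipefs step destroys the old signatures, so it belongs only at the very end, once it is certain nothing on the device is still needed.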
2016 Nov 23
1
New laptop recomendation
On 11/23/2016 2:42 PM, Gordon Messmer wrote: > > Many modern Intel systems come configured for an Intel "RAID" mode. > While configured for that mode, the SATA controller changes its PCI ID > so that the standard Windows drivers don't bind to it, allowing the > Intel RAID drivers to bind to it instead. There are no Linux drivers > that bind to the