similar to: CentOS 7: software RAID 5 array with 4 disks and no spares?

Displaying 20 results from an estimated 1000 matches similar to: "CentOS 7: software RAID 5 array with 4 disks and no spares?"

2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 09:24, Michael Volz wrote: > Hi Niki, > > md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? [root at nestor:~] # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 232,9G 0 disk ├─sda1 8:1 0 3,9G 0 part │ └─md126 9:126 0 3,9G 0 raid1 [SWAP] ├─sda2 8:2
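A quick way to spot a short member is to print exact byte sizes; a sketch, assuming the sd[a-d]3 member partitions discussed in this thread:

    # Exact sizes in bytes make a short RAID member stand out immediately
    lsblk -b -o NAME,SIZE,TYPE /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3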
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael ----- Original Message ----- From: "Niki Kovacs" <info at microlinux.fr> To: "CentOS mailing list" <CentOS at centos.org> Sent: Wednesday, 18 February 2015 08:09:13 Subject: [CentOS] CentOS 7: software RAID 5
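mdadm itself also reports how much of each member it uses; a sketch, assuming the md127 device from this thread:

    # "Used Dev Size" shows how much of each member the array actually uses
    mdadm --detail /dev/md127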
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file: more /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382 ARRAY /dev/md2 level=raid1 num-devices=2
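When the on-disk superblocks and /etc/mdadm.conf disagree, the stale copy usually lives in the initramfs; a sketch of checking and regenerating both (verify the scan output before overwriting anything):

    mdadm --examine --scan    # what the superblocks on disk say
    cat /etc/mdadm.conf       # what the config (and its initramfs copy) says
    # if they differ, update the config, then rebuild the initramfs
    dracut -f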
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote: > In rescue mode, recreate the partition table which was on sdb > by copying over what is on sda: > > > sfdisk -d /dev/sda | sfdisk /dev/sdb > > This will give the kernel enough to know it has things to do on > rebuilding parts. Once I made sure I had retrieved all my data, I followed your suggestion, and it looks
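Once the partition table is restored, the usual follow-up is to re-add the members and watch the rebuild; a sketch, assuming the md0/md1 layout from this thread:

    mdadm --manage /dev/md0 --add /dev/sdb1   # re-add the wiped disk's partitions
    mdadm --manage /dev/md1 --add /dev/sdb2
    watch cat /proc/mdstat                    # follow the resync progress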
2010 Oct 19
3
more software raid questions
Hi all! Back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126: Subject: DegradedArray
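A minimal sketch of diagnosing a DegradedArray event and putting a healthy member back (device names are assumptions):

    cat /proc/mdstat                   # degraded mirrors show [U_] instead of [UU]
    mdadm --detail /dev/md0            # identifies the failed/removed member
    mdadm /dev/md0 --re-add /dev/sda1  # re-add it if the disk itself is healthy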
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present, which prevents starting arrays in a degraded state. The second mdadm call (after LVM is scanned) scans the as-yet-unused devices and attempts to run all arrays it finds, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
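The two-phase assembly the patch describes can be approximated from a shell; a sketch, not the actual appliance init code:

    # phase 1: only start arrays whose members are all present
    mdadm --assemble --scan --no-degraded
    # ...LVM scan happens here and may expose more devices...
    # phase 2: start whatever remains, even if degraded
    mdadm --assemble --scan --run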
2018 Dec 21
2
upgrading 7.5 ==> 7.6
> On Wed, Dec 19, 2018 at 01:50:06PM -0500, Fred Smith wrote: >> Hi all! >> >> There have been a large enough number of people posting here about >> difficulties when upgrading from 7.5 to 7.6 that I'm being somewhat >> paranoid about it. >> >> I have several machines to upgrade, but so far the only one I've dared >> to work on (least
2019 Apr 03
4
New post message
Hello! On my server PC I have CentOS 7 installed: CentOS Linux release 7.6.1810. There are four RAID1 arrays (software RAID): md124 - /boot/efi md125 - /boot md126 - /bd md127 - / I have configured booting from both drives, and everything works fine if both drives are connected. But if I disconnect either drive from which the RAID1 is built, the system crashes; there is a partial boot and as a result the
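With a mirrored EFI setup, each disk generally needs its own ESP plus its own NVRAM boot entry; a minimal sketch, where the disk, partition number and loader path are assumptions:

    # register a second boot entry pointing at the ESP on the other disk
    efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (disk 2)" -l '\EFI\centos\shimx64.efi'
    efibootmgr -v    # verify both entries exist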
2018 Dec 04
5
Accidentally nuked my system - any suggestions ?
Hi, My workstation is running CentOS 7 on two disks (sda and sdb) in a software RAID 1 setup. It looks like I accidentally nuked it. I wanted to write an installation ISO file to a USB disk, and instead of typing dd if=install.iso of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze. I tried a hard reset, but of course, the boot process would stop short very early in
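A habit that guards against exactly this mistake is to confirm the target immediately before writing; a sketch, with /dev/sdX standing in for the verified device:

    lsblk -o NAME,SIZE,MODEL,TRAN    # TRAN shows "usb" for the stick
    dd if=install.iso of=/dev/sdX bs=4M status=progress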
2018 Dec 04
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:10, Gordon Messmer wrote: > The system should boot normally if you disconnect sdb. Have you > tried that? Unfortunately that didn't work. The boot process stops here: [OK] Reached target Basic System. Now what? -- Microlinux - Solutions informatiques durables 7, place de l'église - 30730 Montpezat Site : https://www.microlinux.fr Blog :
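One workaround when boot hangs with a RAID1 leg missing is to force the degraded array to run from the dracut emergency shell, assuming that shell is reachable; a sketch:

    mdadm --run /dev/md127          # start the array with the members it has
    mdadm --assemble --scan --run   # or, more generally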
2019 Jul 08
2
Server fails to boot
First some history. This is an Intel MB and processor, some 6 years old, initially running CentOS 6. It has 4 x 1TB SATA drives set up as two mdraid RAID1 mirrors. It has performed really well in a rural setting with frequent power cuts, which the UPS has dealt with: it auto-shuts the server down after a few minutes and then auto-restarts it when power is restored. The clients needed a Windoze server
2013 May 23
11
raid6: rmw writes all the time?
Hi all, we got a new test system here and I just also tested btrfs raid6 on it. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it would probably be much better than either of those two if it wouldn't read all the time during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd -- To unsubscribe from this list: send the line
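One way to confirm the extra reads from userspace is to watch per-device read traffic under a pure write workload; a sketch, with device names assumed:

    # rMB/s should be near zero for full-stripe writes; sustained reads indicate RMW
    iostat -xm sda sdb sdc 1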
2019 Apr 04
2
RAID1 boot issue
Right, that's my problem: a drive is unplugged while the system is not running, and mdadm will not reassemble the array on boot. Red Hat Bugzilla Bug 1451660 says Fixed In Version: dracut-033-546.el7. I have dracut version 033-554.el7 and this bug is not fixed! >I believe you are hitting this bug: > > https://bugzilla.redhat.com/show_bug.cgi?id=1451660 > >That is,
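Checking whether the installed dracut actually carries the fix, and rebuilding the initramfs afterwards, looks roughly like this:

    rpm -q dracut                              # compare against dracut-033-546.el7
    rpm -q --changelog dracut | grep 1451660   # does the changelog mention the bug?
    dracut -f                                  # rebuild the initramfs after any update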
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the latest libvirt and now my RAID array with my VM storage is missing. It seems that the upgrade to mdadm-3.2.2 is the culprit. This is the output from mdadm when scanning that array, # mdadm --detail --scan ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b ARRAY /dev/md126 metadata=imsm
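Pinning the arrays explicitly in /etc/mdadm.conf and rebuilding the initramfs is one way to keep IMSM assembly stable across mdadm updates; a sketch, to be reviewed before committing:

    mdadm --detail --scan >> /etc/mdadm.conf   # record container/array UUIDs
    dracut -f                                  # make the initramfs pick up the config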
2011 Aug 17
1
RAID5 suddenly broken
Hello, I have a RAID5 array on my CentOS 5.6 x86_64 workstation which "suddenly" failed to work (actually after the system could not resume from a suspend). I recently had issues after moving the workstation to another office, where one of the disks got accidentally unplugged. But the RAID was working and it had reconstructed (as far as I can tell) the data. After I replugged the disk,
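A common recovery sequence for an array that refuses to assemble is to inspect the superblocks and then force assembly; a sketch, with member names assumed:

    mdadm --examine /dev/sd[bcd]1    # compare event counts and states per member
    # if the members mostly agree, force assembly from the freshest ones
    mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1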
2016 Nov 05
3
Avago (LSI) SAS-3 controller, poor performance on CentOS 7
I have a handful of new systems where I've seen unexpectedly low disk performance on an Avago SAS controller, when using CentOS 7. It looked like a regression, so I installed CentOS 6 on one of them and reloaded CentOS 7 on the other. Immediately after install, a difference is apparent in the RAID rebuild speed. The CentOS 6 system is initializing its software RAID5 array at somewhere
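Before blaming the controller it is worth ruling out the md resync throttles, which can differ between installs; a sketch:

    # md throttles resync between these bounds (KiB/s per device)
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    cat /proc/mdstat                 # current rebuild speed for comparison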
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
On 05/12/2018 05:37, Nicolas Kovacs wrote: > On 04/12/2018 at 23:50, Stephen John Smoogen wrote: >> In rescue mode, recreate the partition table which was on sdb >> by copying over what is on sda: >> >> >> sfdisk -d /dev/sda | sfdisk /dev/sdb >> >> This will give the kernel enough to know it has things to do on >> rebuilding parts. >
2015 Feb 18
1
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 02/18/2015 03:01 AM, Niki Kovacs wrote: > On 18/02/2015 09:59, Niki Kovacs wrote: >> └─sdd3 8:51 0 76,4G 0 part >> └─md127 9:127 0 229G 0 raid5 / >> >> Any idea what's going on? > > Oops, just saw it. /dev/sdd3 apparently has the wrong size. > > As to why this is so, it's a mystery. > > I'll investigate further into
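If one member partition really is short, one repair path (back up first; names are assumptions) is to pull it, repartition it to match the others, and re-add it:

    mdadm /dev/md127 --fail /dev/sdd3 --remove /dev/sdd3   # drop the short member
    # repartition sdd3 to match the other members (parted/fdisk), then:
    mdadm /dev/md127 --add /dev/sdd3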
2011 Oct 31
2
libguestfs and md devices
We've recently discovered that libguestfs can't handle guests which use md. There are (at least) 2 reasons for this: Firstly, the appliance doesn't include mdadm. Without this, md devices aren't detected during the boot process. Simply adding mdadm to the appliance package list fixes this. Secondly, md devices referenced in fstab as, e.g. /dev/md0, aren't handled
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and includes various fixes for the other two patches. Rich.
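In guestfish the new calls would look roughly like this; a sketch based on the patch description, so option names may differ from the final API:

    # create a RAID1 md device from two scratch disks, then list md devices
    guestfish -N disk -N disk run \
      : md-create md0 "/dev/sda /dev/sdb" level:raid1 \
      : list-md-devices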