Displaying 20 results from an estimated 4000 matches similar to: "more software raid questions"
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
> In the rescue mode, recreate the partition table which was on the sdb
> by copying over what is on sda
>
>
> sfdisk -d /dev/sda | sfdisk /dev/sdb
>
> This will give the kernel enough information to start rebuilding the
> mirror.
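After the partition table is copied, the usual follow-up is to re-add the
new partitions to their arrays and watch the resync (a sketch; the md and
partition numbers are assumptions, check /proc/mdstat for the real ones):

sfdisk -d /dev/sda | sfdisk /dev/sdb      # copy the partition table
mdadm --manage /dev/md0 --add /dev/sdb1   # re-add each member partition
mdadm --manage /dev/md1 --add /dev/sdb2
cat /proc/mdstat                          # watch the rebuild progress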
Once I made sure I retrieved all my data, I followed your suggestion,
and it looks
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi,
I just replaced Slackware64 14.1 running on my office's HP Proliant
Microserver with a fresh installation of CentOS 7.
The server has 4 x 250 GB disks.
Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices.
/boot and swap are all supposed to be assembled in RAID level 1 across
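Assembling that layout by hand would look roughly like this (a sketch; the
md numbers and device names are assumptions, and in practice anaconda
creates the arrays during installation):

mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # /boot, RAID 1
mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2   # swap, RAID 1
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3   # /, RAID 5, no spare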
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present.
This prevents starting arrays in a degraded state.
The second mdadm call (after LVM is scanned) scans the devices not yet used and attempts to run all arrays it finds, even if they are in a degraded state.
Two new tests are added.
This fixes rhbz1527852.
Here is boot-benchmark
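The two-stage assembly described above boils down to something like this (a
sketch of the idea, not the literal patch):

mdadm --assemble --scan --no-degraded   # first pass: start only complete arrays
# ... LVM is scanned here ...
mdadm --assemble --scan --run           # second pass: start the rest, even degraded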
2018 Dec 04
5
Accidentally nuked my system - any suggestions ?
Hi,
My workstation is running CentOS 7 on two disks (sda and sdb) in a
software RAID 1 setup.
It looks like I accidentally nuked it. I wanted to write an installation
ISO file to a USB disk, and instead of typing dd if=install.iso
of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze.
I tried a hard reset, but of course, the boot process would stop short
very early in
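For what it's worth, a habit that prevents this kind of accident is to
verify the target device immediately before writing (a sketch; /dev/sdc is
an assumption, the TRAN column shows which disk sits on the USB bus):

lsblk -o NAME,SIZE,MODEL,TRAN     # confirm which disk is the USB stick
dd if=install.iso of=/dev/sdc bs=4M && sync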
2019 Jul 08
2
Server fails to boot
First some history. This is an Intel MB and processor some 6 years old,
initially running CentOS 6. It has 4 x 1TB sata drives set up in two
mdraid 1 mirrors. It has performed really well in a rural setting with
frequent power cuts, which the UPS deals with: it shuts the server down
automatically after a few minutes and restarts it when power is restored.
The clients needed a Windoze server
2018 Dec 21
2
upgrading 7.5 ==> 7.6
> On Wed, Dec 19, 2018 at 01:50:06PM -0500, Fred Smith wrote:
>> Hi all!
>>
>> There have been a large enough number of people posting here about
>> difficulties when upgrading from 7.5 to 7.6 that I'm being somewhat
>> paranoid about it.
>>
>> I have several machines to upgrade, but so far the only one I've dared
>> to work on (least
2018 Dec 04
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:10, Gordon Messmer wrote:
> The system should boot normally if you disconnect sdb. Have you
> tried that?
Unfortunately that didn't work. The boot process stops here:
[OK] Reached target Basic System.
Now what?
--
Microlinux - Solutions informatiques durables
7, place de l'Église - 30730 Montpezat
Site : https://www.microlinux.fr
Blog :
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
On 05/12/2018 05:37, Nicolas Kovacs wrote:
> On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
>> In the rescue mode, recreate the partition table which was on the sdb
>> by copying over what is on sda
>>
>>
>> sfdisk -d /dev/sda | sfdisk /dev/sdb
>>
>> This will give the kernel enough information to start rebuilding the
>> mirror.
>
2019 Apr 03
4
New post message
Hello!
On my server PC I have CentOS 7 installed.
CentOS Linux release 7.6.1810.
There are four arrays RAID1 (software RAID)
md124 - /boot/efi
md125 - /boot
md126 - /bd
md127 - /
I have configured booting from both drives, and everything works fine when both drives are connected.
But if I disconnect either drive of a RAID1 pair, the system fails to boot: there is a partial boot, and as a result the
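A first diagnostic step here is to check what state the arrays are actually
in (a sketch; md127 is taken from the list above):

cat /proc/mdstat            # [U_] instead of [UU] means a degraded mirror
mdadm --detail /dev/md127   # shows State plus failed and removed members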
2017 Nov 13
1
Shared storage showing 100% used
Hello list,
I recently enabled shared storage on a working cluster with nfs-ganesha
and am just storing my ganesha.conf file there so that all 4 nodes can
access it (baby steps). It was all working great for a couple of weeks
until I was alerted that /run/gluster/shared_storage was full; see
below. There was no warning; it went from fine to critical overnight.
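A quick way to see what is consuming the space (a sketch; the path is taken
from the report above):

df -h /run/gluster/shared_storage      # overall usage of the shared volume
du -sh /run/gluster/shared_storage/*   # per-entry breakdown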
2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 09:24, Michael Volz wrote:
> Hi Niki,
>
> md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk?
[root@nestor:~] # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0 232,9G  0 disk
├─sda1        8:1    0   3,9G  0 part
│ └─md126     9:126  0   3,9G  0 raid1 [SWAP]
├─sda2        8:2
2018 Dec 25
0
upgrading 7.5 ==> 7.6
On Fri, Dec 21, 2018 at 06:13:26AM +0100, Simon Matter via CentOS wrote:
>
> I didn't see any issues with RAID. I think those problems arise only if
> you have old RAID devices created with CentOS releases older than 7. Those
> were probably created with metadata version 0.9, which may be a problem.
> Currently metadata version 1.2 is created, and I didn't see any issues with
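Checking which metadata version an existing array uses is straightforward
(a sketch; /dev/md0 and /dev/sda1 are assumptions):

mdadm --detail /dev/md0 | grep Version     # version of the assembled array
mdadm --examine /dev/sda1 | grep Version   # version stored on a member disk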
2019 Jul 23
2
mdadm issue
Just rebuilt a C6 box last week as C7. Four drives, and sda and sdb for
root, with RAID-1 and LUKS encryption.
Layout:
lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda         8:0    0 931.5G  0 disk
├─sda1      8:1    0   200M  0 part /boot/efi
├─sda2
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki,
md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk?
Regards
Michael
----- Original message -----
From: "Niki Kovacs" <info at microlinux.fr>
To: "CentOS mailing list" <CentOS at centos.org>
Sent: Wednesday, 18 February 2015, 08:09:13
Subject: [CentOS] CentOS 7: software RAID 5
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
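The usual place mdadm information gets cached is the copy of mdadm.conf
embedded in the initramfs, so after editing the file the image has to be
rebuilt (a sketch, assuming a dracut-based system):

dracut -f /boot/initramfs-$(uname -r).img $(uname -r)   # regenerate with the current mdadm.conf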
2018 Dec 04
0
Accidentally nuked my system - any suggestions ?
Nicolas Kovacs wrote:
>
> My workstation is running CentOS 7 on two disks (sda and sdb) in a
> software RAID 1 setup.
>
> It looks like I accidentally nuked it. I wanted to write an installation
> ISO file to a USB disk, and instead of typing dd if=install.iso
> of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze.
>
> I tried a hard reset, but
2019 Apr 04
2
RAID1 boot issue
Right, that's my problem: a drive is unplugged while the system is not
running, and mdadm will not reassemble the array on boot.
Red Hat Bugzilla Bug 1451660 lists Fixed In Version: dracut-033-546.el7.
I have dracut version 033-554.el7, and this bug is not fixed!
> I believe you are hitting this bug:
>
>   https://bugzilla.redhat.com/show_bug.cgi?id=1451660
>
> That is,
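If the fix really is in a newer dracut, the initramfs has to be regenerated
before the new mdraid handling takes effect (a sketch, assuming the running
kernel is the one that fails to boot):

yum update dracut
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)   # rebuild with the updated dracut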
2018 Mar 04
3
selinux weirdness
Every now and then I get an alert like this one. I have no clue what this
"rear" subsystem is, or why mdadm would be trying to write to its log
file.
Can anyone enlighten me?
thanks in advance!
-------------------------
SELinux is preventing /usr/sbin/mdadm from write access on the file /var/log/rear/rear-fcshome.log.lockless.
***** Plugin restorecon (93.9 confidence) suggests
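The restorecon plugin normally goes on to suggest relabeling the flagged
file, roughly like this (a sketch; the path is taken from the alert above):

restorecon -v /var/log/rear/rear-fcshome.log.lockless   # restore the default SELinux context
ausearch -m avc -ts recent                              # review recent denials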
2014 Dec 09
2
DegradedArray message
On Thu, 2014-12-04 at 16:46 -0800, Gordon Messmer wrote:
> On 12/04/2014 05:45 AM, David McGuffey wrote:
> In practice, however, there's a bunch of information you didn't provide,
> so some of those steps are wrong.
>
> I'm not sure what dm-0, dm-2 and dm-3 are, but they're indicated in your
> mdstat. I'm guessing that you made partitions, and then made
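Mapping those dm-N names back to something readable is the first step (a
sketch; both commands simply enumerate existing devices):

dmsetup ls   # device-mapper names (LVM LVs, LUKS volumes) with their dm numbers
lsblk        # the whole stack: partitions -> md arrays -> dm devices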