
Displaying 20 results from an estimated 200 matches similar to: "New post message"

2019 Apr 04
2
RAID1 boot issue
Right, that's my problem: a drive is unplugged while the system is not running, and mdadm will not reassemble the array on boot. Red Hat Bugzilla Bug 1451660 says Fixed In Version: dracut-033-546.el7. I have dracut version 033-554.el7 and this bug is not fixed! >I believe you are hitting this bug: > > https://bugzilla.redhat.com/show_bug.cgi?id=1451660 > >That is,
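A hedged aside, not from the thread itself: when dracut drops to the emergency shell because a degraded array was not assembled, mdadm can usually start it anyway; the md device name below is an assumption.

    cat /proc/mdstat          # check whether the array shows up as inactive
    mdadm --run /dev/md127    # start it even though a member is missing
    # or assemble from scratch, allowing degraded arrays:
    mdadm --assemble --scan --run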
2018 Dec 25
0
upgrading 7.5 ==> 7.6
On Fri, Dec 21, 2018 at 06:13:26AM +0100, Simon Matter via CentOS wrote: > > I didn't see any issues with RAID. I think those problems arise only if > you have old RAID devices created with older CentOS releases than 7. Those > were probably created with 0.9 metadata version which may be a problem. > Currently 1.2 metadata version is created and I didn't see any issues with
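As a hedged aside, the metadata version of an existing array can be checked directly; the array and member names below are examples.

    mdadm --detail /dev/md0 | grep -i version     # shows e.g. "Version : 1.2"
    mdadm --examine /dev/sda1 | grep -i version   # read from a member disk instead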
2017 Nov 13
1
Shared storage showing 100% used
Hello list, I recently enabled shared storage on a working cluster with nfs-ganesha and am just storing my ganesha.conf file there so that all 4 nodes can access it (baby steps). It was all working great for a couple of weeks until I was alerted that /run/gluster/shared_storage was full; see below. There was no warning; it went from fine to critical overnight.
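A minimal investigation sketch, assuming the mount point named in the post:

    df -h /run/gluster/shared_storage       # confirm the reported usage
    du -sh /run/gluster/shared_storage/*    # see what is actually consuming the space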
2018 Dec 21
2
upgrading 7.5 ==> 7.6
> On Wed, Dec 19, 2018 at 01:50:06PM -0500, Fred Smith wrote: >> Hi all! >> >> There have been a large enough number of people posting here about >> difficulties when upgrading from 7.5 to 7.6 that I'm being somewhat >> paranoid about it. >> >> I have several machines to upgrade, but so far the only one I've dared >> to work on (least
2010 Oct 19
3
more software raid questions
Hi all! Back in Aug, several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one each for /dev/md0, /dev/md1, /dev/md125 and /dev/md126: Subject: DegradedArray
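For context, a hedged sketch of the usual first checks after a DegradedArray mail; the device names are assumptions.

    cat /proc/mdstat                     # degraded arrays show e.g. [U_]
    mdadm --detail /dev/md0              # identify the missing or failed member
    mdadm /dev/md0 --re-add /dev/sdb1    # re-add it if the disk itself is healthy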
2019 Jul 08
2
Server fails to boot
First some history. This is an Intel MB and processor, some 6 years old, initially running CentOS 6. It has 4 x 1TB SATA drives set up as two mdraid 1 mirrors. It has performed really well in a rural setting with frequent power cuts, which the UPS has dealt with: it auto-shuts-down the server after a few minutes and then auto-restarts when power is restored. The clients needed a Windoze server
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote: > In the rescue mode, recreate the partition table which was on the sdb > by copying over what is on sda > > > sfdisk -d /dev/sda | sfdisk /dev/sdb > > This will give the kernel enough to know it has things to do on > rebuilding parts. Once I made sure I retrieved all my data, I followed your suggestion, and it looks
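To round out the picture, a hedged sketch of the rebuild step that typically follows the sfdisk copy; the partition-to-array mapping below is an assumption.

    sfdisk -d /dev/sda | sfdisk /dev/sdb    # clone the partition table, as above
    mdadm /dev/md0 --add /dev/sdb1          # re-add each new partition to its array
    mdadm /dev/md1 --add /dev/sdb2
    cat /proc/mdstat                        # watch the resync progress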
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
On 05/12/2018 05:37, Nicolas Kovacs wrote: > On 04/12/2018 at 23:50, Stephen John Smoogen wrote: >> In the rescue mode, recreate the partition table which was on the sdb >> by copying over what is on sda >> >> >> sfdisk -d /dev/sda | sfdisk /dev/sdb >> >> This will give the kernel enough to know it has things to do on >> rebuilding parts. >
2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 09:24, Michael Volz wrote: > Hi Niki, > > md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? [root at nestor:~] # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 232,9G 0 disk ├─sda1 8:1 0 3,9G 0 part │ └─md126 9:126 0 3,9G 0 raid1 [SWAP] ├─sda2 8:2
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this: * 200 MB /dev/sdX1 for /boot * 4 GB /dev/sdX2 for swap * 248 GB /dev/sdX3 for / There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
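A hedged sketch of what this layout looks like when created by hand (the installer does the equivalent); the device names follow the post, everything else is illustrative.

    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # /boot, RAID 1
    mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2   # swap, RAID 1
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3   # /, RAID 5, no spares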
2019 Jul 23
2
mdadm issue
Just rebuilt a C6 box last week as C7. Four drives, and sda and sdb for root, with RAID-1 and luks encryption. Layout: lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931.5G 0 disk ├─sda1 8:1 0 200M 0 part /boot/efi ├─sda2
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present. This prevents starting arrays in a degraded state. The second mdadm call (after LVM is scanned) will scan the as-yet-unused devices and attempt to run all found arrays, even if they are in a degraded state. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
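A hedged illustration of the two-stage assembly the patch describes; the flags are documented mdadm behavior, but the exact appliance init code may differ.

    # stage 1: assemble only arrays with all expected members present
    mdadm -v --assemble --scan --no-degraded
    # ... LVM scan happens here ...
    # stage 2: scan the remaining devices and run arrays even if degraded
    mdadm -v --assemble --scan --run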
2018 Dec 04
5
Accidentally nuked my system - any suggestions ?
Hi, My workstation is running CentOS 7 on two disks (sda and sdb) in a software RAID 1 setup. It looks like I accidentally nuked it. I wanted to write an installation ISO file to a USB disk, and instead of typing dd if=install.iso of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze. I tried a hard reset, but of course, the boot process would stop short very early in
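A hedged precaution sketch, not from the thread: verifying the target device before running dd avoids exactly this mistake; /dev/sdc is an example name.

    lsblk -o NAME,SIZE,MODEL,TRAN        # the USB stick shows TRAN=usb
    dd if=install.iso of=/dev/sdc bs=4M  # only after confirming sdc is the stick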
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael ----- Original Message ----- From: "Niki Kovacs" <info at microlinux.fr> To: "CentOS mailing list" <CentOS at centos.org> Sent: Wednesday, 18 February 2015 08:09:13 Subject: [CentOS] CentOS 7: software RAID 5
2018 Dec 04
0
Accidentally nuked my system - any suggestions ?
Nicolas Kovacs wrote: > > My workstation is running CentOS 7 on two disks (sda and sdb) in a > software RAID 1 setup. > > It looks like I accidentally nuked it. I wanted to write an installation > ISO file to a USB disk, and instead of typing dd if=install.iso > of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze. > > I tried a hard reset, but
2018 Dec 04
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:10, Gordon Messmer wrote: > The system should boot normally if you disconnect sdb. Have you > tried that? Unfortunately that didn't work. The boot process stops here: [OK] Reached target Basic System. Now what? -- Microlinux - Solutions informatiques durables 7, place de l'Église - 30730 Montpezat Site : https://www.microlinux.fr Blog :
2019 Oct 28
1
NFS shutdown issue
Hi all, I have an odd interaction on a CentOS 7 file server. The basic setup is a minimal 7.x install. I have 4 internal drives (/dev/sd[a-d]) configured in a RAID5 and mounted locally on /data. This is exported via NFS to ~12 workstations which use the exported file systems for /home. I have an external drive connected via USB (/dev/sde) and mounted on /rsnapshot. I use rsnapshot to back up
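For reference, a hedged sketch of the kind of export such a setup uses; the path is from the post, the subnet and options are assumptions.

    # /etc/exports -- export /data to the workstation subnet
    /data   192.168.1.0/24(rw,sync)
    # apply the change without restarting NFS:
    exportfs -ra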
2020 Oct 22
3
ThinkStation with BIOS RAID and disk error messages in gparted
My ThinkStation runs CentOS 7 which I installed on a BIOS RAID 0 setup with two identical 256 GB SSDs after removing Windows. It runs fine but I just discovered in gparted something that does not seem right: - Launching gparted, it complains "invalid argument during seek for read on /dev/md126", and when I click on Ignore I get another error: "The backup GPT table is corrupt, but the
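A hedged note: on firmware (BIOS) RAID, partitioning tools should be pointed at the assembled /dev/md126 device, not at the member SSDs. If the backup GPT really is stale, gdisk can rewrite it; this is only a sketch, and a full backup first is prudent.

    gdisk /dev/md126
    # inside gdisk: 'v' verifies the disk, 'x' enters expert mode,
    # 'e' relocates the backup data structures to the end of the disk, 'w' writes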
2020 Oct 23
0
ThinkStation with BIOS RAID and disk error messages in gparted
> My ThinkStation runs CentOS 7 which I installed on a BIOS RAID 0 setup > with two identical 256 GB SSDs after removing Windows. It runs fine but I > just discovered in gparted something that does not seem right: > > - Launching gparted, it complains "invalid argument during seek for read on > /dev/md126", and when I click on Ignore I get another error: "The backup
2016 Nov 05
3
Avago (LSI) SAS-3 controller, poor performance on CentOS 7
I have a handful of new systems where I've seen unexpectedly low disk performance on an Avago SAS controller, when using CentOS 7. It looked like a regression, so I installed CentOS 6 on one of them and reloaded CentOS 7 on the other. Immediately after install, a difference is apparent in the RAID rebuild speed. The CentOS 6 system is initializing its software RAID5 array at somewhere
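A hedged sketch for observing and comparing the rebuild speed on both systems; these are the standard md tunables, and the values are examples.

    cat /proc/mdstat                            # shows the current resync speed
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    sysctl -w dev.raid.speed_limit_min=50000    # KB/s; raising the floor can help a slow rebuild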