Displaying 20 results from an estimated 3000 matches similar to: "mdadm: hot remove failed for /dev/sdg: Device or resource busy"
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual-port 10 Gb NIC
The drives are configured as one large
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the
latest libvirt and now my RAID array with my VM storage is missing. It
seems that the upgrade to mdadm-3.2.2 is the culprit.
This is the output from mdadm when scanning that array:
# mdadm --detail --scan
ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
ARRAY /dev/md126 metadata=imsm
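For anyone hitting this after an mdadm upgrade, a minimal sketch of how one might inspect and re-assemble the IMSM container by hand (the md and disk names below are illustrative, not taken from the post):
mdadm --stop /dev/md126                    # stop the half-assembled array, if any
mdadm --assemble --scan --verbose          # let mdadm re-discover the imsm container
mdadm --detail /dev/md126                  # confirm member disks and array state
mdadm --examine /dev/sda /dev/sdb          # check the imsm metadata on the raw disks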
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using Centos 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
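In this situation the usual last resort is a forced assembly, which ignores the stale event counters so the array starts with the members it can find. A minimal sketch, assuming the members are sdb1 through sde1 (hypothetical names):
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat                           # check that the array came up
fsck -n /dev/md0                           # read-only check before trusting the data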
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi,
I just replaced Slackware64 14.1 running on my office's HP Proliant
Microserver with a fresh installation of CentOS 7.
The server has 4 x 250 GB disks.
Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices.
/boot and swap are all supposed to be assembled in RAID level 1 across
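For reference, a sketch of how the root array described above could be created with no spares (the exact commands are an assumption, not quoted from the thread):
mdadm --create /dev/md2 --level=5 --raid-devices=4 --spare-devices=0 /dev/sd[abcd]3
mdadm --detail /dev/md2 | grep -iE 'raid devices|spare'   # should report 4 members, 0 spares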
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root@r1k1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root@r1k1 ~]#
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
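If eyeballing the iostat output is tedious, something like this ad-hoc sketch flags any sd* device whose await is well above the ~10 ms you would expect (the column position assumes the EL7 sysstat layout, where await is field 10):
iostat -xd 1 | awk '$1 ~ /^sd/ && $10+0 > 20 {print $1, "await =", $10}'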
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped to ~2000K/Sec.
I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The raid check is now running between 100000K/Sec and 200000K/Sec, and has been for several
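For comparison, these are the knobs involved here (the values are examples, not recommendations): the per-drive write cache via hdparm, and the md resync/check throttles via sysctl:
hdparm -W /dev/sda                          # query the current write-cache setting
hdparm -W1 /dev/sda                         # enable write-back cache on one drive
sysctl dev.raid.speed_limit_min             # floor for check/resync throughput (KB/s)
sysctl -w dev.raid.speed_limit_max=200000   # cap; raise or lower to trade off with live I/O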
2015 Mar 17
3
unable to recover software raid1 install
Hello All,
on a CentOS 5 system installed with software RAID I'm getting:
raid1: raid set md127 active with 2 out of 2 mirrors
md:.... autorun DONE
md: Autodetecting RAID arrays
md: autorun.....
md : autorun DONE
trying to resume from /dev/md1
creating root device
mounting root device
mounting root filesystem
ext3-fs : unable to read superblock
mount :
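When the root filesystem on an md device fails to mount like this, a common first pass (a sketch; device names assumed from the messages above) is to confirm the array really assembled with the right members and then try ext3's backup superblocks read-only:
cat /proc/mdstat                            # is md127 really up with both mirrors?
mdadm --examine /dev/sda1 /dev/sdb1         # compare the RAID superblocks on the members
dumpe2fs /dev/md127 | grep -i superblock    # list primary and backup superblock locations
fsck.ext3 -n -b 32768 /dev/md127            # read-only check against a backup superblock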
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
> In the rescue mode, recreate the partition table which was on the sdb
> by copying over what is on sda
>
>
> sfdisk -d /dev/sda | sfdisk /dev/sdb
>
> This will give the kernel enough to know it has things to do on
> rebuilding parts.
Once I made sure I retrieved all my data, I followed your suggestion,
and it looks
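For completeness, the usual follow-up after cloning the partition table is to re-add the new partitions to their arrays and watch the resync (md and partition names below are illustrative):
sfdisk -d /dev/sda | sfdisk /dev/sdb        # clone the partition table, as suggested above
mdadm --add /dev/md0 /dev/sdb1              # re-add each member partition to its array
mdadm --add /dev/md1 /dev/sdb2
watch cat /proc/mdstat                      # follow the rebuild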
2019 Jul 23
2
mdadm issue
Just rebuilt a C6 box last week as C7. Four drives, with sda and sdb for
root, using RAID-1 and LUKS encryption.
Layout:
lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
├─sda1     8:1    0   200M  0 part /boot/efi
├─sda2
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
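If stale array definitions are suspected, one way (a sketch) to regenerate the file from what the kernel actually sees, and to make sure the initramfs copy matches, is:
mdadm --detail --scan > /etc/mdadm.conf.new   # current view of the running arrays
diff /etc/mdadm.conf /etc/mdadm.conf.new      # compare against the cached file
dracut -f                                     # rebuild the initramfs (EL6/EL7; older releases use mkinitrd)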
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present.
This prevents starting arrays in a degraded state.
The second mdadm call (after LVM is scanned) scans the devices not yet in use and attempts to run all found arrays, even if they are degraded.
Two new tests are added.
This fixes rhbz1527852.
Here is boot-benchmark
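The two-pass idea reads roughly like this sketch (paraphrased; not the actual patch text):
mdadm -v --assemble --scan --no-degraded    # first pass: only start arrays with every expected member
# ... LVM scan happens here ...
mdadm -v --assemble --scan --run            # second pass: start whatever is left, degraded or not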
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my
options?
Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3
/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
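The usual sequence for a failed member like sdf1 is sketched below; double-check which physical drive is which before pulling anything:
mdadm --detail /dev/md127                   # confirm sdf1 is the faulty member
mdadm /dev/md127 --remove /dev/sdf1         # remove it (it is already marked (F))
# physically replace the drive, partition it like the others, then:
mdadm /dev/md127 --add /dev/sdf1
cat /proc/mdstat                            # watch the RAID10 rebuild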
2014 Jun 27
2
virt_blk BUG: sleeping function called from invalid context
Hi All,
We've had a report[1] of the virt_blk driver causing a lot of spew
because it's calling a sleeping function from an invalid context. The
backtrace is below. This is with kernel v3.16-rc2-69-gd91d66e88ea9.
The reporter is on CC and can give you relevant details.
josh
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1113805
[drm] Initialized bochs-drm 1.0.0 20130925 for
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and
includes various fixes for the other two patches.
Rich.
2015 Mar 07
2
which uuid to specify a raid in fstab
I'm confused about which UUID to use to identify a software RAID in fstab.
lsblk -fs shows:
md127p1   ext4     c43af789-82aa-49e9-a8ed-acd52b1cdd58  /y
└─md127   ext4     39c20575-4257-4fd7-b5c8-8a15757e9e8e
  ├─sdb1  linux_r  hostname:0  af77830e-8cfd-9012-62ce-e57105c3bf6c
  │ └─sdb
  ├─sdc1  linux_r  hostname:0  af77830e-8cfd-9012-62ce-e57105c3bf6c
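In short: fstab wants the filesystem UUID of the device you actually mount (the one lsblk/blkid shows for md127p1), not the RAID member UUID that mdadm uses. Using the values above, the matching fstab line would look something like:
UUID=c43af789-82aa-49e9-a8ed-acd52b1cdd58  /y  ext4  defaults  0 2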
2016 Oct 19
0
renaming mdadm name
Hi
I have a disk on which two of the partitions are part of a RAID1 setup. I'm
trying to rename the second RAIDed partition:
mdadm -E /dev/sdc4
/dev/sdc4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 83d7657b:ebfddcb7:36b0fa14:d29a350c
Name : oldname:2
Creation Time : Tue Aug 30 15:25:10 2016
Raid Level : raid1
Raid Devices : 2
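The name stored in the superblock can be rewritten at assembly time. A sketch (the array and second member are assumed names; note the "oldname:" prefix is the homehost, which is changed with --update=homehost rather than --update=name):
mdadm --stop /dev/md2
mdadm --assemble /dev/md2 --update=name --name=newname /dev/sdc4 /dev/sdd4
mdadm -E /dev/sdc4 | grep -i name           # verify the superblock now carries the new name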
2010 May 28
2
permanently add md device
Hi All
Currently I'm setting up a 5.4 server and trying to create a 3rd RAID device. When I run:
$ mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the RAID is being configured, but somehow
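The missing piece is usually persistence: record the array in /etc/mdadm.conf so it comes back as md2 after a reboot. A sketch along those lines (the filesystem and mount point are examples):
mdadm --detail --scan | grep md2 >> /etc/mdadm.conf   # persist the new array definition
mkfs.ext3 /dev/md2                                    # example filesystem; pick what fits
echo '/dev/md2  /data  ext3  defaults  0 2' >> /etc/fstab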
2020 Nov 23
3
Replacing SW RAID-1 with SSD RAID-1
On 11/23/20 10:46 AM, Simon Matter wrote:
>> Hi,
>>
>> I want to replace my hard drives based SW RAID-1 with SSD's.
>>
>> What would be the recommended procedure? Can I just remove one drive,
>> replace with SSD and rebuild, then repeat with the other drive?
>
> I suggest to "mdadm --fail" one drive, then "mdadm --remove" it.
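Spelled out, that procedure looks roughly like this for the first drive (repeat for the second once the resync finishes; device names are examples):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1    # drop the old HDD from the mirror
# power down (or hot-swap), replace sdb with the SSD, then:
sfdisk -d /dev/sda | sfdisk /dev/sdb                  # copy the partition layout
mdadm /dev/md0 --add /dev/sdb1                        # rebuild onto the SSD
cat /proc/mdstat                                      # wait for the resync to complete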