similar to: Mount removed raid disk back on same machine as original raid

Displaying 20 results from an estimated 2000 matches similar to: "Mount removed raid disk back on same machine as original raid"

2023 Mar 08
1
Mount removed raid disk back on same machine as original raid
I have a Centos 7 system with an mdraid array (raid 1). I removed a drive from it a couple of months ago and replaced it with a new drive. Now I want to recover some information from that old drive. I know how to mount the drive, and have done so on another system to confirm that the information I want is there. My question is this: What is going to happen when I try to mount a drive that
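A minimal sketch of one way to look at such a drive without disturbing the running array, assuming the old member shows up as /dev/sdc1 (the device name is hypothetical, and whether mdadm assembles a member whose UUID matches a running array can vary):

# Check whether the partition still carries an md superblock
mdadm --examine /dev/sdc1

# If it does, assemble it read-only under a fresh array name so it
# cannot collide with the array that is already running
mdadm --assemble --readonly --run /dev/md9 /dev/sdc1

# Mount the result read-only and copy the data off
mkdir -p /mnt/olddrive
mount -o ro /dev/md9 /mnt/olddrive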
2020 Sep 18
0
Drive failed in 4-drive md RAID 10
> I got the email that a drive in my 4-drive RAID10 setup failed. What are my options?
>
> Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
>
> mdadm.conf:
>
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3
>
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options?

Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:

Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
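The usual replacement sequence, sketched with mdadm; the array and device names below follow the excerpt (md127, sdf1) but are otherwise assumptions:

# Confirm which member is marked faulty
mdadm --detail /dev/md127

# Remove the member already flagged (F) in /proc/mdstat
mdadm --manage /dev/md127 --remove /dev/sdf1

# After swapping the disk and partitioning it like the others,
# add it back and let the array rebuild
mdadm --manage /dev/md127 --add /dev/sdf1
cat /proc/mdstat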
2023 Mar 08
1
Mount removed raid disk back on same machine as original raid
Once upon a time, Bowie Bailey <Bowie_Bailey at BUC.com> said:
> What is going to happen when I try to mount a drive that the system thinks is part of an existing array?
I don't _think_ anything special will happen - md RAID doesn't go actively looking for drives like that AFAIK. And RAID 1 means you should be able to ignore RAID and just access the contents directly.
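Whether a RAID 1 member can be mounted directly depends on where the md superblock sits; a quick check, assuming the removed drive shows up as /dev/sdc1 (hypothetical name):

mdadm --examine /dev/sdc1 | grep -i version
# 0.90 and 1.0 metadata sit at the end of the device, so the filesystem
# starts at sector 0 and "mount -o ro /dev/sdc1 /mnt" should work as-is.
# 1.1 and 1.2 metadata sit at or near the start, so assemble with mdadm
# instead of mounting the raw partition.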
2006 Nov 09
2
USB disk dropping out under light load
Hi all, I'm running a pretty updated CentOS4 x86_64 server (still on kernel 2.6.9-42.0.2, but apart from that fully up to date against the official repos) with a USB-disk attached (the USB-disk is a 750G Seagate disk in a Seagate enclosure) over a USB hub. I've noticed several times that after longish periods of activity, the disk drops out (log from last time, below). In this case,
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi, When I upgraded my cluster, df started returning some odd numbers for my legacy volumes. Newly created volumes after the upgrade, df works just fine. I have been researching since Monday and have not found any reference to this symptom. "vm-images" is the old legacy volume, "test" is the new one. [root at st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
2015 Mar 18
0
unable to recover software raid1 install
On Tue, 2015-03-17 at 23:28 +0100, johan.vermeulen7 at telenet.be wrote:
> on a Centos5 system installed with software raid I'm getting:
> raid1: raid set md127 active with 2 out of 2 mirrors
> md: ... autorun DONE
> md: Autodetecting RAID arrays
> md: autorun ...
> md: autorun DONE
> Trying to resume from /dev/md1
Hi
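If autodetection stalls like this, a sketch of assembling the mirrors by hand from a rescue shell (the array and partition names are assumptions):

# List every md superblock that can be found
mdadm --examine --scan

# Assemble everything the scan reports
mdadm --assemble --scan

# Or assemble one mirror explicitly from its members
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2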
2009 Oct 26
1
Bootable USB key...
Hi, I have a 'little' issue with my bootable USB keys... The following used to work (isolinux 3.11-4):

Device Boot Start End Blocks Id System
/dev/sdg1 * 1 3 23126 6 FAT16
/dev/sdg2 4 1023 7873380 83 Linux

mkfs.vfat -n BOOT /dev/sdg1
mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdg2
syslinux -s /dev/sdg1
cd
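For comparison, a sketch of the sequence as it is commonly done with syslinux, assuming the key is /dev/sdg and the layout above; the mbr.bin path differs between distributions:

# Filesystems, as above
mkfs.vfat -n BOOT /dev/sdg1
mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdg2

# Boot loader into the FAT partition, which must be marked active
syslinux -s /dev/sdg1

# MBR that chains to the active partition (path is distribution-dependent)
cat /usr/share/syslinux/mbr.bin > /dev/sdg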
2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 09:24, Michael Volz wrote:
> Hi Niki,
>
> md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk?

[root at nestor:~] # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232,9G 0 disk
├─sda1 8:1 0 3,9G 0 part
│ └─md126 9:126 0 3,9G 0 raid1 [SWAP]
├─sda2 8:2
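A sketch of how to compare what md is actually using per device against the member partition sizes; the device and array names follow the thread but are assumptions:

# Per-device size md is using for the RAID 5 array
mdadm --detail /dev/md127 | grep -E 'Array Size|Used Dev Size'

# Sizes of the member partitions themselves, in bytes
lsblk -b -o NAME,SIZE /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3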
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael
----- Original Message -----
From: "Niki Kovacs" <info at microlinux.fr>
To: "CentOS mailing list" <CentOS at centos.org>
Sent: Wednesday, 18 February 2015 08:09:13
Subject: [CentOS] CentOS 7: software RAID 5
2014 Jun 29
0
virt_blk BUG: sleeping function called from invalid context
On Fri, Jun 27, 2014 at 07:57:38AM -0400, Josh Boyer wrote: > Hi All, > > We've had a report[1] of the virt_blk driver causing a lot of spew > because it's calling a sleeping function from an invalid context. The > backtrace is below. This is with kernel v3.16-rc2-69-gd91d66e88ea9. Hi Jens, pls see below - it looks like the call to blk_mq_end_io from IRQ context is
2016 Jun 01
0
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host, and kicked off a disk check, and it ran at the expected speed overnight. I started kafka this morning, and the raid check's speed immediately dropped down to ~2000K/Sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The raid check is now running between 100000K/Sec and 200000K/Sec, and has been for several
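For reference, a sketch of the knobs mentioned here plus the md resync limits that often get tuned alongside them; the numbers are only illustrative:

# Write-back cache on the drives (as in the message above)
hdparm -W1 /dev/sd*

# Floor and ceiling for md check/resync throughput, in KB/s
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000

# Watch the check speed
cat /proc/mdstat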
2011 Jul 22
0
Strange problem with LVM, device-mapper, and software RAID...
Running on a up-to-date CentOS 5.6 x86_64 machine: [heller at ravel ~]$ uname -a Linux ravel.60villagedrive 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux with a TYAN Computer Corp S4881 motherboard, which has a nVidia 4 channel SATA controller. It also has a Marvell Technology Group Ltd. 88SX7042 PCI-e 4-port SATA-II (rev 02). This machine has a 120G
2016 May 25
6
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2014 Jun 27
2
virt_blk BUG: sleeping function called from invalid context
Hi All, We've had a report[1] of the virt_blk driver causing a lot of spew because it's calling a sleeping function from an invalid context. The backtrace is below. This is with kernel v3.16-rc2-69-gd91d66e88ea9. The reporter is on CC and can give you relevant details. josh [1] https://bugzilla.redhat.com/show_bug.cgi?id=1113805 [drm] Initialized bochs-drm 1.0.0 20130925 for
2014 Jun 27
2
virt_blk BUG: sleeping function called from invalid context
Hi All, We've had a report[1] of the virt_blk driver causing a lot of spew because it's calling a sleeping function from an invalid context. The backtrace is below. This is with kernel v3.16-rc2-69-gd91d66e88ea9. The reporter is on CC and can give you relevant details. josh [1] https://bugzilla.redhat.com/show_bug.cgi?id=1113805 [drm] Initialized bochs-drm 1.0.0 20130925 for
2013 Feb 11
1
mdadm: hot remove failed for /dev/sdg: Device or resource busy
Hello all, I have run into a sticky problem with a failed device in an md array, and I asked about it on the linux raid mailing list, but since the problem may not be md-specific, I am hoping to find some insight here. (If you are on the MD list, and are seeing this twice, I humbly apologize.) The summary is that during a reshape of a raid6 on an up to date CentOS 6.3 box, one disk failed, and
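A sketch of the order of operations that usually has to happen before the kernel releases a busy member; the array and device names here are hypothetical:

# A member must be marked failed before it can be removed
mdadm --manage /dev/md0 --fail /dev/sdg1
mdadm --manage /dev/md0 --remove /dev/sdg1

# If the device node has already vanished, mdadm also accepts the
# keywords "failed" and "detached" in place of a device name
mdadm --manage /dev/md0 --remove detached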
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
On 05/12/2018 05:37, Nicolas Kovacs wrote:
> On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
>> In the rescue mode, recreate the partition table which was on the sdb by copying over what is on sda
>>
>> sfdisk -d /dev/sda | sfdisk /dev/sdb
>>
>> This will give the kernel enough to know it has things to do on rebuilding parts.
>
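Filling out that recipe as it is commonly completed, assuming sda is the surviving disk, sdb the replacement, and md0/md1 the affected arrays (all names are assumptions):

# Copy the partition layout from the good disk to the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Re-add the new partitions so the mirrors rebuild
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2

# Put a boot loader back on the new disk (CentOS 7 uses grub2)
grub2-install /dev/sdb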
2016 May 27
2
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:

[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
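A sketch of how that layout could be created by hand with mdadm, if one were not going through the installer; the array numbering and metadata choices are assumptions:

# /boot and swap as RAID 1 across all four disks
# (some older boot loaders prefer --metadata=1.0 for the /boot array)
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2

# / as RAID 5 across the four large partitions, with no spares
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3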