Displaying 20 results from an estimated 10000 matches similar to: "Replacing failed software RAID drive"
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
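For threads like this one, the speed of a running md check and the kernel throttles that bound it are easy to inspect; a minimal sketch, assuming a software RAID (md) setup, with values purely illustrative:
cat /proc/mdstat                                            # shows check/resync progress and current speed
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max    # KB/s floor and ceiling applied to md checks
echo 50000 > /proc/sys/dev/raid/speed_limit_min             # example: raise the floor if the check is being starved by other I/O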
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy.  The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. 
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 	05/27/16 	_x86_64_	(32 CPU)
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because  
two drives became unavailable.  After adjusting the cables on several  
occasions and shutting down and restarting, I was able to see the  
drives again.  This is when I snatched defeat from the jaws of  
victory.  Please, someone with vast knowledge of how RAID 5 with mdadm  
works, tell me if I have any chance at all
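When both members dropped out because of cabling rather than real media failure, the usual recovery attempt is a forced reassembly from the freshest superblocks, checked read-only before trusting it. A minimal sketch, with array and member names purely illustrative:
mdadm --examine /dev/sd[b-e]1                      # compare event counters and roles before touching anything
mdadm --assemble --force /dev/md0 /dev/sd[b-e]1    # force-assemble despite the stale event counts
fsck -n /dev/md0                                   # read-only filesystem check before mounting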
2006 Sep 12
3
RE: Help: Xen HVM Domain can ONLY support four hard drives at most???
>-----Original Message-----
>From: xen-users-bounces@lists.xensource.com
>[mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Liang Yang
>Sent: September 12, 2006 8:57
>To: xen-users@lists.xensource.com
>Subject: [Xen-users] Help: Xen HVM Domain can ONLY support four hard
>drives at most???
>
>Hi,
>
>I have 5 SATA hard drives and I want to expose all these five
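For context, fully emulated HVM guests of that era expose only four IDE targets (hda-hdd); extra disks are normally handed to the guest as paravirtual (xvd) devices once PV drivers are installed. A sketch of a domU disk stanza, with every path and device name illustrative:
disk = [ 'phy:/dev/sda,hda,w',
         'phy:/dev/sdb,hdb,w',
         'phy:/dev/sdc,hdc,w',
         'phy:/dev/sdd,hdd,w',
         'phy:/dev/sde,xvde,w' ]   # fifth drive as a PV device; requires PV drivers in the HVM guest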
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
> 
> [root@r1k1 ~]# hdparm -tT /dev/sda
> 
> /dev/sda:
>  Timing cached reads:   Alarm clock
> [root@r1k1 ~]#
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives 
except one are taking 6-8msec, but one is very
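A quick way to pick out such an outlier from the iostat output (the await column position varies by sysstat version, so treat the field number as illustrative):
iostat -xdm 1 | awk '/^sd/ && $10+0 > 20 {print $1, "await:", $10}'   # flag drives whose await exceeds ~20 ms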
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped to ~2000K/sec.
I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
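For anyone reproducing this, the cache state and its effect on the check can be verified roughly like this (device names illustrative; write-back cache on drives without power-loss protection risks losing data on power failure):
hdparm -W /dev/sda        # query the current write-cache setting
hdparm -W1 /dev/sda       # enable write-back cache (-W0 disables it again)
cat /proc/mdstat          # watch the check progress and current speed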
2008 Mar 17
0
Partition/filesystem expansion difficulties
CentOS 4.6
Hi All:
I am now officially stumped. I have tried everything that I can think
of, I have googled until I am blue in the face, and I have tried just
about everything I could find that looked the least bit like it
might apply, and still no (or at best only partial) joy. I am going to
cram as much information into this email as I can and hope that
someone out there can either tell me what I am
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi,
I'm running OpenSuse 12.2 with kernel 3.5.3
HBA= LSI 1068e using the MPTSAS driver (patched)
(https://patchwork.kernel.org/patch/1379181/)
SANOS1:/media # uname -a
Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64
x86_64 GNU/Linux
I've tried to simulate a disk replacement but it seems that now
/dev/sdg is stuck in the btrfs pool (RAID10)
SANOS1:/media #
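On a kernel this old the in-place 'btrfs replace' path is not available yet (it arrived around 3.8), so the usual sequence is add-then-delete while mounted degraded. A sketch, with the mount point and devices purely illustrative:
btrfs filesystem show /media/pool         # confirm which devid belongs to the pulled disk
btrfs device add /dev/sdh /media/pool     # add the replacement first so RAID10 has enough members
btrfs device delete missing /media/pool   # drop the absent device and rebalance onto the new one
btrfs filesystem show /media/pool         # verify the stale /dev/sdg entry is gone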
2020 Sep 09
0
Btrfs RAID-10 performance
>>>>> "Miloslav" == Miloslav H?la <miloslav.hula at gmail.com> writes:
Miloslav> Hello,
Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got reply: 
Miloslav> "RAID-1 would be preferable" 
Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112 at lechevalier.se/T/). 
Miloslav> May I ask you
2020 Sep 09
4
Btrfs RAID-10 performance
Hi, thank you for your reply. I'll continue inline...
Dne 09.09.2020 v 3:15 John Stoffel napsal(a):
> Miloslav> Hello,
> Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got reply:
> Miloslav> "RAID-1 would be preferable"
> Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112 at lechevalier.se/T/).
>
2020 Sep 09
0
Btrfs RAID-10 performance
The 9361-8i does support passthrough (JBOD mode). Make sure you
have the latest firmware.
On Wednesday, 09/09/2020 at 03:55 Miloslav Hůla wrote:
Hi, thank you for your reply. I'll continue inline...
Dne 09.09.2020 v 3:15 John Stoffel napsal(a):
> Miloslav> Hello,
> Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I
got reply:
> Miloslav> "RAID-1
2010 May 28
2
permanently add md device
Hi All
Currently I'm setting up a 5.4 server and trying to create a 3rd RAID device; when I run:
$mdadm  --create /dev/md2 -v  --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the raid is being configured. but somehow
2020 Sep 09
0
Btrfs RAID-10 performance
>>>>> "Miloslav" == Miloslav H?la <miloslav.hula at gmail.com> writes:
Miloslav> Hi, thank you for your reply. I'll continue inline...
Me too... please look for further comments, especially about 'fio' and
NetApp usage.
Miloslav> Dne 09.09.2020 v 3:15 John Stoffel napsal(a):
Miloslav> Hello,
Miloslav> I sent this into the Linux Kernel Btrfs
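Since 'fio' keeps coming up in this thread, a representative random-write run of the kind usually quoted for a mail store looks something like this (every parameter illustrative):
fio --name=randwrite --directory=/data --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --size=1g --numjobs=4 --runtime=60 \
    --time_based --group_reporting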
2019 Jun 14
3
zfs
Hi, folks,
   testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I
pulled one drive (an 11-drive pool with one hot spare), and it resilvered with
the hot spare.  zpool status -x shows me
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
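Once the spare has finished resilvering, the usual follow-up is to install a physical replacement and let the spare return to standby. A sketch, with pool and device names purely illustrative:
zpool status -x                 # identify the faulted device and the spare now in use
zpool replace tank sdd sdm      # resilver onto the replacement; the spare returns to AVAIL afterwards
zpool detach tank sdd           # alternative: detach the faulted device to promote the spare permanently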
2020 Sep 07
4
Btrfs RAID-10 performance
Hello,
I sent this into the Linux Kernel Btrfs mailing list and I got the reply: 
"RAID-1 would be preferable" 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112 at lechevalier.se/T/). 
May I ask for comments from people around Dovecot?
We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro 
server with Intel(R) Xeon(R) CPU E5-2620 v4 @
2024 Oct 19
2
How many disks can fail before a catastrophic failure occurs?
Hi there.
I have 2 servers with these disks on each side:
pve01:~# df | grep disco
/dev/sdd          1.0T  9.4G 1015G   1% /disco1TB-0
/dev/sdh          1.0T  9.3G 1015G   1% /disco1TB-3
/dev/sde          1.0T  9.5G 1015G   1% /disco1TB-1
/dev/sdf          1.0T  9.4G 1015G   1% /disco1TB-2
/dev/sdg          2.0T   19G  2.0T   1% /disco2TB-1
/dev/sdc          2.0T   19G  2.0T   1%
2009 Jan 13
2
mounted.ocfs2 -f return Unknown: Bad magic number in inode
Hello,
I have installed OCFS2 without problems and use it for RAC 10gR2.
Only the Clusterware files are on OCFS2.
Multipath is also used.
When I issue: mounted.ocfs2 -f
I get a strange result:
Device                FS     Nodes
/dev/sda              ocfs2  Unknown: Bad magic number in inode
/dev/sda1             ocfs2  pocrhel2, pocrhel1
/dev/sdb              ocfs2  Not mounted
/dev/sdf         
2010 Sep 13
3
Proper procedure when device names have changed
I am running zfs-fuse on an Ubuntu 10.04 box.  I have a dual mirrored pool:
mirror sdd sde mirror sdf sdg
Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf.  The pool is of course very unhappy because the mirrors are no longer matched up and one device is "missing".  What is the proper procedure to deal with this?
-brian
-- 
This message posted from
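The standard fix for shifted device names is to re-import the pool using stable identifiers rather than sdX names. A sketch, assuming the pool is called tank and the zfs-fuse build supports import search paths:
zpool export tank
zpool import -d /dev/disk/by-id tank   # re-read the labels and bind the vdevs to persistent names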
2004 Oct 13
1
i-nodes showing 100% used whereas the partitions are empty
Output df -i
------------------
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/ 
/dev/sde              348548  348195     353  100% /ocfsa01
/dev/sdf              348548  348195     353  100% /ocfsa02
/dev/sdg              348548  348195     353  100% /ocfsa03
/dev/sdk              139410  138073    1337  100% /ocfsq01
 
Output df -kP
-----------------------
Filesystem        
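When df -i reports 100% inode use with little data, the inodes are almost always consumed by huge numbers of tiny files (or the filesystem was created with too few inodes). A quick, purely illustrative way to see where the files are:
df -i /ocfsa01                      # confirm inode usage on one of the affected filesystems
for d in /ocfsa01/*/; do printf '%8d %s\n' "$(find "$d" -xdev | wc -l)" "$d"; done | sort -rn | head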
2005 Oct 11
0
AW: Re: xen 3.0 boot problem
> > Well, I'm using the qla2340 here on several boxes. It works 
> > with Xen 2.0 but not with Xen 3.0, as part of SUSE Linux 10.0:
> 
> Interesting. If the driver really does work flawlessly in 
> Xen 2, then I think the culprit has to be interrupt routing.
> 
> Under Xen 3, does /proc/interrupts show you're receiving interrupts?
I cannot boot with