Displaying 20 results from an estimated 1000 matches similar to: "Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2"
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped down to ~2000K/sec.
I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
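The write-back cache state toggled above with `hdparm -W1` can be audited across drives first. A minimal sketch that parses `hdparm -W`-style output; the sample lines and device names below are fabricated, not from the thread:

```shell
# Sketch: list drives whose write-back cache is off, by parsing
# `hdparm -W` style output.  The sample output below is fabricated;
# on a live host you would feed in:
#   for d in /dev/sd?; do hdparm -W "$d"; done
hdparm_output='/dev/sda:
 write-caching =  0 (off)
/dev/sdb:
 write-caching =  1 (on)'
cache_off=$(printf '%s\n' "$hdparm_output" |
    awk '/^\/dev\//      { dev = $1 }
         /write-caching/ { if ($3 == 0) print dev }')
echo "$cache_off"
```

Drives reported here would be the candidates for `hdparm -W1` (keeping in mind that write-back caching trades data safety on power loss for speed).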
2016 May 27
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 05/25/2016 09:54 AM, Kelly Lesperance wrote:
> What we're seeing is that when the weekly raid-check script executes, performance nosedives and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec - the limit that's been set), but then quickly drops down to about 4000K/sec. The dev.raid.speed sysctls are at the defaults:
It looks like some pretty heavy writes are
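For reference, the dev.raid.speed sysctls mentioned above typically default to speed_limit_min=1000 and speed_limit_max=200000 (KB/s per device). A hypothetical fragment for raising the floor so a check is not starved by normal I/O; the file name and values are illustrative, not from the thread:

```
# /etc/sysctl.d/90-raid-check.conf  (hypothetical file name)
# Floor and ceiling, in KB/s per device, for md resync/check speed.
dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 200000
```

Raising speed_limit_min makes checks finish faster at the cost of more contention with application I/O.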
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root at r1k1 ~] # hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root at r1k1 ~] #
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
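The heuristic described here (one drive with substantially greater await than its peers) is easy to automate. A sketch assuming the el7 `iostat -x` column layout, where await is field 10; the sample rows and the 20 ms threshold are made up:

```shell
# Sketch: flag devices whose await stands out in `iostat -x` output.
# Field 10 is await in the layout assumed here; sample rows are fabricated.
iostat_sample='sda 0.00 0.00 1.0 2.0 0.1 0.2 10.0 0.10 7.2
sdb 0.00 0.00 1.0 2.0 0.1 0.2 10.0 0.95 94.3
sdc 0.00 0.00 1.0 2.0 0.1 0.2 10.0 0.09 6.8'
slow=$(printf '%s\n' "$iostat_sample" | awk '$10 > 20 { print $1 }')
echo "$slow"
```

On a live host you would pipe `iostat -xdmc 1` through the same filter instead of the canned sample.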
2013 Apr 05
0
DRBD + Remus High IO load frozen
Dear all,
I have installed DRBD 8.3.11, compiled from source. However, the backing
block device freezes when there is high I/O load. I use Remus for high
availability, and checkpointing is driven by Remus every 400 ms.
Checking iostat, I see the idle CPU drop sharply at each
checkpoint; when idle CPU reaches 0%, the local backing device
will freeze and damage the
2020 Jul 03
0
Slow terminal response Centos 7.7 1908
It was found that the software NIC team created in CentOS was having
issues due to a failing network cable. The team was going berserk with
up/down changes.
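Flapping like this usually leaves a trail in the kernel log, so it can be quantified before blaming hardware. A sketch with a fabricated log excerpt; on a real host, `journalctl -k` output for the team device would be the input:

```shell
# Sketch: count link flaps for a team port from kernel log lines.
# The log excerpt below is fabricated for illustration.
log='kernel: team0: Port device eth0 disabled
kernel: team0: Port device eth0 enabled
kernel: team0: Port device eth0 disabled
kernel: team0: Port device eth0 enabled'
flaps=$(printf '%s\n' "$log" | grep -c 'eth0 disabled')
echo "$flaps"
```

A steadily climbing count for one port points at that port's cable or NIC rather than at the team configuration.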
On Fri, Jul 3, 2020 at 10:12 AM Erick Perez - Quadrian Enterprises <
eperez at quadrianweb.com> wrote:
> Hey!
> I have a strange condition in one of the servers that I don't know where to
> start looking.
> I login to the
2020 Jul 03
1
Slow terminal response Centos 7.7 1908
Hi Erick,
What was the value of 'si' in top?
Best Regards,
Strahil Nikolov
On 3 July 2020 at 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote:
>It was found that the software NIC team created in CentOS was having
>issues due to a failing network cable. The team was going berserk with
>up/down changes.
>
>
>On Fri, Jul 3,
2020 Jul 03
2
Slow terminal response Centos 7.7 1908
Hey!
I have a strange condition in one of the servers that I don't know where to
start looking.
I log in to the server via SSH (can't do it any other way) and anything that I
type is slow.
HTTP sessions time out waiting for screen redraw. So, the server is acting
"slow".
The server is bare metal, no virtual services.
No alarms in the disk RAID.
Note: the server was restarted because of a power failure.
2008 May 08
7
100% iowait in domU with no IO tasks.
Hi.
I entered one of our domU tonight and see following problem:
# iostat -k 5
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.00 100.00 0.00 0.00
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda1 0.00 0.00 0.00 0 0
sda2 0.00 0.00 0.00 0
2012 May 16
1
Very High Load on Dovecot 2 and Errors in mail.err.
Hi,
I have a DELL PE R610 (32 GB RAM, 2x six-core CPU, and about 1.4 TB RAID 10)
running about 20,000 mail accounts behind 2 Dovecot IMAP/POP3 proxies on Debian Lenny.
The server ran for about a year without any problems. The 15-min load was between 0.5 and at most 8.
No high %iowait; CPU idle time was about 98%.
But since yesterday morning, the system load on the server has increased to over 500. I think
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi,
I'm running openSUSE 12.2 with kernel 3.5.3
HBA= LSI 1068e using the MPTSAS driver (patched)
(https://patchwork.kernel.org/patch/1379181/)
SANOS1:/media # uname -a
Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64
x86_64 GNU/Linux
I've tried to simulate a disk replacement but it seems that now
/dev/sdg is stuck in the btrfs pool (RAID10)
SANOS1:/media #
2010 May 28
2
permanently add md device
Hi All
Currently I'm setting up a 5.4 server and trying to create a 3rd RAID device. When I run:
$mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the RAID is being configured, but somehow
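Making an array like this survive a reboot is usually done by recording it in /etc/mdadm.conf, with the line generated by `mdadm --detail --scan`. A hypothetical fragment; the UUID is a placeholder, not from the thread:

```
# /etc/mdadm.conf -- append the output of `mdadm --detail --scan`
ARRAY /dev/md2 level=raid6 num-devices=15 UUID=00000000:00000000:00000000:00000000
```

On newer systems the initramfs must also be rebuilt so the array assembles at boot.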
2019 Jun 14
3
zfs
Hi, folks,
testing ZFS. I'd created a raidz2 zpool and ran a large backup onto it. Then I
pulled one drive (an 11-drive pool, with one hot spare), and it resilvered onto
the hot spare. zpool status -x shows me
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
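A state line like this is easy to watch from cron. A sketch that extracts the pool state from `zpool status`-style text; the sample output and pool name below are fabricated:

```shell
# Sketch: pull the pool state out of `zpool status` output so a cron job
# can alert on anything other than ONLINE.  Sample text is fabricated.
status='  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is
        missing or invalid.'
state=$(printf '%s\n' "$status" | awk '$1 == "state:" { print $2 }')
echo "$state"
```

On a live host, `zpool status tank` (or `zpool status -x`) would replace the canned text.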
2016 May 25
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
[merging]
The HBA the drives are attached to has no configuration that I'm aware of. We would have had to accidentally change 23 of them ...
Thanks,
Kelly
On 2016-05-25, 1:25 PM, "Kelly Lesperance" <klesperance at blackberry.com> wrote:
>They are:
>
>[root at r1k1 ~] # hdparm -I /dev/sda
>
>/dev/sda:
>
>ATA device, with non-removable media
> Model
2005 Oct 11
0
AW: Re: xen 3.0 boot problem
> > Well, I'm using the qla2340 here on several boxes. It works
> > with Xen 2.0 but not with Xen 3.0 as part of SUSE Linux 10.0:
>
> Interesting. If the driver really does work flawlessly in
> Xen 2, then I think the culprit has to be interrupt routing.
>
> Under Xen 3, does /proc/interrupts show you're receiving interrupts?
I cannot boot with
2016 Jun 01
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Kelly Lesperance wrote:
> I did some additional testing - I stopped Kafka on the host, and kicked
> off a disk check, and it ran at the expected speed overnight. I started
> Kafka this morning, and the RAID check's speed immediately dropped down to
> ~2000K/sec.
>
> I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*).
> The RAID check is now running
2010 Feb 24
2
How to read percentage and currency data?
I'm struggling to find any help on this seemingly simple question - how does
one read data with percentage (%) or currency ($, etc.) signs? When I try
to read a data file which has any of those symbols in the data fields, they
are read as characters rather than values. Is there a function or library
which can deal with such values?
As an example, I use this sample from one of chinna's
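The thread is about R, but the usual first step works in any tool: strip the symbols so the fields parse as plain numbers. A sketch in shell with a made-up CSV sample:

```shell
# Sketch: remove % and $ so the fields parse as plain numbers.
# The CSV sample below is made up for illustration.
csv='price,discount
$10.50,25%
$3.99,5%'
cleaned=$(printf '%s\n' "$csv" | sed 's/[$%]//g')
echo "$cleaned"
```

The cleaned text can then be read by any stats package as numeric columns; in R itself, a gsub over the character columns achieves the same thing.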
2009 Apr 10
0
Anaconda kickstart laying out / randomly; ignoring --ondisk in part command
This problem occurred in both CentOS 5.2 and the new (awesome!) 5.3:
I'm using a SuperMicro motherboard with a 6 port NVidia SATA controller on
the motherboard and an 8 port SuperMicro Marvel controller in a slot.
In my kickstart file, I have
part raid.01 . --ondisk=sda .
part raid.02 . --ondisk=sdb .
part raid.03 . --ondisk=sdc .
part raid.04 . --ondisk=sdd .
.
raid /
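For completeness, a hypothetical kickstart fragment of the shape the poster describes; the sizes, device names, and RAID level are placeholders, not taken from the original file:

```
# Hypothetical kickstart fragment (placeholders, not the poster's file):
part raid.01 --size=1 --grow --ondisk=sda
part raid.02 --size=1 --grow --ondisk=sdb
part raid.03 --size=1 --grow --ondisk=sdc
part raid.04 --size=1 --grow --ondisk=sdd
raid / --level=RAID5 --device=md0 raid.01 raid.02 raid.03 raid.04
```

One common explanation for --ondisk appearing to be ignored is that the installer enumerates controllers in a different order than the running system, so sdX names don't map to the same physical disks at install time.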