similar to: DRBD + Remus High IO load frozen

Displaying 20 results from an estimated 500 matches similar to: "DRBD + Remus High IO load frozen"

2020 Jul 03
0
Slow terminal response CentOS 7.7 1908
It was found that the software NIC team created in CentOS was having issues due to a failing network cable. The team was going berserk with up/down changes. On Fri, Jul 3, 2020 at 10:12 AM Erick Perez - Quadrian Enterprises < eperez at quadrianweb.com> wrote: > Hey! > I have a strange condition in one of the servers that I don't know where to > start looking. > I login to the
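A flapping team member like the one described leaves a trail of link state changes in the kernel log; a quick way to count them is to group the "Link is Up/Down" lines per interface. The log lines below are made up for illustration (interface names are assumptions); on a live box you would feed `dmesg` or the journal into the same pipeline.

```shell
# Count link up/down transitions per interface; a failing cable shows a
# lopsided, rapidly growing count. Sample log lines are hypothetical.
flaps=$(grep -Eo '(eth|eno|enp)[0-9a-z]+: Link is (Up|Down)' <<'EOF' | sort | uniq -c | sort -rn
[1001.120] e1000e eno1: Link is Down
[1001.910] e1000e eno1: Link is Up
[1002.300] e1000e eno1: Link is Down
[1400.010] e1000e eno2: Link is Up
EOF
)
echo "$flaps"
```

The interface with the highest Down count is the prime suspect for the bad cable.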
2020 Jul 03
1
Slow terminal response CentOS 7.7 1908
Hi Erick, what was the value of 'si' in top? Best Regards, Strahil Nikolov. On 3 Jul 2020 at 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote: >It was found that the software NIC team created in CentOS was having >issues due to a failing network cable. The team was going berserk with >up/down changes. > > >On Fri, Jul 3,
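The 'si' column Strahil asks about is softirq time, which a flapping NIC tends to inflate. A minimal sketch for reading it without top, from the aggregate "cpu" line of /proc/stat (field 8 is softirq jiffies):

```shell
# Softirq share of total CPU time since boot, from /proc/stat.
si_pct=$(awk '/^cpu /{t=0; for(i=2;i<=NF;i++) t+=$i; printf "%.2f", 100*$8/t}' /proc/stat)
echo "softirq: ${si_pct}%"
```

Note that top's 'si' is an instantaneous rate while this figure is cumulative since boot; sample twice and diff the counters to get a rate.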
2020 Jul 03
2
Slow terminal response CentOS 7.7 1908
Hey! I have a strange condition in one of the servers and I don't know where to start looking. I log in to the server via SSH (can't do it any other way) and anything that I type is slow; HTTP sessions time out waiting for screen redraw. So, the server is acting "slow". The server is bare metal, no virtual services, no alarms in the disk RAID. Note: the server was restarted because of a power failure.
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650, 128 GB RAM, 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA, dual-port 10 GB NIC. The drives are configured as one large
2012 May 16
1
Very High Load on Dovecot 2 and Errors in mail.err.
Hi, I have a Dell PE R610 (32 GB RAM, 2x six-core CPU, and about 1.4 TB RAID 10) running with 20,000 mail accounts behind 2 Dovecot IMAP/POP3 proxies on Debian Lenny. The server ran for about a year without any problems. The 15-min load was between 0.5 and a max of 8. No high iowait. CPU idle time was about 98%. But since yesterday morning the system load on the server has increased to over 500. I think
2016 May 27
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 05/25/2016 09:54 AM, Kelly Lesperance wrote: > What we're seeing is that when the weekly raid-check script executes, performance nose dives, and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec - the limit that's been set), but then quickly drops down to about 4000K/Sec. dev.raid.speed sysctls are at the defaults: It looks like some pretty heavy writes are
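The "dev.raid.speed sysctls" mentioned above are the md resync throttles, in KiB/s. A sketch for inspecting and raising them (requires root; the 50000 below is an arbitrary example value, not a recommendation):

```shell
# Show the current md resync/check speed limits (KiB/s).
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# Raise the floor so the weekly check is not starved by normal I/O (example value).
sysctl -w dev.raid.speed_limit_min=50000
```

speed_limit_min is the rate md tries to sustain even under competing I/O; speed_limit_max caps the check when the array is otherwise idle.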
2008 May 08
7
100% iowait in domU with no IO tasks.
Hi. I entered one of our domU tonight and saw the following problem:
# iostat -k 5
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     0.00   100.00    0.00   0.00
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda1     0.00        0.00        0.00         0         0
sda2     0.00        0.00        0.00         0
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the raid check's speed immediately dropped down to ~2000K/sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The raid check is now running between 100000K/sec and 200000K/sec, and has been for several
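The write-back cache toggle used above can be queried and applied per drive; a sketch, assuming a /dev/sda device name (requires root, and note the drive cache is volatile, so check your barrier/FUA situation before enabling it on data you care about):

```shell
# Query the current write-caching setting for the drive (assumed device name).
hdparm -W /dev/sda
# Enable the write-back cache, as in the post above; -W0 disables it again.
hdparm -W1 /dev/sda
```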
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root at r2k1 ~] # iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
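When scanning `iostat -x` output like the above, the %util column (the last field) is the quickest flag for a saturated disk. A sketch that filters it with awk; the two data rows below are made-up sample values, not real measurements:

```shell
# Print devices whose %util (last column) exceeds 90; sample data is hypothetical.
hot=$(awk '$1 ~ /^sd/ && $NF+0 > 90 {print $1, "util", $NF "%"}' <<'EOF'
Device: rrqm/s wrqm/s r/s  w/s   rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda     0.00   1.20   0.50 85.00 0.01  40.00 958.00   2.10     24.00 1.10  95.30
sdb     0.00   0.80   0.30 12.00 0.00  5.00  850.00   0.10     2.00  0.50  6.10
EOF
)
echo "$hot"
```

On a live host, replace the here-document with `iostat -xd 1 2 | tail -n +7` or similar so only the second (interval) sample is parsed.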
2013 Mar 28
1
Xen Remus DRBD dual primary frozen
Dear all, I sent this problem earlier, but perhaps without enough detail; here I try to describe it more fully. I hope somebody can help me pinpoint the problem. First of all, I used Ubuntu 12.04 x64 for both domain0 and domainU, with modifications to run under the Xen hypervisor and work with Remus. I followed and configured Remus with these notes
2013 Mar 19
0
Remus DRBD frozen
Hi all, I don't know if my question relates to Xen at all; I am trying to use DRBD for disk replication when I run Remus. However, when I run Remus, sometimes my Dom-U freezes. I checked the log file and it seems to be caused by DRBD freezing: [ 875.616068] block drbd1: Local backing block device frozen [ 887.648072] block drbd1: Local backing block device
2006 Jun 12
1
kernel BUG at /usr/src/ocfs2-1.2.1/fs/ocfs2/file.c:494!
Hi, First of all, I'm new to ocfs2 and drbd. I set up two identical servers (Athlon64, 1GB RAM, GB-Ethernet) with Debian Etch, compiled my own kernel (2.6.16.20), then compiled the drbd-modules and ocfs (modules and tools) from source. The process of getting everything up and running was very easy. I have one big 140GB partition that is synced with drbd (in c-mode) and has an ocfs2
2014 Sep 02
1
samba_spnupdate invoked oom-killer
Hello all, has anyone seen this before? Did samba_spnupdate really cause the crash? [ 49.753564] block drbd1: drbd_sync_handshake: [ 49.753571] block drbd1: self BB16E125AF60AEDC:0000000000000000:30D97136FB1DA7A3:30D87136FB1DA7A3 bits:0 flags:0 [ 49.753576] block drbd1: peer 6365B5AFF049F16D:BB16E125AF60AEDD:30D97136FB1DA7A2:30D87136FB1DA7A3 bits:1 flags:0 [ 49.753580] block drbd1:
2013 Feb 18
2
Kernel Error with Debian Squeeze, DRBD, 3.2.0-0.bpo.4-amd64 and Xen4.0
Hello list, I am running Debian Squeeze and installed DRBD, 3.2.0-0.bpo.4-amd64, and Xen 4.0 from backports. Sometimes I get ugly kernel messages like this: [257318.441757] BUG: unable to handle kernel paging request at ffff880025f19000 Log: [256820.643918] xen-blkback: ring-ref 772, event-channel 16, protocol 1 (x86_64-abi) [256830.802492] vif86.0: no IPv6 routers present [256835.674481]
2007 Jun 29
0
centos drbd - mounts/ replication
Hi, I would normally post this to the DRBD list, but it is so low-traffic/low-volume (plus Austria might be asleep right now) that I figured I'd ask here in case anyone has gotten DRBD working on CentOS. Right now my system says I'm only the 971st person to even install it... It's been out for years, so likely this just means version 8. But you'll only see a couple of posts
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
An HTML attachment was scrubbed... URL: http://oss.oracle.com/pipermail/ocfs2-users/attachments/20110303/0fbefee6/attachment.html
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, DRBD. Hi, on a DRBD primary node, when attempting to mount our cluster partition: sudo mount -t ocfs2 /dev/drbd1 /cluster we get: mount.ocfs2: Unable to access cluster service while trying to join the group. We then call: sudo dpkg-reconfigure ocfs2-tools Setting cluster stack "o2cb": OK Starting O2CB cluster ocfs2: OK And all is well: Aug 22 13:48:23 uc1 kernel: [
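On Debian/Ubuntu, the dpkg-reconfigure step above effectively writes the O2CB settings that the o2cb init script reads from /etc/default/o2cb. A sketch of the typical values (an assumption based on ocfs2-tools defaults; the "Unable to access cluster service" error usually means O2CB was not enabled or started before the mount):

```shell
# /etc/default/o2cb -- typical settings after enabling the cluster stack.
O2CB_ENABLED=true            # start the O2CB stack at boot
O2CB_BOOTCLUSTER=ocfs2       # name of the cluster in /etc/ocfs2/cluster.conf
O2CB_HEARTBEAT_THRESHOLD=31  # disk heartbeat iterations before a node is deemed dead
```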
2011 Mar 23
3
EXT4 Filesystem Mount Failed (bad geometry: block count)
Dear all, we are currently using RHEL 6 (kernel 2.6.32-71.el6.i686) and DRBD 8.3.10. DRBD is built from source and configured for 2-node testing with a simplex setup. Server 1: 192.168.13.131, hostname primary. Server 2: 192.168.13.132, hostname secondary. Finally, we found that drbd0 and drbd1 fail to mount. *Found some error messages
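The "bad geometry: block count" class of ext4 mount failure on DRBD commonly means the filesystem was created on the backing disk rather than on /dev/drbdX, so its block count exceeds the DRBD device, which is slightly smaller because DRBD reserves space at the end for internal metadata. A sketch of one common recovery, with the device name assumed; verify against your setup before running, since shrinking a filesystem is destructive if misapplied:

```shell
# Check the filesystem first (required before resize2fs can shrink it),
# then resize it to fit the DRBD device; with no size argument, resize2fs
# targets the current device size. Assumes ext4 on /dev/drbd0.
e2fsck -f /dev/drbd0
resize2fs /dev/drbd0
```

The cleaner long-term fix is to create the filesystem on /dev/drbdX in the first place, so it never sees the backing disk's full size.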
2012 Nov 29
0
Windows NLB crashing VM's
Hi all, we have a somewhat serious issue around NLB on Windows 2012 and Xen. First, let me describe our environment and then I'll let you know what's wrong. 2 x Debian Squeeze boxes running the latest provided AMD64 Xen kernel and about 100 GB of RAM. These boxes are connected via InfiniBand and DRBD is running over this (IPoIB). Each VPS runs on a mirrored DRBD device. Each
2010 Sep 27
1
RAID rebuild time and disk utilization....
So I'm in the process of building and testing a RAID setup, and it appeared to take a long time to build. I came across some settings for the minimum rebuild speed, and that helped, but it appears that one of the disks is struggling (100% utilization) vs the other one... I was wondering if anyone else has seen this and, if so, whether there is a solution for it... my 2 disks are 1 Samsung F3 1 TB /dev/sdb