Displaying 12 results from an estimated 12 matches for "512.00".
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual-port 10 Gb NIC
The drives are configured as one large
2016 May 27
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 05/25/2016 09:54 AM, Kelly Lesperance wrote:
> What we're seeing is that when the weekly raid-check script executes, performance nosedives and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec, the limit that's been set), but then quickly drops to about 4000K/sec. The dev.raid.speed_limit sysctls are at the defaults:
It looks like some pretty heavy writes are
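For reference, the throttles the post refers to are the md resync sysctls; a minimal sketch of inspecting and raising them (values are in KB/s and apply to all arrays on the host):

sysctl dev.raid.speed_limit_min            # default 1000
sysctl dev.raid.speed_limit_max            # default 200000
sysctl -w dev.raid.speed_limit_min=20000   # raise the floor to the 20000K/sec limit mentioned above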
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
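For readers unfamiliar with the flags, a quick gloss of the command above (standard sysstat options):

iostat -xdmc 1 10   # -x extended stats, -d device report, -m throughput in MB/s,
                    # -c CPU utilization; "1 10" = ten one-second samples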
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the raid check's speed immediately dropped to ~2000K/sec.
I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The raid check is now running between 100000K/sec and 200000K/sec, and has been for several
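A minimal sketch of the cache toggle described above, using hdparm's -W flag (note that the drive write-back cache is volatile, so this trades durability for speed unless the cache is battery- or flash-backed):

hdparm -W /dev/sda    # query the current write-cache setting
hdparm -W1 /dev/sda   # enable write-back caching
hdparm -W0 /dev/sda   # revert to write-through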
2005 Aug 02
0
LVM access problem
I'm using LVM to create CoW disks on my Xen box. The problem is that the VMs are not able to access the LVM devices after running for some hours.
I have the following LVM configuration (result of an lvscan):
ACTIVE Original '/dev/xen-vg/RH-EL.WS.4' [4.00 GB] inherit
ACTIVE Snapshot '/dev/xen-vg/RH-EL.WS.4-01' [200.00 MB] inherit
ACTIVE
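A snapshot volume like the one listed would typically have been created with lvcreate's -s flag; a minimal sketch, reusing the names from the lvscan output above:

lvcreate -s -L 200M -n RH-EL.WS.4-01 /dev/xen-vg/RH-EL.WS.4   # CoW snapshot of the origin LV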
2010 Dec 02
1
LaTeX tables for 3+ dimensional tables/arrays
I'm looking for an R method to produce LaTeX versions of tables for
table/array objects of 3 or more dimensions, which of necessity are
flattened to a 2D display, for example with ftable() or
vcd::structable, as shown below.
I'd be happy to settle for a flexible solution for the 3D case.
> UCB <- aperm(UCBAdmissions, c(2, 1, 3))
> ftable(UCB)
Dept A B
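The flattening itself can be reproduced as a shell one-liner, as a quick sanity check (UCBAdmissions ships with R's built-in datasets package):

Rscript -e 'UCB <- aperm(UCBAdmissions, c(2, 1, 3)); print(ftable(UCB))'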
2007 Jun 10
6
Problem getting lvm to work
Hi,
I have problems getting LVM to work on my domU. The dom0 is a CentOS 5
installed on LVM. I've added two extra LVs, one for root and one for
swap.
[root@surr log]# lvscan
ACTIVE '/dev/VolGroup00/dom0vol' [3.91 GB] inherit
ACTIVE '/dev/VolGroup00/dom0swap' [1.94 GB] inherit
ACTIVE
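For reference, two such LVs would typically be carved out along these lines (a sketch; the names and sizes here are illustrative, not from the original post):

lvcreate -L 4G -n domUroot VolGroup00   # root filesystem for the guest
lvcreate -L 1G -n domUswap VolGroup00   # swap for the guest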
2005 Oct 20
0
lvm over software raid error
Came across a strange problem:
I have a software RAID 1 array with an LV sitting on top. When I try to
mirror an existing LV to a new LV with the tar command, I get these
error messages:
mount /dev/vg00/base.centos /mnt/orig
mount /dev/vg00/test /mnt/new
cd /mnt/orig
tar cf - ./ | (cd /mnt/new; tar xf -)
... File shrank by 406815 bytes; padding with zeros
... Read error at byte 49152, reading 10240 bytes:
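"File shrank" from GNU tar means the file got smaller while tar was reading it, which usually points at the source still being written to. One way to rule that out (an assumption here, not a diagnosis from the thread) is to copy from a read-only LVM snapshot instead:

lvcreate -s -L 1G -n base.snap /dev/vg00/base.centos   # point-in-time snapshot of the origin
mount -o ro /dev/vg00/base.snap /mnt/orig              # copy from the frozen view instead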
2009 Sep 23
1
steps to add a new physical disk to existing LVM setup in new centos box?
Not a CentOS-specific question here, but if anyone can save me from
shooting myself in the foot, I would appreciate any pointers. I have an
older CentOS 4.7 box that I recently replaced with a newer one running
CentOS 5.3. I'd like to add the hard disk from the older box to the
newer one before I scrap it for good. I don't care about the data on
it; I would just like the extra drive
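The usual sequence for folding a spare disk into an existing volume group looks like this (a sketch; /dev/sdb and the VG/LV names are assumptions, and pvcreate destroys whatever is on the disk):

pvcreate /dev/sdb                            # initialize the old drive as a physical volume
vgextend VolGroup00 /dev/sdb                 # add it to the existing volume group
lvextend -L +200G /dev/VolGroup00/LogVol00   # grow a logical volume into the new space
resize2fs /dev/VolGroup00/LogVol00           # grow the filesystem to match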
2008 Aug 13
3
DRBD 8.2 crashes CentOS 5.2 on rsync from remote host
I've got a pair of HA servers I'm trying to get into production.
Here are some specs:
Xeon X3210 Quad Core (aka Core 2 Quad) 2.13 GHz (four logical
processors, no Hyper-Threading)
4 GB memory
Hardware (3ware) RAID 1 mirror, 2 x Seagate 750 GB SATA2
650 GB DRBD partition running on top of an LVM2 partition.
CentOS 5.2 2.6.18-92.1.6.el5.centos.plus
DRBD 8.2 (drbd82-8.2.6-1.el5.centos)
Kernel
2003 May 09
3
rsync of symbolic links bug
I am seeing a problem with the way rsync copies symbolic links.
Here is a simple example to duplicate the issue:
prompt> # Create a test case to show rsync of symbolic link problem
prompt> mkdir foo bar
prompt> touch foo/file1 bar/file2
prompt> ln -s bar/file2 file
prompt> rsync -av . ../duplicate
building file list ... done
created directory ../duplicate
./
bar/
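For what it's worth, -a implies -l, so rsync recreates the symlink itself rather than the file it points to; if the intent is to copy the referent, -L changes that (offered as a guess, since the excerpt cuts off before the actual failure):

rsync -avL . ../duplicate   # -L / --copy-links transfers the file a symlink points to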
2008 Jun 25
6
dm-multipath use
Are folks in the CentOS community successfully using device-mapper-multipath?
I am looking to deploy it for error handling on our iSCSI setup, but there
seems to be little traffic about this package on the CentOS forums, as far
as I can tell, and there seem to be a number of small issues, based on my
reading of the dm-multipath developer lists and related resources.
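On a CentOS 5-era box, a minimal smoke test after installing the package would look something like this (a sketch; the real work is tailoring /etc/multipath.conf to the iSCSI targets):

chkconfig multipathd on    # start multipathd at boot
service multipathd start   # start it now
multipath -ll              # show the discovered multipath topology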
-geoff
Geoff Galitz
Blankenheim