search for: 128.00

Displaying 17 results from an estimated 17 matches for "128.00".

2010 Sep 27
1
RAID rebuild time and disk utilization....
So I'm in the process of building and testing a RAID setup, and it appeared to take a long time to build. I came across some settings for raising the minimum rebuild speed, and that helped, but it appears that one of the disks is struggling (100% utilization) vs the other one... I was wondering if anyone else has seen this and, if so, is there a solution for it... my 2 disks are 1 Samsung F3 1TB /dev/sdb
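The minimum-speed settings mentioned above are almost certainly the md rebuild throttles, which bound resync speed per device. A minimal sketch, with illustrative values in KB/s:

    # Raise the floor and ceiling on md resync/rebuild speed (KB/s per device).
    sysctl dev.raid.speed_limit_min=50000
    sysctl dev.raid.speed_limit_max=200000
    # Watch rebuild progress and per-disk utilization side by side.
    cat /proc/mdstat
    iostat -x 1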
2013 Apr 05
0
DRBD + Remus High IO load frozen
Dear all, I have installed DRBD 8.3.11 compiled from source. However, the backing block device freezes under high IO load. I use Remus for high availability, and checkpointing is driven by Remus every 400ms. Watching iostat, the idle CPU drops sharply at each checkpoint, and when it reaches 0% idle the local backing device freezes and damages the
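To catch the pattern described here (idle CPU collapsing in step with the 400ms checkpoints), watching CPU and device stats at a one-second interval is the obvious first check. A minimal sketch:

    # avg-cpu %idle plus extended per-device stats, refreshed each second;
    # look for %idle dropping toward 0 as each Remus checkpoint fires.
    iostat -x 1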
2003 Aug 13
4
Question on --include-from option.
Hello list, I am running rsync 2.5.5 on some Solaris 7 and 8 boxes. I'd like to sync different directories from one box to another. This is my include file:

    ppukweb2% more rsync-include-file
    /tmp/loris/testrsync1
    /tmp/loris/testrsync2
    /tmp/loris/testrsync3

This is the command I run:

    rsync -azv -e ssh --stats --include-from=/tmp/rsync-include-file ppukweb8:/tmp/loris

and this is the
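A common gotcha with --include-from is that its entries are filter patterns, not a list of files to transfer: without a closing exclude rule nothing is filtered out, and with one, directory contents need their own include rules. A hedged sketch of filter rules that should match the intent here ('**' matches across slashes; later rsync releases also offer --files-from for exactly this use case):

    # Anchor the source at /tmp/loris/ so patterns are relative to it.
    rsync -azv -e ssh --stats \
      --include='/testrsync1/' --include='/testrsync1/**' \
      --include='/testrsync2/' --include='/testrsync2/**' \
      --include='/testrsync3/' --include='/testrsync3/**' \
      --exclude='*' \
      ppukweb8:/tmp/loris/ /tmp/loris/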
2011 Oct 09
1
Btrfs High IO-Wait
Hi, I have high IO-wait on the OSDs (ceph); the OSDs are running a v3.1-rc9 kernel. I also experience high IO rates, around 500 IO/s, reported via iostat:

    Device:  rrqm/s  wrqm/s  r/s   w/s   rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
    sda      0.00    0.00    0.00  6.80  0.00   62.40  18.35     0.04      5.29   0.00     5.29     5.29   3.60
    sdb
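Output in that shape comes from sysstat's extended device stats; for reference, a minimal sketch of the command (interval assumed):

    # Extended per-device stats in kB, one-second interval; a single
    # device pinned near 100 %util stands out immediately.
    iostat -x -k 1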
2018 Apr 11
2
Unreasonably poor performance of replicated volumes
Hello everybody! I have 3 gluster servers (gluster 3.12.6, CentOS 7.2; those are actually virtual machines located on 3 separate physical XenServer 7.1 servers). They are all connected via an InfiniBand network. iperf3 shows around 23 Gbit/s of network bandwidth between each pair of them. Each server has 3 HDDs put into a 3-way striped thin pool (LVM2) with a logical volume created on top of it, formatted
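The bandwidth figure quoted above comes from iperf3; a minimal sketch of that measurement between two of the servers (hostname hypothetical):

    # On the receiving server:
    iperf3 -s
    # On the sending server; -t 30 runs a 30-second test.
    iperf3 -c gluster1 -t 30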
2018 Apr 12
0
Unreasonably poor performance of replicated volumes
Guess you went through the user lists and tried something like this already: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html I have the same exact setup, and below is as far as it went after months of trial and error. We all have roughly the same setup and the same issue with this - you can find posts like yours on a daily basis. On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva
2002 Sep 24
3
Samba performance issues
Hi all, We are implementing samba-ldap to act as an NT PDC and are seeing performance problems. We have a 1 GHz, 3 GB RAM, 36 GB box that is running samba-2.2.5 and openldap-2.0.23 under Red Hat 7.3 with kernel 2.4.18-3. Clients are all Win2k SP3. All the LDAP requests go to the localhost interface. The box is acting as the PDC for the domain, and also sharing disk space and printers. When we get
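A frequent culprit in samba-ldap PDC setups of this era was missing attribute indexes in slapd.conf, which turned every logon lookup into a full database scan. A hedged sketch (the 'rid' attribute assumes the samba 2.2 LDAP schema; run slapindex after adding lines like these):

    # slapd.conf: index the attributes samba queries on every lookup.
    index objectClass eq
    index uid,uidNumber,gidNumber eq
    index rid eq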
2013 Sep 22
1
Question on weird output from a Compaq R3000
Hi All, I put my Compaq R3000 UPS on NUT. Every once in a while the battery alarm light on the front of the UPS turns on, maybe once every couple of days or so. When that happens I get the following output in /var/log/messages:

    Sep 22 01:51:46 mail upscode2[90734]: Unknown response to UPDS: .20 MOUL1
    Sep 22 01:51:46 mail upscode2[90734]: Unknown response to UPDS: 0119.20 MOIL1
    Sep 22
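To see what NUT itself reads from the UPS when the alarm light comes on, dumping the driver's variables is the usual first step. A minimal sketch (UPS name hypothetical, matching a ups.conf entry):

    # List every variable the upscode2 driver exposes; compare the
    # battery.* values against the times the alarm fires.
    upsc r3000@localhost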
2018 Apr 13
1
Unreasonably poor performance of replicated volumes
Thanks a lot for your reply! You guessed it right, though - mailing lists, various blogs, documentation, videos, and even source code at this point. Changing some of the options does make performance slightly better, but nothing particularly groundbreaking. So, if I understand you correctly, no one has yet managed to get acceptable performance (relative to underlying hardware capabilities) with
2019 Oct 28
1
NFS shutdown issue
Hi all, I have an odd interaction on a CentOS 7 file server. The basic setup is a minimal 7.x install. I have 4 internal drives (/dev/sd[a-d]) configured in a RAID5 and mounted locally on /data. This is exported via NFS to ~12 workstations which use the exported file systems for /home. I have an external drive connected via USB (/dev/sde) and mounted on /rsnapshot. I use rsnapshot to back up
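For context, an export of this shape is one line in /etc/exports; a hedged sketch with a hypothetical workstation subnet:

    # /etc/exports: share the RAID5-backed /data with the workstations.
    /data 192.168.1.0/24(rw,sync,no_subtree_check)
    # Apply changes without restarting NFS:
    exportfs -ra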
2004 Apr 08
4
recommended SSL-friendly crypto accelerator
Hi, I'm pondering building my own SSL accelerator out of a multi-CPU FreeBSD system and a crypto accelerator. What's the recommended hardware crypto accelerator card these days? Thanks, ==ml -- Michael Lucas mwlucas@FreeBSD.org, mwlucas@BlackHelicopters.org Today's chance of throwing it all away to start a goat farm: 49.1% http://www.BlackHelicopters.org/~mwlucas/
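On FreeBSD, a card with crypto(4) support surfaces through OpenSSL's cryptodev engine, so candidate hardware can be benchmarked before committing to it. A hedged sketch, assuming an OpenSSL build that includes that engine:

    # Software baseline vs. the accelerator for RSA handshake math.
    openssl speed rsa1024
    openssl speed -engine cryptodev rsa1024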
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:

    [root@r2k1 ~]# iostat -xdmc 1 10
    Linux 3.10.0-327.13.1.el7.x86_64 (r2k1)  05/27/16  _x86_64_  (32 CPU)
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped to ~2000K/sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
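The two knobs in play here are the drive write cache and md's consistency check; a minimal sketch of both (md device name hypothetical):

    # Show, then enable, the drive write-back cache (as in the post).
    hdparm -W /dev/sda
    hdparm -W1 /dev/sda
    # Kick off an md consistency check and monitor its speed.
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat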
2017 Sep 05
0
Slow performance of gluster volume
OK my understanding is that with preallocated disks the performance with and without shard will be the same. In any case, please attach the volume profile[1], so we can see what else is slowing things down. -Krutika [1] - https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi
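The linked guide boils down to two profile subcommands; a minimal sketch (volume name hypothetical):

    # Start collecting per-brick latency and op counters, reproduce
    # the slow workload, then dump the stats.
    gluster volume profile myvol start
    gluster volume profile myvol info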
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:

    2x E5-2650
    128 GB RAM
    12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
    Dual-port 10 GB NIC

The drives are configured as one large
2017 Sep 05
3
Slow performance of gluster volume
Hi Krutika, I already have a preallocated disk on the VM. Now I am checking performance with dd on the hypervisors which have the gluster volume configured. I also tried several values of shard-block-size and I keep getting the same low write performance. Enabling client-io-threads did not have any effect either. The version of gluster I am using is glusterfs 3.8.12, built on May 11 2017
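For a comparable measurement, the dd test and the option being tuned look roughly like this (mount point and volume name hypothetical; oflag=direct bypasses the page cache so the number reflects the volume, not RAM):

    # Write 1 GiB through the gluster mount, bypassing the page cache.
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 oflag=direct
    # The shard size option being varied:
    gluster volume set myvol features.shard-block-size 64MB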
2004 Dec 09
1
resize2fs on LVM on MD raid on Fedora Core 3 - inode table conflicts in fsck
Hi. I'm attempting to set up a box here to be a file server for all my data. I'm attempting to resize an ext3 partition to demonstrate this capability to myself before fully committing to this system as the primary data storage. I'm having some problems resizing an ext3 filesystem after I've resized the underlying logical volume. Following the ext3 resize, fsck spits out lots
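For comparison, the usual grow sequence for ext3 on LVM in that era, with the fsck before the offline resize (volume and mount names hypothetical):

    # Grow the LV, check the filesystem, then grow it to fill the LV.
    umount /data
    lvextend -L +10G /dev/vg0/data
    e2fsck -f /dev/vg0/data
    resize2fs /dev/vg0/data
    mount /dev/vg0/data /data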