similar to: Bugs in 3Ware 9.2? -- FWD: [Bug 121434] Extremely high iowait with 3Ware array and moderate disk activity

Displaying 20 results from an estimated 1000 matches similar to: "Bugs in 3Ware 9.2? -- FWD: [Bug 121434] Extremely high iowait with 3Ware array and moderate disk activity"

2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list, I thought I'd just share my experiences with this 3Ware card, and see if anyone might have any suggestions. System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID 1 plus 2 hot spare config. The array is properly initialized, write cache is on, as is queueing (and supported by the drives). StoreSave
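The unit's cache and StorSave settings can be inspected from the OS with 3ware's tw_cli; a minimal sketch, assuming the controller is /c0 and the array unit is /u0 (adjust the IDs to match your system):

    # show controller summary and per-unit settings (cache, StorSave policy, queueing)
    tw_cli /c0 show
    tw_cli /c0/u0 show all
    # illustrative only: relax the StorSave policy if latency matters more than protection
    tw_cli /c0/u0 set storsave=balance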
2005 Dec 31
1
3ware error
I am getting this error:- 3w-9xxx: scsi0: ERROR: (0x03:0x0104): SGL entry has illegal length:address=0x3744A000, length=0xFF, cmd=X. In dmesg it finds the controller OK:- 3ware 9000 Storage Controller device driver for Linux v2.26.02.001. ACPI: PCI interrupt 0000:01:06.0[A] -> GSI 11 (level, low) -> IRQ 11 scsi0 : 3ware 9000 Storage Controller 3w-9xxx: scsi0: Found a 3ware 9000 Storage
2010 Mar 18
0
Extremely high iowait
Hello, We have a 5 node OCFS2 volume backed by a Sun (Oracle) StorageTek 2540. Each system is running OEL5.4 and OCFS2 1.4.2, using device-mapper-multipath to load balance over 2 active paths. We are using the default multipath configuration for our SAN. We are observing iowait time between 60% - 90%, sustaining at over 80% as I'm writing this, driving load averages to >25 during an rsync
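With iowait that high it helps to confirm whether the wait is spread across both paths or concentrated on one device; a quick sketch using standard tools (device names will differ):

    # extended per-device statistics (await, %util), repeating every 5 seconds
    iostat -xk 5
    # show the multipath maps and which paths are currently active
    multipath -ll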
2007 Nov 28
2
kernel-2.6.9-55.0.12.EL not booting (lvm? 3ware?)
Hi, Here's the setup. Original Kernel: kernel-smp-2.6.9-42.EL, booting fine. Error with kernel-2.6.9-55.0.12.EL (and kernel-2.6.9-55.0.12.ELsmp) No volume groups found Volume group "VolGroup00" not found. ERROR: /bin/lvm exited abnormally! The system has a 3ware 9000 Storage Controller Model: 9500S-4LP Firmware FE9X 2.08.00.006, BIOS BE9X 2.03.01.052, Ports: 4. Anyone seen
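A frequent cause of "Volume group not found" after a kernel update is an initrd built without the 3w-9xxx module; a hedged sketch of regenerating it on CentOS 4 (the image path and version mirror the failing kernel above):

    # rebuild the initrd for the new kernel, forcing the 3ware driver in
    mkinitrd --with=3w-9xxx /boot/initrd-2.6.9-55.0.12.EL.img 2.6.9-55.0.12.EL
    # then make sure grub.conf points at the regenerated image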
2007 Nov 02
1
OT: RH 4/5 and 3Ware 9500 firmware upgrades
I've just noticed that 3Ware has a more current version of the firmware for the 9500S than I have on several machines. All of mine are running: 3w-9xxx: scsi0: Firmware FE9X 3.04.00.005, BIOS BE9X 3.04.00.002, Ports: 8. and the latest rev appears to be 3.08. I'm wondering if there is any compelling reason to upgrade or just let these sleeping dogs lie. :) Any sage (or other)
2005 Aug 31
1
[OT] 3Ware 9.2.1.1 firmware for 9000 series released
Here are the version numbers I get after installing the 9.2.1.1 release: Model 9500S-4LP Firmware FE9X 2.08.00.003 Driver 2.26.03.019fw BIOS BE9X 2.03.01.052 Boot Loader BL9X 2.02.00.001 A bit confusing. I thought I saw a map of code set releases to various version numbers but I couldn't find it on 3Ware's site. On Sun, 28 Aug 2005 at 9:20am, Bryan J. Smith wrote >
2005 Nov 17
2
hotspares possible?
Hi, I could not find any hints about hotspares with zfs. Are hotspares not possible? Thanks for this really great filesystem Willi
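For reference, hot-spare support did land in ZFS later; a sketch of the syntax once it is available (pool and disk names are placeholders):

    # attach a disk to the pool as a hot spare
    zpool add tank spare c1t3d0
    # spares appear in their own section of the status output
    zpool status tank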
2006 Oct 20
0
3ware 9550SXU-4LP performance
Hi, I used xen-3.0.2-2 (2.6.16) before and the performance of my 3ware 9550SXU-4LP wasn't too bad, but now with 3.0.3.0 (2.6.16.29) throughput decreased by about 10MB/sec in write performance. sync; ./bonnie++ -n 0 -r 512 -s 20480 -f -b -d /mnt/blabla -u someuser now (2.6.16.29): Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per
2009 Aug 06
0
High iowait in dom0 after creating a new guest, with file-based domU disk on GFS
Hopefully I summed up the gist of the problem in the subject line. ;) I have a GFS cluster with ten Xen 3.0 dom0s sharing an iSCSI LUN. There are on average 8 domUs running on each Xen server. The dom0 on each server is hard-coded to 2 CPUs and 2GB of RAM with no ballooning, and has 2GB of partition-based swap. When creating a new domU on any of the Xen servers, just after the completion of
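One hedged guess for softening the write burst when a new file-backed disk is created is to allocate the image sparsely rather than writing it out with zeros; a sketch (path and size are placeholders):

    # create a 10 GB sparse image: seek past the end instead of writing data blocks
    dd if=/dev/zero of=/gfs/vm/newguest.img bs=1M count=0 seek=10240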
2008 May 08
7
100% iowait in domU with no IO tasks.
Hi. I logged into one of our domUs tonight and saw the following problem: # iostat -k 5 avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 0.00 100.00 0.00 0.00 Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0
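The columns shown are what sysstat's iostat prints; a sketch of the invocation plus the extended view (assuming the sysstat package is installed in the domU):

    # per-device throughput in kB, repeating every 5 seconds
    iostat -k 5
    # extended statistics (await, %util) usually make the stall more obvious
    iostat -xk 5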
2016 May 27
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 05/25/2016 09:54 AM, Kelly Lesperance wrote: > What we're seeing is that when the weekly raid-check script executes, performance nose dives, and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec - the limit that's been set), but then quickly drops down to about 4000K/Sec. dev.raid.speed sysctls are at the defaults: It looks like some pretty heavy writes are
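The md check speed is governed by the dev.raid sysctls mentioned above; a sketch of inspecting and adjusting them (the value is illustrative):

    # floor and ceiling for resync/check speed, in KB/s per device
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    # example: raise the floor so the check is not starved by regular I/O
    sysctl -w dev.raid.speed_limit_min=20000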
2016 May 25
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Kelly Lesperance wrote: > I've posted this on the forums at > https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 > - posting to the list in the hopes of getting more eyeballs on it. > > We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: > > 2x E5-2650 > 128 GB RAM > 12 x 4 TB 7200 RPM SATA drives connected to an HP
2016 May 25
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
What is the HBA the drives are attached to? Have you done a quick benchmark on a single disk to check if this is a raid problem or further down the stack? Regards, Dennis On 25.05.2016 19:26, Kelly Lesperance wrote: > [merging] > > The HBA the drives are attached to has no configuration that I'm aware of. We would have had to accidentally change 23 of them ... > > Thanks, >
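A sketch of the single-disk sanity check being suggested, read-only and with the page cache bypassed (/dev/sdX is a placeholder for one member disk):

    # raw sequential read of 4 GB from a single drive
    dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct
    # or the classic quick timing test
    hdparm -t /dev/sdX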
2016 May 25
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 5/25/2016 11:44 AM, Kelly Lesperance wrote: > The HBA is an HP H220. Oh. It's a very good idea to verify the driver is at the same revision level as the firmware. I'm not 100% sure how you do this under CentOS; my H220 system is running FreeBSD, and is at revision P20, both firmware and driver. HP's firmware, at least what I could find, was a fairly old P14 or something, so I
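On CentOS the driver revision can be read from the loaded mpt2sas module and the firmware from the boot log; a short sketch (the H220 is an LSI SAS2308-based card, per the follow-up below):

    # driver version reported by the running module
    cat /sys/module/mpt2sas/version
    modinfo mpt2sas | grep -i '^version'
    # firmware revision as logged when the HBA was probed
    dmesg | grep -i mpt2sas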
2016 May 25
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
LSI/Avago's web pages don't have any downloads for the SAS2308, so I think I'm out of luck wrt MegaRAID. Bounced the node, confirmed MPT Firmware 15.10.09.00-IT. HP Driver is v 15.10.04.00. Both are the latest from HP. Unsure why, but the module itself reports version 20.100.00.00: [root at r1k1 sys] # cat module/mpt2sas/version 20.100.00.00 On 2016-05-25, 3:20 PM, "centos-bounces at
2016 May 25
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Already done - they're not being very helpful, as we don't have a support contract, just standard warranty. On 2016-05-25, 4:27 PM, "centos-bounces at centos.org on behalf of m.roth at 5-cent.us" <centos-bounces at centos.org on behalf of m.roth at 5-cent.us> wrote: >Kelly Lesperance wrote: >> LSI/Avago's web pages don't have any downloads for the SAS2308, so I think
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Software RAID 10. Servers are HP DL380 Gen 8s, with 12x4 TB 7200 RPM drives. On 2016-06-01, 3:52 PM, "centos-bounces at centos.org on behalf of m.roth at 5-cent.us" <centos-bounces at centos.org on behalf of m.roth at 5-cent.us> wrote: >Kelly Lesperance wrote: >> I did some additional testing - I stopped Kafka on the host, and kicked >> off a disk check, and it
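A sketch of starting and watching a manual check on a Linux software-RAID array (md0 is a placeholder for the array device):

    # kick off a check pass
    echo check > /sys/block/md0/md/sync_action
    # watch progress and the current check speed
    cat /proc/mdstat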
2018 Jan 09
2
Samba 4.7.x IOWAIT / load average
Hi, I want to report a strange problem with samba 4.7.x and IO wait / CPU usage. I am using linux (archlinux, kernel 4.14.2) and have encountered a very strange bug with only samba 4.7.x (from 4.7.0 to the latest 4.7.4). With these versions CPU usage and IO WAIT are very high with a single CIFS client at a very low transfer rate (about 1MB/s). I rolled back to 4.6.9 through 4.6.12 and there is no problem with those versions.
2018 Jan 10
2
Samba 4.7.x IOWAIT / load average
Hi Jeremy, What do you need exactly? Thanks -- Christophe Yayon > On 10 Jan 2018, at 01:38, Jeremy Allison <jra at samba.org> wrote: > >> On Tue, Jan 09, 2018 at 03:27:17PM +0100, Christophe Yayon via samba wrote: >> Hi, >> >> I want to report a strange problem with samba 4.7.x and IO wait / CPU usage. >> >> I am using linux (archlinux, kernel