Displaying 20 results from an estimated 2000 matches similar to: "Any experiences with newer WD Red drives?"
2016 Mar 01
0
Any experiences with newer WD Red drives?
On 3/1/2016 9:53 AM, Emmanuel Noobadmin wrote:
> However, the latest C7 server I built ran into problems with them on
> an Intel C236 board (SuperMicro X11SSH) with tons of "ata bus error
> write fpdma queued". Googling on it threw up old suggestions to limit
> SATA link speed to 1.5Gbps using libata.force boot options and/or
> noncq. Lowering the link speed helped to
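The libata.force workaround quoted above is set on the kernel command line. A minimal sketch, assuming a GRUB2-style config and that the affected drive sits on ata port 1 (both are assumptions; match the port number against the ata number in your own dmesg output):

```shell
# /etc/default/grub -- append libata.force to the kernel command line.
# "1:1.5Gbps" caps the link speed on ata1 only; "1:noncq" disables NCQ there.
# Dropping the "1:" prefix applies the setting to every port.
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet libata.force=1:1.5Gbps,1:noncq"
```

After editing, regenerate the config with `grub2-mkconfig -o /boot/grub2/grub.cfg` and reboot for the options to take effect.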
2016 Mar 01
0
Any experiences with newer WD Red drives?
Emmanuel Noobadmin wrote:
> Might be slightly OT as it isn't necessarily a CentOS related issue.
>
> I've been using WD Reds as mdraid components which worked pretty well
> for non-IOPS intensive workloads.
>
> However, the latest C7 server I built ran into problems with them on
> an Intel C236 board (SuperMicro X11SSH) with tons of "ata bus error
> write
2016 Mar 01
0
Any experiences with newer WD Red drives?
> However, the latest C7 server I built ran into problems with them on
> an Intel C236 board (SuperMicro X11SSH) with tons of "ata bus error
> write fpdma queued". Googling on it threw up old suggestions to limit
> SATA link speed to 1.5Gbps using libata.force boot options and/or
> noncq. Lowering the link speed helped to reduce the frequency of the
> errors (from
2016 Mar 01
0
Any experiences with newer WD Red drives?
On 03/01/2016 09:53 AM, Emmanuel Noobadmin wrote:
>
> Since I'm likely to use Reds again, it is a bit of a concern. So
> wondering if I just happen to get an unlucky batch, or is there some
> incompatibility between the Reds and the Intel C236 chipset, or
> between the Red / C236 / CentOS 7 combo, or the unlikely chance WD has
> decided to do something with the firmware to make
2011 Jan 13
6
bug: kernel 2.6.37-12 READ FPDMA QUEUED
I've been trying to install a 2.6.37-12 kernel from kernel-ppa on one of
my Ubuntu machines without success.
It keeps giving errors like this:
[ 9.115544] ata9: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x10 frozen
[ 9.115550] ata9.00: failed command: READ FPDMA QUEUED
[ 9.115556] ata9.00: cmd 60/04:00:d4:82:85/00:00:1f:00:00/40 tag 0 ncq 2048 in
[ 9.115557]
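When FPDMA QUEUED errors like these appear, a useful first step is checking what NCQ and link-speed settings are actually in effect. A hedged diagnostic sketch; `/dev/sda` and `sda` are placeholder device names:

```shell
# Does the drive advertise NCQ, and what queue depth is the kernel using?
hdparm -I /dev/sda | grep -i queue
cat /sys/block/sda/device/queue_depth    # a value of 1 means NCQ is effectively off

# What link speed did each port negotiate? (1.5, 3.0 or 6.0 Gbps)
dmesg | grep -i 'SATA link up'
```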
2012 Jun 22
2
SATA errors in log
Hi,
I have a SATA PCIe 6Gbps 4 port controller card made by Startech. The
kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as
Marvell Technology Group Ltd. 88SE9123
I use it to provide extra SATA ports to a raid system.
The HDs are all "WD2003FYYS" and so run at 3Gbps on the 6Gbps controller.
However I am seeing lots of instances of errors like this
2012 Feb 29
7
Software RAID1 with CentOS-6.2
Hello,
Having a problem with software RAID that is driving me crazy.
Here's the details:
1. CentOS 6.2 x86_64 install from the minimal iso (via pxeboot).
2. Reasonably good PC hardware (i.e. not budget, but not server grade either)
with a pair of 1TB Western Digital SATA3 Drives.
3. Drives are plugged into the SATA3 ports on the mainboard (both drives and
cables say they can do 6Gb/s).
4.
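For reference, a minimal sketch of building the mirror described above with mdadm; the partition names are placeholders, and `--create` is destructive:

```shell
# Create a two-disk RAID1 mirror from placeholder partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the initial resync and confirm both members are active ([UU]).
cat /proc/mdstat

# Persist the array definition so it assembles cleanly at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```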
2010 Nov 12
6
xen guest not booting
Hi,
My xen guest stopped booting suddenly and giving me the below error
message. Any idea what is going wrong here? DOM 0 boots OK though.
ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
ata5.00: irq_stat 0x40000008
ata5.00: failed command: READ FPDMA QUEUED
ata5.00: cmd 60/00:00:cd:ee:36/02:00:09:00:00/40 tag 0 ncq 262144 in
res 51/40:72:5b:f0:36/d9:00:09:00:00/40 Emask
2012 Apr 17
10
Very Strange and Probably Obscure Problem
Hi...
I really hope that somebody can help me brainstorm this problem. Please let me describe my environment.
Environment
---------------
Fedora 16 x86_64(fully up-to-date)
Wine 1.5.1 32bit and noarch binaries only
Nvidia GPU - GTS 450
Nvidia Driver - 295.40 (including 32bit libraries)
I've installed Perfectworld International under wine. Everything was running fine until the most recent
2008 Jul 10
49
Supermicro AOC-USAS-L8i
On Wed, Jul 9, 2008 at 1:12 PM, Tim <tim at tcsac.net> wrote:
> Perfect. Which means good ol' supermicro would come through :) WOHOO!
>
> AOC-USAS-L8i
>
> http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm
Is this card new? I'm not finding it at the usual places like Newegg, etc.
It looks like the LSI SAS3081E-R, but probably at 1/2 the
2006 Apr 14
1
Ext3 and 3ware RAID5
I run a decent amount of 3ware hardware, all under centos-4. There seems
to be some sort of fundamental disagreement between ext3 and 3ware's
hardware RAID5 mode that trashes write performance. As a representative
example, one current setup is 2 9550SX-12 boards in hardware RAID5 mode
(256KB stripe size) with a software RAID0 stripe on top (also 256KB
chunks). bonnie++ results look
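One common mitigation for poor ext3 write performance on hardware RAID5 is aligning the filesystem to the stripe geometry at mkfs time. A sketch deriving the mke2fs hints from the setup described above (256KB chunks, 12-disk RAID5, so 11 data-bearing disks; the 4KB block size and device name are assumptions, and `stripe-width` requires a reasonably recent e2fsprogs):

```shell
# Derive mkfs.ext3 alignment hints from the RAID geometry in the post.
chunk_kb=256      # hardware RAID5 chunk (stripe unit) size
block_kb=4        # ext3 filesystem block size (assumed)
data_disks=11     # a 12-disk RAID5 has 11 data-bearing disks per stripe

stride=$((chunk_kb / block_kb))            # blocks per chunk
stripe_width=$((stride * data_disks))      # blocks per full stripe

echo "mkfs.ext3 -E stride=${stride},stripe-width=${stripe_width} /dev/sdX1"
```

Writes that fill whole stripes avoid RAID5's read-modify-write penalty, which is usually what "trashes" performance in this configuration.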
2012 Mar 07
2
hardware issues? driver issues?
Got a bunch of servers from Penguin. Supermicro m/b's H8QG6. We put a 3TB
drive in for additional workspace for the users, and some of them won't
read, others will go for weeks, then spit out DRDY errors. lshw shows the
controller as an ATI SB7x0/SB8x0/SB9x0 SATA.
I did notice that it shows
*-storage
description: SATA controller
product: SB7x0/SB8x0/SB9x0 SATA
2018 Jan 15
1
lshw in centos 7 withdrawn
Warren
Thanks for the thoughts. Even with 'dmesg', I
found nothing. The reboot got rid of the problem
and it continues to run perfectly in the same configuration.
I, too, have a slight dislike for external USB
disks, and much prefer internal drives for several reasons:
- Internal drives are protected by being inside a
tower and thus have less chance of falling or
being bumped than
2010 Sep 10
5
Traffic shaping on CentOS
I've been trying to do traffic shaping on one of my public servers and
after reading up, it seems like the way to do so is via tc/htb.
However, most of the documentation seems at least half a decade old
with nothing new recently.
Furthermore, trying to get documentation on tc filters turned up a
blank. man tc refers to a tc-filters (8) but trying to man that gives
a no such page/section
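For reference, a minimal tc/htb sketch of the kind the post is asking about; the device name, rates, and port match are all placeholder assumptions:

```shell
# HTB root qdisc: unclassified traffic falls into class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20

# Parent class caps total outbound bandwidth at 10mbit.
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
# Two child classes: each guaranteed 5mbit, may borrow up to the ceiling.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit ceil 10mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 5mbit ceil 10mbit

# u32 filter: steer traffic from TCP source port 80 into class 1:10.
tc filter add dev eth0 protocol ip parent 1: prio 1 u32 \
    match ip sport 80 0xffff flowid 1:10
```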
2010 Jul 10
4
Redundant LAN routing possible?
I've been reading that it's possible to set up a system with multiple
NIC to provide redundant internet connectivity such that it will
switch to a secondary connection if the primary ISP fails.
Is it possible in a similar way to setup redundant LAN routing? I read
that it is possible to aggregate/bond multiple NIC to stackable
switches that support link aggregation and redundancy. But if
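On CentOS the usual building block for this is the kernel bonding driver. A hedged config sketch; the address, device names, and failover mode are placeholder assumptions:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (CentOS-style sketch).
# active-backup fails over between member NICs; miimon is the link-poll
# interval in milliseconds.
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"
```

Each member NIC then gets its own ifcfg file with `MASTER=bond0` and `SLAVE=yes`; modes like 802.3ad additionally require switch-side link aggregation support.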
2013 Aug 12
1
BTRFS corruptions counter
Hi,
We decided to give BTRFS a try. We find it very flexible and generally
fast. However last week we had a problem with a Marvell controller in
AHCI and one BTRFS formatted hard drive. We isolated the problem by
relocating the disk to an Intel controller (SATA controller: Marvell
Technology Group Ltd. 88SE9172 SATA 6Gb/s Controller (rev 11) had a
lot of problems and I managed to overcome them by
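Btrfs keeps per-device error counters that are useful for exactly this kind of controller-vs-disk isolation. A sketch, assuming a filesystem mounted at the placeholder path /mnt/data:

```shell
# Per-device counters, including corruption_errs and generation_errs.
btrfs device stats /mnt/data

# Scrub re-reads and verifies checksums across the whole filesystem.
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data
```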
2011 Jun 08
3
High system load but low cpu usage
I'm trying to figure out what's causing an average system load of 3+
to 5+ on an Intel quad core. The server has 2 KVM guests
(assigned 1 core and 2 cores) that are each lightly loaded (0.1~0.4).
Both guest/host are running 64bit CentOS 5.6
Originally I suspected maybe it's i/o but on checking, there is very
little i/o wait % as well. Plenty of free disk space available on all
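One common explanation for high load despite idle CPUs is tasks blocked in uninterruptible (D) sleep, which count toward the load average without consuming CPU. A diagnostic sketch (`iostat` comes from the sysstat package):

```shell
# List tasks stuck in uninterruptible sleep and what they are waiting on.
ps -eo state,pid,comm,wchan | awk '$1 == "D"'

# Per-device utilisation and wait times, refreshed every second.
iostat -x 1
```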
2011 Jun 09
4
Possible to use multiple disk to bypass I/O wait?
I'm trying to resolve an I/O problem on a CentOS 5.6 server. The
process basically scans through Maildirs, checking for space usage and
quota. Because there are a hundred-odd user folders and several tens of
thousands of small files, this sends the I/O wait % way high. The
server hits a very high load level and stops responding to other
requests until the crawl is done.
I am wondering if I add
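Independent of adding disks, the crawl's impact on other requests can be reduced by running it at idle I/O priority. A sketch; `quota_scan.sh` is a hypothetical name for the crawl job:

```shell
# Idle I/O class (-c3): the scan only gets disk time when nothing else
# wants it; nice -n 19 does the same for CPU scheduling.
ionice -c3 nice -n 19 ./quota_scan.sh
```

Note the idle class requires the CFQ I/O scheduler, the default on CentOS 5.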
2011 Jun 23
4
Jumbo frames problem with Realtek NICs?
I was trying to do some performance testing between using iSCSI on the
host as a diskfile to a guest vs the VM guest using the iSCSI device
directly.
However, in the process of trying to establish a baseline performance
figure, I started increasing the MTU settings on the PCI-express NICs
with RTL8168B chips.
First bottleneck was discovering the max MTU allowed on these is 7K
instead of 9K but
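For reference, the MTU is raised per interface with ip link; `eth0` and the 7000-byte value (just under the chip's reported ~7K ceiling) are assumptions:

```shell
# Raise the MTU on a placeholder interface and verify it took effect.
ip link set dev eth0 mtu 7000
ip link show dev eth0 | grep mtu
```

For jumbo frames to help end to end, every hop on the path (both NICs and any switches between them) must accept the same frame size.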