similar to: Native Command Queueing

Displaying 20 results from an estimated 3000 matches similar to: "Native Command Queueing"

2007 Nov 13
4
Need advice on storage
Hi all, I have a CentOS 4.5 server running on a workstation mainboard (PCI slots only). We now have one 200 GB IDE disk dedicated to e-mail server storage. We use Communigate Pro and the server has 45 Outlook clients with the MAPI connector (all mailboxes on the server). When a user opens Outlook, a refresh of the local cache is performed for his data. There is a big "Public"
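Before adding hardware for a case like this, it is worth confirming that the single IDE disk really is the bottleneck. A minimal sketch using iostat from the sysstat package; sustained high %util and long await values point at a saturated disk:

    # extended per-device statistics, refreshed every 10 seconds
    iostat -x 10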
2007 Jul 07
12
ZFS Performance as a function of Disk Slice
First Post! Sorry, I had to get that out of the way to break the ice... I was wondering if it makes sense to zone ZFS pools by disk slice, and if it makes a difference with RAIDZ. As I'm sure we're all aware, the end of a drive is half as fast as the beginning (where the zoning stipulates that the physical outside is the beginning and going towards the spindle increases
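The outer-versus-inner-track speed difference is easy to measure before committing to a slicing scheme. A hedged sketch with dd; the device name is illustrative, and the skip value should be adjusted to land near the end of the drive:

    # sequential read from the start of the disk (outer tracks)
    dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k count=1024
    # sequential read from near the end (inner tracks); on a 500 GB drive
    # skip=475000 with bs=1024k lands close to the spindle
    dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k count=1024 skip=475000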
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to create a RAID 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install. Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (RAID 0 and RAID 1) fine, but the RAID 10 array won't show up. Looking through the logs
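For reference, building the same array by hand from a shell (e.g. the installer's console on tty2) looks like this; device names are illustrative:

    # make sure the module that anaconda lacks is loaded
    modprobe raid10
    # four-disk RAID 10 across the first partitions
    mdadm --create /dev/md3 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # verify the array assembled
    cat /proc/mdstat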
2008 Mar 11
1
Question on SATA DVD using centos 5.1
On my machine I have:

SATA0: HD
SATA1: HD (these two drives are set as RAID1)
SATA2: HD (extra)
SATA3: DVD
SATA4: external USB disk

A snip from dmesg shows the ATAPI device being detected:

ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe400 irq 10
ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe408 irq 10
ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata3.00: ATAPI: PIONEER BD-ROM
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
> On Jan 6, 2015, at 5:50 PM, Les Mikesell <lesmikesell at gmail.com> wrote:
>
> On Tue, Jan 6, 2015 at 6:37 PM, Gary Greene <ggreene at minervanetworks.com> wrote:
>>
>> Almost every controller and drive out there now lies about what is and isn't flushed to disk, making it nigh on impossible for the kernel to reliably know 100% of the time that the
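One blunt workaround when a drive's write cache cannot be trusted is to turn the volatile cache off entirely and accept the performance hit; a sketch with hdparm, device name illustrative:

    # show the current write-cache setting
    hdparm -W /dev/sda
    # disable the volatile write cache so acknowledged writes are on platter
    hdparm -W 0 /dev/sda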
2009 Oct 09
6
disk I/O problems and Solutions
Hey folks, CentOS / PostgreSQL shop over here. I'm hitting 3 of my favorite lists with this, so here's hoping that the BCC trick is the right way to do it :-) We've just discovered thanks to a new Munin plugin http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html that our production DB is completely maxing out in I/O for about a 3 hour stretch from
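A natural next step after spotting the window in Munin is to attribute the I/O to a process. A sketch using pidstat from sysstat (needs a kernel with per-process I/O accounting) or iotop:

    # per-process read/write rates, refreshed every 5 seconds
    pidstat -d 5
    # or interactively, showing only processes currently doing I/O
    iotop -o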
2006 Dec 15
3
ZFS works in waves
A little back story: I have a Norco DS-1220, a 12-bay SATA box, connected to eSATA (SiI3124) via PCI-X. Two drives are straight connections; the other two ports go to 5x multipliers within the box. My needs/hopes for this were to use 12 500GB drives and ZFS to make a very large & simple data dump spot on my network for other servers to rsync to daily & use zfs snapshots for
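When throughput arrives in waves like this, per-vdev statistics can show whether the drives behind the port multipliers stall while the direct-attached ones keep streaming. A sketch; the pool name is illustrative:

    # per-vdev bandwidth and operations, sampled every 5 seconds
    zpool iostat -v tank 5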
2013 Feb 23
1
Old ICH7 SATA-2 question
Hello there, I've got a question about SATA. I've got an ASUS P5GC-MX/1333 with ICH7 (SATA2 support) and a few HDDs with SATA2.

system:
uname -a
FreeBSD diablo.miekoff.local 9.1-STABLE FreeBSD 9.1-STABLE #1 r246666: Tue Feb 12 00:19:07 MSK 2013 root at diablo.miekoff.local:/usr/obj/usr/src/sys/DIABLO64 amd64

camcontrol:
camcontrol identify ada2
pass2: <ST3500320AS SD1A> ATA-8 SATA 2.x
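To see what link speed and transfer mode were actually negotiated on the ICH7, the identify output and the probe messages are usually enough; a sketch using the device name from the post:

    # identify data includes the supported/enabled SATA features
    camcontrol identify ada2
    # the probe log shows the negotiated transfer rate, e.g. 300.000MB/s
    dmesg | grep ada2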
2006 Nov 25
3
SATA Native Command Queuing
Does CentOS support NCQ on SATA drives? If so, is there something I must do to turn this support on? Matt
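On libata-based kernels (CentOS 5 and later), NCQ status can be read straight from sysfs; a hedged sketch, device name illustrative:

    # a queue depth greater than 1 means NCQ is active; 1 means off/unsupported
    cat /sys/block/sda/device/queue_depth
    # the driver also logs the NCQ depth when the disk is detected
    dmesg | grep -i ncq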
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, following the discussion about which SSD to use as a ZIL drive, I stumbled across this article, which discusses short stroking for increasing IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now I am wondering if using a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
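For what it's worth, attaching such a mirrored pair as a dedicated log device is a one-liner; pool and device names below are illustrative:

    # add a mirrored ZIL (slog) to an existing pool
    zpool add tank log mirror c1t0d0 c1t1d0
    # confirm the log vdev shows up
    zpool status tank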
2010 Sep 14
5
IOwaits over NFS
Hello. We have a number of Xen 3.4.2 boxes which have constant iowaits at around 10%, with spikes up to 100%, when accessing data over NFS. We have been unable to nail down the issue. Any advice?

System info:
release : 2.6.18-194.3.1.el5xen
version : #1 SMP Thu May 13 13:49:53 EDT 2010
machine : x86_64
nr_cpus : 16
nr_nodes
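Two quick checks before digging deeper: RPC retransmission counters and the mount options actually in effect. A sketch:

    # a high retrans count relative to calls suggests network or server trouble
    nfsstat -rc
    # verify rsize/wsize, protocol, and sync/async per mount
    grep nfs /proc/mounts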
2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of RAM. It has 2 Western Digital 1.5TB SATA2 drives in RAID1.

[root at server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root at server ~]#

It's barebones
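If I/O seems sluggish right after an install like this, check whether the initial RAID1 resync is still running before suspecting the disks; a sketch:

    # resync progress appears as a percentage and ETA per md device
    cat /proc/mdstat
    # the kernel throttles resync between these limits (KB/s)
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max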
2018 May 03
0
Finding performance bottlenecks
It worries me how many threads talk about low performance. I'm about to build out a replica 3 setup and run Ovirt with a bunch of Windows VMs. Are the issues Tony is experiencing "normal" for Gluster? Does anyone here have a system with Windows VMs and good performance?

Vincent Royer
778-825-1057
http://www.epicenergy.ca/
SUSTAINABLE MOBILE ENERGY SOLUTIONS
2009 Dec 02
7
Slightly OT: FakeRaid or Software Raid
I have had great luck with nvidia fakeraid on RAID1, but I see there are preferences for software raid. I have very little hands-on experience with full Linux software RAID, and that was about 14 years ago. I am trying to determine which to use on a rebuild in a "standard" CentOS/Xen environment. It seems to me that FakeRaid is/can be completely taken care of in dom0 by dmraid, whereas with
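When weighing the two approaches, it helps to see what metadata is already on the disks; a sketch, device names illustrative:

    # list RAID sets described by BIOS fakeraid metadata
    dmraid -r
    # list arrays that Linux software RAID (md) can see
    mdadm --examine --scan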
2018 May 01
3
Finding performance bottlenecks
On 01/05/2018 02:27, Thing wrote:
> Hi,
>
> So is it KVM or VMware as the host(s)? I basically have the same setup,
> i.e. 3 x 1TB "raid1" nodes and VMs, but 1Gb networking. I do notice with
> vmware using NFS that disk was pretty slow (40% of a single disk), but this
> was over 1Gb networking, which was clearly saturating. Hence I am moving
> to KVM to use glusterfs
2013 Sep 02
1
heavy IO load when working with sparse files (centos 6.4)
Dear List, We have noticed a variety of reproducible conditions working with sparse files on multiple servers under load with CentOS 6.4. The short story is that processes that read / write sparse files with large "holes" can generate an IO storm. Oddly, this only happens with holes and not with the sections of the files that contain data. We have seen extremely high IO load for
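For anyone trying to reproduce this, a file that is one large hole is trivial to create; a minimal sketch:

    # create a 10 GB file with no blocks allocated
    dd if=/dev/zero of=sparse.img bs=1 count=0 seek=10G
    # compare apparent size with actual allocation
    ls -lh sparse.img; du -h sparse.img
    # read through the hole while watching the I/O load elsewhere
    dd if=sparse.img of=/dev/null bs=1M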
2007 Oct 23
6
Any Xen kernel based on something newer than 2.6.18 ?
Hello all, I'm trying to get my servers to work with Xen, but as I use sata2, it looks like only a recent kernel will do the job. Unfortunately, only an official 2.6.18 is provided, which doesn't boot at all on those machines. I've tried ubuntu's 2.6.22-xen, but it looks very buggy. Where could we find patches for 2.6.22/23 to add Xen? How could I get a
2010 Jan 06
16
8-15 TB storage: any recommendations?
Hello everyone, This is not directly related to CentOS but still: we are trying to set up some storage servers to run under Linux - most likely CentOS. The storage volume would be in the range specified: 8-15 TB. Any recommendations as far as hardware? Thanks. Boris.
2018 May 03
1
Finding performance bottlenecks
Tony's performance sounds significantly subpar from my experience. I did some testing with gluster 3.12 and Ovirt 3.9 on my running production cluster when I enabled the glfsapi; even my pre-gfapi numbers are significantly better than what Tony is reporting.

Before using gfapi:

]# dd if=/dev/urandom of=test.file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824
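One caveat with this kind of test: /dev/urandom is CPU-bound, and the page cache can flatter the write numbers. A variant that isolates the storage path (flags assume GNU coreutils dd):

    # bypass the page cache and force a final fsync
    dd if=/dev/zero of=test.file bs=1M count=1024 oflag=direct conv=fsync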
2010 Nov 12
6
xen guest not booting
Hi, My xen guest suddenly stopped booting and is giving me the error message below. Any idea what is going wrong here? DOM 0 boots OK though.

ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
ata5.00: irq_stat 0x40000008
ata5.00: failed command: READ FPDMA QUEUED
ata5.00: cmd 60/00:00:cd:ee:36/02:00:09:00:00/40 tag 0 ncq 262144 in
res 51/40:72:5b:f0:36/d9:00:09:00:00/40 Emask
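A common workaround for failing 'READ FPDMA QUEUED' commands is to disable NCQ for the affected disk and see whether the errors stop; a sketch, device name illustrative:

    # at runtime: a queue depth of 1 effectively disables NCQ
    echo 1 > /sys/block/sda/device/queue_depth
    # or for all ports at boot, via the kernel command line:
    #   libata.force=noncq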