Hello,

We have a long-running problem with NetApp filers. When we connect a server to the filer, sequential read performance is ~70MB/s. But once we run a database on the server, sequential read performance drops to ~11MB/s.

That's happening with two servers. One is running Oracle, the other MySQL. During the speed tests the database load is very light (less than 1MB/s of reads and writes), and during the tests the NetApp was serving only these two servers.

What could be causing such a slowdown, and the overall slow read performance? Writes are fast, >100MB/s. NetApp support is claiming that such performance is normal. Somehow I do not believe that a 2007 model should deliver such XXth century performance levels. :)

The servers are RHEL4, fully updated, 64-bit, connected to a NetApp FAS3040 via SAN using QLogic 2Gb/s FC adapters. We tried both the Red Hat drivers and the QLogic (for NetApp) ones. Changing FC parameters (such as queue depth) did not help either.

Regards,
Mindaugas
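For reference, a sequential-read figure like the ~70MB/s above is what a plain streaming-read test produces. A minimal Python sketch of such a test follows; the device path, block size, and total volume are assumptions, and note that the page cache will inflate repeat runs (remount or use O_DIRECT between runs for cold-cache numbers):

#!/usr/bin/env python
# Minimal sequential-read test. The device path, block size and total
# volume below are assumptions -- adjust them for your setup.
import time

DEVICE = "/dev/sdb"        # hypothetical LUN exported by the filer
BLOCK_SIZE = 1024 * 1024   # 1MB reads, typical for streaming tests
TOTAL_MB = 1024            # read 1GB in total

dev = open(DEVICE, "rb", 0)  # unbuffered
start = time.time()
for _ in range(TOTAL_MB):
    if not dev.read(BLOCK_SIZE):
        break                # hit the end of the device early
elapsed = time.time() - start
dev.close()

print("sequential read: %.1f MB/s" % (TOTAL_MB / elapsed))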
Mindaugas Riauba wrote:
> NetApp support is claiming that such performance is normal. Somehow
> I do not believe that a 2007 model should deliver such XXth century
> performance levels. :)

How many disks, what RPM are they running at, what I/O block size is being used, and what protocol (NFS/iSCSI/FC) is being used?

Checking an Oracle DB I used to run: it averages 7kB I/Os with spikes to 59kB. For a 10k RPM disk, a 7kB I/O size means roughly 800 kBytes/second before latency starts to become an issue, depending on the controller type; really high-end controllers can go up to about 1,312kB instead of 800kB. The array reports Oracle using an average of 985 kBytes/second with spikes to 28 MBytes/second.

A MySQL DB I used to run averages 41kB I/Os with spikes up to 333kB. For a 10k RPM disk, 41kB I/Os works out to about 4,500 kBytes/second. The array reports MySQL using an average of 3,200 kBytes/second with spikes to 34.1 MBytes/second.

The array throughput numbers include the benefit of the disk cache, while the raw spindle figures assume no cache (i.e. "worst case" performance). Both of those databases are connected via Fibre Channel, so performance will be quite a bit higher than with NFS or iSCSI.

So the numbers you're seeing could be perfectly reasonable, as NetApp suggests, depending on the exact workload and your array configuration. For the workload, look to the array for statistics. I'm not too familiar with NetApp arrays, but I assume they offer a wide range of statistics; hopefully average I/O size is among them, as that is the most critical number for determining throughput.

The array running the above databases has 40 10k RPM disks with the data evenly distributed across all spindles for maximum performance, and it also hosts about 25 other systems. NetApps certainly aren't the fastest thing in the west, but given your performance levels it sounds like you don't have many disks behind the volume and are limited by the disks rather than the controller(s). Most low-end arrays don't offer the level of visibility that the enterprise ones do.

On that note, I'm getting a new 150TB array in today; pretty excited about that. A 3PAR T400 virtualized storage system.

nate
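To make the spindle arithmetic above explicit: per-disk throughput is roughly IOPS times average I/O size, and a single disk's IOPS is bounded by seek time plus half a rotation. A quick Python sketch (the 4.5ms seek time is a typical 10k RPM assumption, not a measured value):

# Back-of-envelope spindle throughput: throughput ~= IOPS * avg I/O size.
# The default seek time is a typical 10k RPM figure (an assumption).

def spindle_throughput_kb(rpm, avg_io_kb, seek_ms=4.5):
    rotational_ms = (60000.0 / rpm) / 2     # half a revolution on average
    iops = 1000.0 / (seek_ms + rotational_ms)
    return iops * avg_io_kb

print("%.0f kB/s" % spindle_throughput_kb(10000, 7))   # ~930, near the ~800 above
print("%.0f kB/s" % spindle_throughput_kb(10000, 41))  # ~5500, near the 4,500 above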
> -----Original Message-----
> From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On
> Behalf Of Mindaugas Riauba
> Sent: Thursday, November 06, 2008 3:45 AM
> To: nahant-list at redhat.com; CentOS at centos.org
> Subject: [CentOS] Painfully slow NetApp with database
>
> We have a long-running problem with NetApp filers. When we connect a
> server to the filer, sequential read performance is ~70MB/s. But once
> we run a database on the server, sequential read performance drops to
> ~11MB/s.

There are many factors that impact database I/O performance, and you are comparing two probably dissimilar I/O paths. Configuration differences (disk, controller, connection, kernel parameters) may be part of the problem.

Physical vs. logical sequence, i.e. disk head movement, is particularly significant for sequential access. Throughput increases of 200% to 500% are not uncommon after packing (dump/restore) a large table.

The database translates and, in theory, optimizes SQL statements, generating query plans. Those plans should be examined to determine whether they are doing what is expected.

> That's happening with two servers. One is running Oracle, the other
> MySQL. During the speed tests the database load is very light (less
> than 1MB/s of reads and writes), and during the tests the NetApp was
> serving only these two servers.
>
> What could be causing such a slowdown, and the overall slow read
> performance? Writes are fast, >100MB/s.

Database performance data is another good source of diagnostic info; it can help pinpoint query plan problems.

This article may help you put all the pieces together. It is one of the best I have found:

http://www.miracleas.com/BAARF/oow2000_same.pdf

The author is Juan Loaiza from Oracle.

Hope some of this helps.

david
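To act on the query-plan advice concretely, here is a minimal Python sketch using the MySQLdb (MySQL-python) driver; the connection parameters, table, and query are hypothetical placeholders:

import MySQLdb

# Fetch the query plan for a suspect statement. The connection details,
# table and query are hypothetical placeholders -- substitute your own.
conn = MySQLdb.connect(host="localhost", user="dbuser",
                       passwd="secret", db="appdb")
cur = conn.cursor()
cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
for row in cur.fetchall():
    # Watch the 'type' and 'rows' columns: type 'ALL' (a full table
    # scan) or a huge row estimate usually means a missing/unused index.
    print(row)
cur.close()
conn.close()

(On the Oracle side, EXPLAIN PLAN FOR ... followed by DBMS_XPLAN.DISPLAY does the same job.)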