I am looking at NAS software from Nexenta, and after some initial testing I like what I see, so I think we will find the funding in the budget for a dual setup.

We are looking at a dual-CPU Supermicro server with about 32GB RAM, 2 x 250GB OS disks, 21 x 1TB SATA disks, and 1 x 64GB SSD.

The system will use Nexenta's auto-cdp, which I think is based on AVS, to remote-mirror to a system a few miles away. The system will mostly serve as an NFS server for our VMware hosts. We have about 80 VMs that access the VMFS datastores.

I have read that it's smart to use a few small raid groups in a larger pool, but I am uncertain about placing 21 disks in one pool.

The setups I have thought of so far are:

1 pool with 3 x raidz2 groups of 6 x 1TB disks, 2 x 64GB SSDs for cache, and 2 spare disks. This should give us about 12TB.

Another setup I have been thinking about is:

1 pool with 9 x mirror of 2 x 1TB, also with 2 spares and 2 x 64GB SSDs.

Does anyone have a recommendation on what might be a good setup?
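For reference, a minimal sketch of how the two candidate layouts could be created. The device names are hypothetical and the pool name "tank" is just a placeholder; adjust both to the real hardware:

    # Layout 1: 3 x raidz2 of 6 x 1TB, 2 hot spares, 2 SSD cache devices (~12TB usable)
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
        raidz2 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 \
        spare c1t18d0 c1t19d0 \
        cache c2t0d0 c2t1d0

    # Layout 2: 9 x 2-way mirror, 2 hot spares, 2 SSD cache devices (~9TB usable)
    zpool create tank \
        mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 \
        mirror c1t6d0 c1t7d0 mirror c1t8d0 c1t9d0 mirror c1t10d0 c1t11d0 \
        mirror c1t12d0 c1t13d0 mirror c1t14d0 c1t15d0 mirror c1t16d0 c1t17d0 \
        spare c1t18d0 c1t19d0 \
        cache c2t0d0 c2t1d0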
On Mon, 3 Aug 2009, Joachim Sandvik wrote:

> We are looking at a dual-CPU Supermicro server with about 32GB RAM,
> 2 x 250GB OS disks, 21 x 1TB SATA disks, and 1 x 64GB SSD.
>
> The system will use Nexenta's auto-cdp, which I think is based on AVS,
> to remote-mirror to a system a few miles away. The system will mostly
> serve as an NFS server for our VMware hosts. We have about 80 VMs that
> access the VMFS datastores.
>
> I have read that it's smart to use a few small raid groups in a larger
> pool, but I am uncertain about placing 21 disks in one pool.

21 disks in 1 pool should not be a problem.

> The setups I have thought of so far are:
>
> 1 pool with 3 x raidz2 groups of 6 x 1TB disks, 2 x 64GB SSDs for cache,
> and 2 spare disks. This should give us about 12TB.
>
> Another setup I have been thinking about is:
>
> 1 pool with 9 x mirror of 2 x 1TB, also with 2 spares and 2 x 64GB SSDs.
>
> Does anyone have a recommendation on what might be a good setup?

The mirror configuration will provide much better multiuser performance (IOPS) since it offers so many more vdevs (9 vs 3) and the vdevs are simple. The MTTDL is not quite as good as raidz2, but since you are providing spare disks, and because resilver of a mirror disk is fast, it should be quite reliable. If you do a periodic scrub of the array, there should not be much concern about read errors during resilver.

The mirror configuration will also provide much better random read performance than raidz2, since any drive in a mirror may be used to satisfy a read request (effectively 18 readable drives vs 3 vdevs).

Usually when raidz2 is used, many more disks are used per vdev so that it is more space efficient, but in your case it is not that much more space efficient than mirroring: 3 x (6 - 2) x 1TB = 12TB vs 9 x 1TB = 9TB. If 9TB is enough space for your needs, then mirrors are likely the best configuration. It is safer to err in the direction of more performance, since you can always add disks later if you need more space, but adding performance is more expensive once the data is in place.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
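To put rough numbers on the vdev-count argument: assuming about 80 random IOPS per 7,200 rpm SATA drive (a figure quoted later in this thread) and the usual rule of thumb that a raidz/raidz2 vdev delivers roughly one drive's worth of small random reads, the two layouts compare like this. These are back-of-the-envelope estimates only; caching and workload mix will change the real numbers.

    # 9 x 2-way mirror: random reads can be spread over all 18 drives
    echo $((9 * 2 * 80))   # ~1440 random-read IOPS
    # 3 x raidz2: roughly one drive's worth of random reads per vdev
    echo $((3 * 80))       # ~240 random-read IOPS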
Will the IOPS in the mirrored setup be good enough that an SSD cache disk might not be needed? In that case I might go for 10 x mirror of 2 x 1TB instead of 9. I really don't think space will be an issue on this system, as we are currently using about 3TB, and I have been testing compression with great results as well.

How would memory impact performance? Memory is quite cheap nowadays, so going to 48GB would be quite inexpensive. Or should I go for 24GB and use the rest of the money for faster SSDs?
On Mon, 3 Aug 2009, Joachim Sandvik wrote:

> Will the IOPS in the mirrored setup be good enough that an SSD cache
> disk might not be needed? In that case I might go for 10 x mirror of
> 2 x 1TB instead of 9. I really don't think space will be an issue.

This really depends on how many synchronous writes you have. NFS writes are mostly or all synchronous. Other protocols may require the same or fewer synchronous writes. SSDs can be spectacular at improving NFS write performance.

> How would memory impact performance? Memory is quite cheap nowadays,
> so going to 48GB would be quite inexpensive. Or should I go for 24GB
> and use the rest of the money for faster SSDs?

This depends on the balance of reads vs writes, how many synchronous writes you have to worry about, and the size of the working set. If the RAM is smaller than the working set, then IOPS will be wasted. Even though synchronous writes are the bottleneck for some uses, most servers do mostly reads, and nothing reads faster than RAM: an SSD read cache might return data at 200MB/s, but RAM can return it at 3000MB/s or more. ZFS writes will also be faster with more RAM when existing data is updated, since the 128K block to be updated does not need to be read again before being written.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
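As a concrete illustration of the two roles an SSD can play here (a separate intent log to absorb the synchronous NFS writes described above, versus an L2ARC read cache), a short sketch using hypothetical device names and the placeholder pool name "tank":

    # Dedicated log device (slog) to speed up synchronous (NFS) writes
    zpool add tank log c2t0d0

    # L2ARC read cache device, useful when the working set exceeds RAM
    zpool add tank cache c2t1d0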
On Mon, Aug 3, 2009 at 3:34 PM, Joachim Sandvik <no-reply at opensolaris.org> wrote:

> I am looking at NAS software from Nexenta, and after some initial
> testing I like what I see, so I think we will find the funding in the
> budget for a dual setup.
>
> We are looking at a dual-CPU Supermicro server with about 32GB RAM,
> 2 x 250GB OS disks, 21 x 1TB SATA disks, and 1 x 64GB SSD.
>
> The system will use Nexenta's auto-cdp, which I think is based on AVS,
> to remote-mirror to a system a few miles away. The system will mostly
> serve as an NFS server for our VMware hosts. We have about 80 VMs that
> access the VMFS datastores.
>
> [...]
>
> Does anyone have a recommendation on what might be a good setup?

FWIW, I think you're nuts putting that many VMs on SATA disk, SSD as cache or not. If there's ANY kind of I/O load those disks are going to fall flat on their face.

VM I/O looks like completely random I/O from the storage perspective, and it tends to be pretty darn latency sensitive. Good luck, I'd be happy to be proven wrong. Every test I've ever done has shown you need SAS/FC for VMware workloads, though.

--Tim
On Mon, Aug 3, 2009 at 10:18 PM, Tim Cook <tim at cook.ms> wrote:

> FWIW, I think you're nuts putting that many VMs on SATA disk, SSD as
> cache or not. If there's ANY kind of I/O load those disks are going to
> fall flat on their face.
>
> VM I/O looks like completely random I/O from the storage perspective,
> and it tends to be pretty darn latency sensitive. Good luck, I'd be
> happy to be proven wrong. Every test I've ever done has shown you need
> SAS/FC for VMware workloads, though.

As has been so kindly pointed out to me, I should clarify. When I say "SATA" disks, I mean 7200RPM or 5400RPM disks meant for bulk storage. The interface of the disk is not important (FC, SATA, or SAS). VMware should go on 15k disks if at all possible, regardless of interface. While I'm unaware of any 15k disks that use anything but FC, SAS, or SCSI interfaces, it is technically possible for them to use SATA.

--Tim
Were those tests you mentioned on RAID-5/6/raidz/raidz2, or on mirrored volumes of some kind?

We've found here that VM loads on RAID-10 SATA volumes with relatively high numbers of disks actually work pretty well, and depending on the size of the drives, you quite often get more usable space too. ;-)

I suspect 80 VMs on 20 SATA disks might be pushing things, though, but it'll depend on the workload.

T

________________________________
From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Tim Cook
Sent: Tuesday, August 04, 2009 1:18 PM
Subject: Re: [zfs-discuss] Need tips on zfs pool setup..

[quoted text trimmed]
Thanks for your input; it's good to read that not everyone is positive. I will do a lot more testing before I make the final choice. I have never tested more than 3-5 VMs on SATA raids, but we use 40 x SATA with great results in our backup box, though that only serves one server.

Does anybody have some numbers on speed of SATA vs 15k SAS? Is it really a big difference? We could also go for a mix of SATA and 15k SAS in the storage box, with 10 SATA in one pool and 10 SAS in another pool.
> does anybody have some numbers on speed of SATA vs 15k SAS?

The next chance I get, I will do a comparison.

> Is it really a big difference?

I noticed a huge improvement when I moved a virtualized pool off a series of 7200 RPM SATA discs to even 10k SAS drives. Night and day...

jlc
On 04/08/2009, at 9:42 PM, Joseph L. Casale wrote:

> I noticed a huge improvement when I moved a virtualized pool off a
> series of 7200 RPM SATA discs to even 10k SAS drives. Night and day...

What I would really like to know is whether it makes a big difference comparing, say, 7200RPM drives in mirror+stripe mode vs 15kRPM drives in raidz2, and how much of a difference raidz2 makes compared to mirror+stripe in a contentious multi-client environment.

cheers,
James
On 4 Aug 09, at 13:42, Joseph L. Casale wrote:

>> does anybody have some numbers on speed of SATA vs 15k SAS?
>
> The next chance I get, I will do a comparison.
>
>> Is it really a big difference?
>
> I noticed a huge improvement when I moved a virtualized pool off a
> series of 7200 RPM SATA discs to even 10k SAS drives. Night and day...

If by 'huge' you mean much more than 10K/7.2K in the data path with otherwise the same number of spindles, then that has got to be because of something not specified here.

-r
On Aug 4, 2009, at 7:26 AM, Joachim Sandvik <no-reply at opensolaris.org> wrote:

> does anybody have some numbers on speed of SATA vs 15k SAS? Is it
> really a big difference?

For random I/O, the number of IOPS is 1000 / (mean access time + average rotational latency), in ms.

Average rotational latency is 1/2 the rotational period, which is 1 / (rotations per second); rotations per second is 250 for 15K rpm and 120 for 7200 rpm. This means for 15K drives the ARL = 2ms and for 7200 rpm drives the ARL is about 4.2ms.

So for each random I/O, take the mean access time, add the ARL, and divide 1000 by that number.

SAS disks tend to have faster access times, so say a top SAS disk has an access time of 4ms + ARL = 6ms, and a top SATA disk has an access time of 8ms + ARL = 12ms:

1000/6 = ~167 IOPS for SAS
1000/12 = ~83 IOPS for SATA

Then follow the guidelines for how RAIDed disks increase or decrease the IOPS by RAID type.

> We could also go for a mix of SATA and 15k SAS in the storage box,
> with 10 SATA in one pool and 10 SAS in another pool.

I recommend SAS for all storage that does primarily random I/O and SATA for storage that does primarily sequential I/O.

-Ross
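A quick way to plug a drive's own published figures into the formula above; the 4ms and 8ms seek times here are only placeholder assumptions, so substitute the manufacturer's numbers:

    # IOPS = 1000 / (average seek ms + average rotational latency ms)
    # average rotational latency ms = 60000 / rpm / 2
    echo "scale=1; 1000 / (4 + 60000/15000/2)" | bc   # 15k SAS,   ~4ms seek -> ~166 IOPS
    echo "scale=1; 1000 / (8 + 60000/7200/2)"  | bc   # 7200 SATA, ~8ms seek -> ~82 IOPS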
On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais <Roch.Bourbonnais at sun.com> wrote:

> If by 'huge' you mean much more than 10K/7.2K in the data path with
> otherwise the same number of spindles, then that has got to be because
> of something not specified here.
>
> -r

No it doesn't. The response time on 10k drives is night and day better than on 7.2k drives. VMware workloads look exactly like DB workloads. Faster spindles = better response time = the virtualized platform being much happier.

Not to mention, in my experience, the 7.2k drives fall off a cliff when you overwork them. 10k/15k drives tend to have a more linear degradation in performance.

--Tim
On Aug 4, 2009, at 7:01 AM, Ross Walker wrote:

> SAS disks tend to have faster access times, so say a top SAS disk has
> an access time of 4ms + ARL = 6ms, and a top SATA disk has an access
> time of 8ms + ARL = 12ms

Unfair! You are comparing 2.5" drives to 3.5" drives. The seek times depend on the size (diameter) of the disk: smaller diameter == faster seek. Where you have to be careful is that laptop drives tend to be 2.5" but also run slower, at 5,400 rpm. 1TB 2.5" drives are beginning to appear, but it looks like they will be slower -- targeted at laptops, not servers.

From a disk vendor's perspective, the mechanics can be the same with different electronics. It is the mechanics which drive performance for HDDs, which is why if you can eliminate the mechanics you can go really fast. To look at this another way, a SATA SSD will run circles around an FC or SAS HDD.
-- richard
On Tue, Aug 4, 2009 at 10:33 AM, Richard Elling <richard.elling at gmail.com> wrote:

> Unfair! You are comparing 2.5" drives to 3.5" drives. The seek times
> depend on the size (diameter) of the disk: smaller diameter == faster
> seek. Where you have to be careful is that laptop drives tend to be
> 2.5" but also run slower, at 5,400 rpm. 1TB 2.5" drives are beginning
> to appear, but it looks like they will be slower -- targeted at
> laptops, not servers.

It was merely an example of how to calculate the IOPS of a given drive and to show the OP the kind of speed 15K SAS affords over 7200 SATA. You must plug in your drive's own seek numbers.

You are fooling yourself, though, if you think the size makes a difference in the seek times. Take a look at the technical specifications: Average Latency = Average Rotational Latency, and Average Seek Time = Mean Access Time, when looking at the specs.

From what I have read online, my example is very optimistic.

The platters are getting smaller, but the linear density is increasing and the heads are getting smaller. It's like a 3.5" drive, but shrunk in proportion.

-Ross
>> If by 'huge' you mean much more than 10K/7.2K in the data path with
>> otherwise the same number of spindles, then that has got to be
>> because of something not specified here.
>
> No it doesn't. The response time on 10k drives is night and day better
> than on 7.2k drives. VMware workloads look exactly like DB workloads.
> Faster spindles = better response time = the virtualized platform
> being much happier.
>
> Not to mention, in my experience, the 7.2k drives fall off a cliff
> when you overwork them. 10k/15k drives tend to have a more linear
> degradation in performance.

Precisely. In my case, I then removed the controller's RAID-6 config and exported the now-unused 7.2k SATA discs to Solaris as single RAID-0 volumes for the server to run raidz2 across. That took a workload that had been unusable (it stalled the I/O subsystem to the point of clients failing writes) to one where I could double the client count on this system.

jlc
Tim Cook writes:

> No it doesn't. The response time on 10k drives is night and day better
> than on 7.2k drives. VMware workloads look exactly like DB workloads.
> Faster spindles = better response time = the virtualized platform
> being much happier.

Yes, at light thread count, and to the tune of 10K/7.2K. At high load, if your demand exceeds the supply, then there is no bound to what the response time will be. To make a fair comparison, I would pit two systems with about the same total RPM.

> Not to mention, in my experience, the 7.2k drives fall off a cliff
> when you overwork them. 10k/15k drives tend to have a more linear
> degradation in performance.

That's the 'something else'. It's got to be in the driver/firmware, not in the RPM/interface protocol.

-r
On Aug 4, 2009, at 8:01 AM, Ross Walker wrote:

> It was merely an example of how to calculate the IOPS of a given drive
> and to show the OP the kind of speed 15K SAS affords over 7200 SATA.
> You must plug in your drive's own seek numbers.

Yes. I blogged about this a while ago, and included small random read performance models for ZFS.
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance

> You are fooling yourself, though, if you think the size makes a
> difference in the seek times. Take a look at the technical
> specifications: Average Latency = Average Rotational Latency, and
> Average Seek Time = Mean Access Time, when looking at the specs.

I've never found a spec where a 3.5" disk had faster seek time than a 2.5" disk.

> From what I have read online, my example is very optimistic.

Actually, it is about right. I SWAG 80 IOPS for a 3.5" 7,200 rpm disk and 170 for a 2.5" 15k rpm disk. For planning purposes, this seems to work well. Of course, some of the new SSDs they are bragging about do 40,000 IOPS... game over.
-- richard
On Aug 4, 2009, at 2:11 PM, Richard Elling <richard.elling at gmail.com> wrote:

> I've never found a spec where a 3.5" disk had faster seek time than a
> 2.5" disk.

In a straight apples-to-apples comparison (type, speed, etc.) no, they are about equal. But you were alluding to 2.5" being a faster technology, which it really isn't, though they do use less power to spin the smaller disks and you can fit more per U than 3.5" disks.

> Actually, it is about right. I SWAG 80 IOPS for a 3.5" 7,200 rpm disk
> and 170 for a 2.5" 15k rpm disk. For planning purposes, this seems to
> work well. Of course, some of the new SSDs they are bragging about do
> 40,000 IOPS... game over.

Well, we still need to see how SSDs fare long term. Not that I wouldn't love to get rid of my spinning platters; I just need their replacement to be as, if not more, reliable.

-Ross