I am looking at the performance numbers in the Oracle VDI admin guide.

http://docs.oracle.com/html/E26214_02/performance-storage.html

From my calculations, 200 desktops running the Windows 7 knowledge-user profile (15 IOPS each) with a 30/70 read/write split comes to 5,100 IOPS. Using 7200 rpm disks, the requirement works out to 68 disks.

This doesn't seem right, because if you are using clones with caching, you should be able to satisfy most of your reads from ARC and L2ARC. As well, Oracle VDI caches writes by default, so the writes will be coalesced and there will be no ZIL activity.

Does anyone have other guidelines on what they are seeing for IOPS with VDI?

Happy New Year!

Geoff
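For reference, here is a rough sketch of the arithmetic behind those numbers. It assumes mirrored vdevs (so every logical write lands on two disks) and a budget of roughly 75 random IOPS per 7200 rpm drive; both figures are assumptions on my part rather than numbers taken from the guide.

# Back-of-the-napkin VDI sizing, reproducing the guide's 5,100 IOPS / 68 disk
# figures. Assumptions (mine): mirrored vdevs double every write, and a
# 7200 rpm disk is budgeted at ~75 random IOPS.
desktops = 200
iops_per_desktop = 15           # Windows 7 "knowledge user" profile
read_frac, write_frac = 0.30, 0.70
write_penalty = 2               # mirroring: each logical write hits two disks
iops_per_disk = 75              # assumed budget for a 7200 rpm drive

frontend = desktops * iops_per_desktop                        # 3,000 IOPS
backend = frontend * read_frac + frontend * write_frac * write_penalty
disks = backend / iops_per_disk

print(f"back-end IOPS: {backend:.0f}")    # 5100
print(f"7200 rpm disks: {disks:.0f}")     # 68

Under those same assumptions, dropping the write penalty to 1 (i.e., assuming writes are coalesced as described above) brings the count down to 40 disks, which is roughly the gap the caching argument is pointing at.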
On Jan 2, 2013, at 8:45 PM, Geoff Nordli <geoffn at gnaa.net> wrote:

> I am looking at the performance numbers in the Oracle VDI admin guide.
>
> http://docs.oracle.com/html/E26214_02/performance-storage.html
>
> From my calculations, 200 desktops running the Windows 7 knowledge-user profile (15 IOPS each) with a 30/70 read/write split comes to 5,100 IOPS. Using 7200 rpm disks, the requirement works out to 68 disks.
>
> This doesn't seem right, because if you are using clones with caching, you should be able to satisfy most of your reads from ARC and L2ARC. As well, Oracle VDI caches writes by default, so the writes will be coalesced and there will be no ZIL activity.

All of these IOPS <--> VDI user guidelines are wrong. The problem is that the variability of response time is too great for an HDD. The only hope we have of getting the back-of-the-napkin calculations to work is to reduce the variability by using a device that is more consistent in its response (e.g., SSDs).

> Does anyone have other guidelines on what they are seeing for IOPS with VDI?

The successful VDI implementations I've seen have relatively small space requirements for the performance-critical work, so there are a bunch of companies offering SSD-based arrays for that market. If you're stuck with HDDs, then effective use of snapshots + clones with a few GB of RAM and a slog can support quite a few desktops.
 -- richard

--
Richard.Elling at RichardElling.com
+1-760-896-4422
Thanks Richard, Happy New Year.

On 13-01-03 09:45 AM, Richard Elling wrote:
> On Jan 2, 2013, at 8:45 PM, Geoff Nordli <geoffn at gnaa.net> wrote:
>
>> I am looking at the performance numbers in the Oracle VDI admin guide.
>>
>> http://docs.oracle.com/html/E26214_02/performance-storage.html
>>
>> From my calculations, 200 desktops running the Windows 7 knowledge-user profile (15 IOPS each) with a 30/70 read/write split comes to 5,100 IOPS. Using 7200 rpm disks, the requirement works out to 68 disks.
>>
>> This doesn't seem right, because if you are using clones with caching, you should be able to satisfy most of your reads from ARC and L2ARC. As well, Oracle VDI caches writes by default, so the writes will be coalesced and there will be no ZIL activity.
>
> All of these IOPS <--> VDI user guidelines are wrong. The problem is that the variability of response time is too great for an HDD. The only hope we have of getting the back-of-the-napkin calculations to work is to reduce the variability by using a device that is more consistent in its response (e.g., SSDs).

For sure there is going to be a lot of variability, but it seems we aren't even close.

Have you seen any back-of-the-napkin calculations that take SSD caching into consideration?

>> Does anyone have other guidelines on what they are seeing for IOPS with VDI?
>
> The successful VDI implementations I've seen have relatively small space requirements for the performance-critical work, so there are a bunch of companies offering SSD-based arrays for that market. If you're stuck with HDDs, then effective use of snapshots + clones with a few GB of RAM and a slog can support quite a few desktops.
> -- richard

Yes, I would like to stick with HDDs.

I am just not quite sure what "quite a few desktops" means.

I thought for sure there would be lots of people around who have done small deployments using a standard ZFS setup.

thanks,
Geoff
On Thu, Jan 3 at 20:38, Geoff Nordli wrote:

>> I am looking at the performance numbers in the Oracle VDI admin guide.
>>
>> http://docs.oracle.com/html/E26214_02/performance-storage.html
>>
>> From my calculations, 200 desktops running the Windows 7 knowledge-user profile (15 IOPS each) with a 30/70 read/write split comes to 5,100 IOPS. Using 7200 rpm disks, the requirement works out to 68 disks.
>>
>> This doesn't seem right, because if you are using clones with caching, you should be able to satisfy most of your reads from ARC and L2ARC. As well, Oracle VDI caches writes by default, so the writes will be coalesced and there will be no ZIL activity.
>
> Yes, I would like to stick with HDDs.
>
> I am just not quite sure what "quite a few desktops" means.
>
> I thought for sure there would be lots of people around who have done small deployments using a standard ZFS setup.

Even a single modern SSD should be able to provide hundreds of gigabytes of fast L2ARC to your system, and it can scale as your user base grows for a relatively small initial investment. This is actually about the perfect use case for an L2ARC on SSD.

--
Eric D. Mudama
edmudama at bounceswoosh.org
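To put a rough number on that, here is a small sketch of how far one SSD's worth of L2ARC might stretch. The per-desktop working-set and gold-image figures are guesses on my part, not measurements, and assume linked clones so the master image is shared; the cache device itself would be attached with something like "zpool add <pool> cache <device>".

# Rough L2ARC sizing sketch. All three numbers are assumptions: one SSD worth
# of cache, a shared (linked-clone) gold image, and a small per-desktop set of
# hot, unique blocks. Adjust to your own measurements.
l2arc_gb = 480                  # a single modern SSD used as cache
gold_image_hot_gb = 20          # assumed hot portion of the shared master image
working_set_per_desktop_gb = 2  # assumed unique hot data per Windows 7 clone

desktops = (l2arc_gb - gold_image_hot_gb) / working_set_per_desktop_gb
print(f"desktops whose hot data fits in one SSD of L2ARC: {desktops:.0f}")  # ~230

With clones sharing the gold image, cache capacity is rarely the limiting factor; the question is how much of each desktop's unique data is actually hot.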
On Jan 3, 2013, at 8:38 PM, Geoff Nordli <geoffn at gnaa.net> wrote:

> Thanks Richard, Happy New Year.
>
> On 13-01-03 09:45 AM, Richard Elling wrote:
>> On Jan 2, 2013, at 8:45 PM, Geoff Nordli <geoffn at gnaa.net> wrote:
>>
>>> I am looking at the performance numbers in the Oracle VDI admin guide.
>>>
>>> http://docs.oracle.com/html/E26214_02/performance-storage.html
>>>
>>> From my calculations, 200 desktops running the Windows 7 knowledge-user profile (15 IOPS each) with a 30/70 read/write split comes to 5,100 IOPS. Using 7200 rpm disks, the requirement works out to 68 disks.
>>>
>>> This doesn't seem right, because if you are using clones with caching, you should be able to satisfy most of your reads from ARC and L2ARC. As well, Oracle VDI caches writes by default, so the writes will be coalesced and there will be no ZIL activity.
>>
>> All of these IOPS <--> VDI user guidelines are wrong. The problem is that the variability of response time is too great for an HDD. The only hope we have of getting the back-of-the-napkin calculations to work is to reduce the variability by using a device that is more consistent in its response (e.g., SSDs).
>
> For sure there is going to be a lot of variability, but it seems we aren't even close.
>
> Have you seen any back-of-the-napkin calculations that take SSD caching into consideration?

Yes. I've written a white paper on the subject, somewhere on the nexenta.com website (if it is still available). More current information is in the ZFSday presentation:
http://www.youtube.com/watch?v=A4yrSfaskwI
http://www.slideshare.net/relling

>>> Does anyone have other guidelines on what they are seeing for IOPS with VDI?
>>
>> The successful VDI implementations I've seen have relatively small space requirements for the performance-critical work, so there are a bunch of companies offering SSD-based arrays for that market. If you're stuck with HDDs, then effective use of snapshots + clones with a few GB of RAM and a slog can support quite a few desktops.
>> -- richard
>
> Yes, I would like to stick with HDDs.
>
> I am just not quite sure what "quite a few desktops" means.
>
> I thought for sure there would be lots of people around who have done small deployments using a standard ZFS setup.

... and large :-) I did 100 desktops with 2 SSDs two years ago. The presentation was given at OpenStorage Summit 2010; I don't think there is a video, though :-(

Fundamentally, people like to do sizing in IOPS, but all IOPS are not created equal. An I/O satisfied by the ARC is often limited by network bandwidth constraints, whereas an I/O that hits a slow pool is often limited by HDD latency. The two are five orders of magnitude apart when the pool is built from HDDs.
 -- richard

--
Richard.Elling at RichardElling.com
+1-760-896-4422
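To illustrate the "not all IOPS are equal" point with numbers, below is a tiny sketch of how the blended service time moves with cache hit rate. The latency figures are assumed ballpark values (a network-bound ARC hit, an SSD L2ARC hit, and a 7200 rpm HDD seek), not measurements from any particular system.

# Blended service time for a given cache hit profile. Latency figures are
# assumed ballpark values, not measurements.
arc_hit_latency_us = 100        # ARC hit, roughly a network round trip
l2arc_hit_latency_us = 300      # SSD L2ARC hit, including the network
hdd_latency_us = 10_000         # ~10 ms seek + rotation on a 7200 rpm disk

def effective_latency_us(arc_hit, l2arc_hit):
    """Weighted-average latency for the given ARC/L2ARC hit fractions."""
    miss = 1.0 - arc_hit - l2arc_hit
    return (arc_hit * arc_hit_latency_us
            + l2arc_hit * l2arc_hit_latency_us
            + miss * hdd_latency_us)

for arc, l2 in [(0.00, 0.00), (0.70, 0.20), (0.90, 0.09)]:
    print(f"ARC {arc:.0%} / L2ARC {l2:.0%}: {effective_latency_us(arc, l2):,.0f} us")

Even a modest miss rate is dominated by the HDD term, which is why per-desktop IOPS rules of thumb translate so poorly from one configuration to another.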
On 13-01-04 02:08 PM, Richard Elling wrote:

>>> All of these IOPS <--> VDI user guidelines are wrong. The problem is that the variability of response time is too great for an HDD. The only hope we have of getting the back-of-the-napkin calculations to work is to reduce the variability by using a device that is more consistent in its response (e.g., SSDs).
>>
>> For sure there is going to be a lot of variability, but it seems we aren't even close.
>>
>> Have you seen any back-of-the-napkin calculations that take SSD caching into consideration?
>
> Yes. I've written a white paper on the subject, somewhere on the nexenta.com website (if it is still available). More current information is in the ZFSday presentation:
> http://www.youtube.com/watch?v=A4yrSfaskwI
> http://www.slideshare.net/relling

Great presentation, Richard.

Our system is designed to provide hands-on labs for education. We use a saved-state file for our VMs, which eliminates the cold boot/login and shutdown issues and so reduces the random I/O. In this scenario we also don't need to worry about software updates or AV scans, because the labs are completely sandboxed.

We need to use HDDs because we have a large number of labs that have to be stored for an extended period.

I have been asked to adapt the platform to deliver a VDI solution, so I need to make a few more tweaks.

thanks,
Geoff