Harry Putnam
2009-Feb-26 04:48 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
I'm experimenting with a ZFS home server, running Opensol-11 by way of VMware on WinXP. It seems like one way to avoid all the hardware problems one might run into trying to install opensol on available or spare hardware.

Are there some bad gotchas about running opensol/zfs through VMware and never going to real hardware?

One thing that comes to mind is the overhead of two OSs on one processor: an Athlon64 3400+ (2.2GHz) running 32-bit Windows XP and opensol in VMware. But if I lay off the Windows OS -- like not really working it with transcribing video or compressing masses of data or the like -- is this likely to be a problem?

Also, I'm losing out on going 64-bit, since it's not likely this machine supports the AMD-V extensions... and I'm short on SATA connections. I only have two onboard, but plan to install a PCI-style SATA controller to squeeze in some more disks. It's a big old ANTEC case, so I don't think getting the disks in there will be much of a problem. But I have wondered whether a PCI SATA controller is likely to be a big problem.

So, are there things I need to know about that would make running a ZFS home server from VMware a bad idea?

The server will serve as a backup destination for 5 home machines, and would most likely see heavy usage -- ghosted disk images and other large chunks of data -- only about 2-3 days a week, plus a regular 3-day-a-week backup running from Windows using `Retrospect' to back up user directories and changed files in C:\.

A 6th (Linux) machine may eventually start using the server, but for now it's pretty self-contained and has lots of disk space.
Harry,

The LiveCD for OpenSolaris has a driver detection tool on it -- this will let you see if your hardware is supported without touching the installed XP system.

A big issue with running a VM is that ZFS prefers direct access to storage.
Harry Putnam
2009-Feb-27 17:31 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Blake <blake.irvin at gmail.com> writes:

> Harry,
> The LiveCD for OpenSolaris has a driver detection tool on it - this
> will let you see if your hardware is supported without touching the
> installed XP system.

Are you talking about the official Opensol-11 install iso or something else?

> A big issue with running a VM is that ZFS prefers direct access to storage.

What effect does this preference have? Does it perform badly when it does not have direct access to storage? Are the virtual disks supplied by VMware less functional in some way?
On Fri, Feb 27, 2009 at 12:31 PM, Harry Putnam <reader at newsguy.com> wrote:

> Are you talking about the official Opensol-11 install iso or something
> else?

The official 2008.11 LiveCD has the tool on the default desktop as an icon.

>> A big issue with running a VM is that ZFS prefers direct access to storage.
>
> What effect does this preference have? Does it perform badly when it
> does not have direct access to storage? Are the virtual disks
> supplied by VMware less functional in some way?

I would expect pretty bad performance adding VMware as a layer in between ZFS and your block devices. The ZFS documentation specifically advises against abstracting block devices whenever possible. Since ZFS is trying to checksum blocks, the fewer abstraction layers you have in between ZFS and spinning rust, the fewer points of error/failure.
Richard Elling
2009-Feb-27 18:29 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Blake wrote:
> On Fri, Feb 27, 2009 at 12:31 PM, Harry Putnam <reader at newsguy.com> wrote:
>> Are you talking about the official Opensol-11 install iso or something
>> else?
>
> The official 2008.11 LiveCD has the tool on the default desktop as an icon.

No need, it is a Java app and you can run it on multiple OSes.
http://www.sun.com/bigadmin/hcl/hcts/device_detect.jsp
-- richard
Bob Friesenhahn
2009-Feb-27 18:48 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
On Fri, 27 Feb 2009, Blake wrote:

> Since ZFS is trying to checksum blocks, the fewer abstraction
> layers you have in between ZFS and spinning rust, the fewer points
> of error/failure.

Are you saying that ZFS checksums are responsible for the failure?

In what way do more layers of abstraction cause particular problems for ZFS which won't also occur with some other filesystem?

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Bob Netherton
2009-Feb-27 18:53 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Bob is right. Less chance of failure perhaps, but also less protection. I don't like it when my storage lies to me :)

Bob

Sent from my iPhone

On Feb 27, 2009, at 12:48 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> In what way do more layers of abstraction cause particular
> problems for ZFS which won't also occur with some other filesystem?
Harry Putnam
2009-Feb-27 19:36 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Blake wrote:
>> The official 2008.11 LiveCD has the tool on the default desktop as an icon.

Richard Elling wrote:
> No need, it is a Java app and you can run it on multiple OSes.
> http://www.sun.com/bigadmin/hcl/hcts/device_detect.jsp

It's a little confusing to tell what to make of the report.

The main thing I was worried about was the motherboard... It's not mentioned in the report as far as I can see; the only red ball I get is on a VIA RAID controller, which I don't plan to use anyway.

www.jtan.com/~reader/SDDToolReport-chub-OpenSolaris.html

So can I assume that my motherboard and Opensol-11 will get along fine? It's not mentioned on the HCL anywhere.
Harry Putnam
2009-Feb-27 19:50 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Blake <blake.irvin at gmail.com> writes:

> Harry,
> The LiveCD for OpenSolaris has a driver detection tool on it - this
> will let you see if your hardware is supported without touching the
> installed XP system.

That won't help much with the one piece of hardware I posted about in the OP:

> ... and I'm short on SATA connections. I only have two onboard, but
> plan to install a PCI-style SATA controller to squeeze in some more
> disks.

I want to know if there is a PCI SATA controller known to work with opensol-11. I'm not involved in a big commercial operation, so those $500 to $1000+ jobs with dozens of ports are not what I want. I'm hoping for something around $50 or so with 4 ports... even 2 would probably be enough.

Also, that driver detection tool you mention has nothing to say about the existing motherboard (Aopen AK86-L [not mentioned on the HCL]), which is another item I was worried about not working with opensol if installed direct to hardware.
Richard Elling
2009-Feb-27 20:14 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Harry Putnam wrote:

> The main thing I was worried about was the motherboard... It's not
> mentioned in the report as far as I can see; the only red ball I get
> is on a VIA RAID controller, which I don't plan to use anyway.
>
> www.jtan.com/~reader/SDDToolReport-chub-OpenSolaris.html
>
> So can I assume that my motherboard and Opensol-11 will get along
> fine? It's not mentioned on the HCL anywhere.

Motherboards don't matter. It is what is on the motherboard that matters. In your case, it looks like everything should work except the VIA SATA RAID controller. Fortunately, the IDE controller is supported, so you should be able to install it.

IMHO, unless you need > 6 SATA ports, it might be less expensive to buy a new motherboard than to buy a PCI[-E] SATA controller. Most modern motherboards in the < $100 category have 4+ SATA ports.
-- richard
Harry Putnam
2009-Feb-27 20:24 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Richard Elling <richard.elling at gmail.com> writes:

> Motherboards don't matter. It is what is on the motherboard that
> matters. In your case, it looks like everything should work except
> the VIA SATA RAID controller. Fortunately, the IDE controller is
> supported, so you should be able to install it.

The CPU doesn't appear to be reported there either... one of the things that's on the mobo. Oh, and since the motherboards don't matter, why is there a section on the HCL for them?

> IMHO, unless you need > 6 SATA ports, it might be less expensive
> to buy a new motherboard than to buy a PCI[-E] SATA controller.
> Most modern motherboards in the < $100 category have 4+ SATA
> ports.

It might be easier to buy, but maybe not so easy to install... I see PCI SATA controllers for $50 and down, but no idea if they work with opensol-11. The HCL seems to home in on very big, very expensive controllers; I don't see lightweights listed.
I meant that the more layers you remove, the fewer layers there are that can tell ZFS something that's not true. I guess ZFS would still catch those errors in most cases -- it would still be a pain to deal with needless errors. Also, I like to do what the manual says, and the manual says to avoid abstraction layers :)

Harry, Richard is probably right. There are plenty of boards with nVidia or Intel SATA that should work fine. Search for 'opensolaris hcl' (hardware compatibility list) -- there are 400+ mobos listed there that are reported to work.
Bob Friesenhahn
2009-Feb-27 21:15 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
On Fri, 27 Feb 2009, Blake wrote:

> I meant that the more layers you remove, the fewer layers there are
> that can tell ZFS something that's not true. I guess ZFS would still
> catch those errors in most cases -- it would still be a pain to deal
> with needless errors. Also, I like to do what the manual says, and
> the manual says to avoid abstraction layers :)

I expect that the desire to avoid abstraction layers is because ZFS is performance-tuned so that each LUN is one disk. If one LUN is several (or many) disks, then ZFS does not know how to optimize its I/O requests to take best advantage of the available disks. If the LUN is very large, then resilvering and other LUN-specific tasks may take longer than desired.

It is not that abstraction layers are necessarily bad. Abstraction layers are what allow modern systems to work, and to scale. Whenever an abstraction layer is used, it is best to do a fault analysis to see what any particular failure would do to the system. For example, it would be very bad if two mirrored LUNs were accidentally using part of the same physical disk drive.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
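To make that fault-analysis point concrete, a minimal sketch (device names are placeholders): a "mirror" whose two halves are slices of the same physical disk protects against nothing, while a mirror of whole disks on separate controllers survives the loss of either device:

    # bad: both halves of the mirror share one physical disk
    zpool create tank mirror c1t0d0s0 c1t0d0s1

    # better: whole disks, ideally on different controllers
    zpool create tank mirror c1t0d0 c2t0d0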
Brandon High
2009-Feb-28 00:17 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
On Thu, Feb 26, 2009 at 8:35 AM, Blake <blake.irvin at gmail.com> wrote:

> A big issue with running a VM is that ZFS prefers direct access to storage.

VMware can give VMs direct access to the actual disks. This should avoid the overhead of using virtual disks.

-B
--
Brandon High : bhigh at freaks.com
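For reference, this is the "use a physical disk" option when adding a disk in VMware Server/Workstation; the GUI generates a small .vmdk descriptor that points at the host device instead of a virtual-disk file. A rough sketch of what such a descriptor can look like on a Windows host -- exact fields vary by product and version, and the sector count and drive number below are made-up placeholders (let the GUI generate the real thing):

    # Disk DescriptorFile (raw/physical disk -- sketch only)
    version=1
    createType="fullDevice"

    # Extent description: the whole physical disk handed to the guest
    RW 312581808 FLAT "\\.\PhysicalDrive1" 0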
Harry Putnam
2009-Feb-28 00:51 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Brandon High <bhigh at freaks.com> writes:

> On Thu, Feb 26, 2009 at 8:35 AM, Blake <blake.irvin at gmail.com> wrote:
>> A big issue with running a VM is that ZFS prefers direct access to storage.
>
> VMware can give VMs direct access to the actual disks. This should
> avoid the overhead of using virtual disks.

Can you say if it makes a noticeable difference to ZFS? I'd noticed that option but didn't connect it to this conversation. Also, if I recall, there is some warning about needing to be an advanced user to use that option, or something similar.
Brandon makes a good point. I think that's an option to pursue if you don't want to risk messing up your Windows install.

If you can, dedicate entire disks, rather than partitions, to ZFS. It's easier to manage.

ZFS is managed by the VM's processor in this case, so you will take a bigger performance hit than running on bare metal. That said, my filer exporting ZFS over NFS to 10 busy CentOS clients barely breaks a sweat.
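As a concrete sketch of the whole-disk advice (device names are placeholders): given a whole disk, ZFS labels the device itself and can safely enable the drive's write cache; given only a slice, it has to assume it shares the disk:

    # whole disk: ZFS owns the entire device
    zpool create backup c1t1d0

    # slice only: works, but ZFS shares the disk with whatever owns the other slices
    zpool create backup c1t1d0s4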
Juergen Nickelsen
2009-Mar-02 07:38 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Harry Putnam <reader at newsguy.com> writes:

> www.jtan.com/~reader/SDDToolReport-chub-OpenSolaris.html

I see the following there:

    Solaris Bundled Driver: * vgatext/ ** radeon
    Video
    ATI Technologies Inc
    R360 NJ [Radeon 9800 XT]

I *think* this is the same driver used with my work laptop (which I don't have at hand to check, unfortunately), also with ATI graphics hardware.

As far as I know, the situation with ATI is that, while ATI supplies well-performing binary drivers for MS Windows (of course) and Linux, there is no such thing for other OSes. So OpenSolaris uses standardized interfaces of the graphics hardware, which have comparatively low bandwidth. This leads to very unimpressive graphics performance, up to the point that the machine nearly freezes when large images are loaded into the graphics adapter.

Most of my work is text-oriented (lots of XTerms and one XEmacs, mostly) with some web browsing and the occasional GUI tool thrown in, and this works mostly fine on the system. Even picture processing with Gimp from time to time is okay, while not fast. (And I do not mean "not blindingly fast", but rather "really not fast".)

But there are things that really are a pain, e.g. web pages that constantly blend one picture into the other, for instance http://www.strato.de/ . While you would not usually notice such a thing, this page makes my laptop so slow that it requires significant effort even to find and press the button to close the window. Still, I find that bearable, given that I have Solaris running on the machine (as my target platform is Solaris 10), including ZFS goodness.

On the other hand, I understand that you want to build a server, not a workstation-type machine. Graphics performance should be irrelevant in this case. If it is not, you might consider another graphics adapter. To my knowledge, the situation is much better with NVIDIA hardware.

Regards, Juergen.

--
Unix gives you just enough rope to hang yourself -- and then a couple
of more feet, just to be sure. -- Eric Allman
Juergen Nickelsen
2009-Mar-02 11:18 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
Juergen Nickelsen <ni at jnickelsen.de> writes:

> Solaris Bundled Driver: * vgatext/ ** radeon
> Video
> ATI Technologies Inc
> R360 NJ [Radeon 9800 XT]
>
> I *think* this is the same driver used with my work laptop (which I
> don't have at hand to check, unfortunately), also with ATI graphics
> hardware.

Confirmed.

Regards, Juergen.

--
What you "won" was the obligation to pay more for something than
anybody else thought it was worth.
              -- Delainey and Rasmussen's "Betty" about eBay
Yes, most nVidia hardware will give you much better performance on OpenSolaris (provided the card is fairly recent).
Marion Hakanson
2009-Mar-02 19:19 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
ni at jnickelsen.de said:

> As far as I know, the situation with ATI is that, while ATI supplies
> well-performing binary drivers for MS Windows (of course) and Linux,
> there is no such thing for other OSes. So OpenSolaris uses
> standardized interfaces of the graphics hardware, which have
> comparatively low bandwidth.
> . . .
> But there are things that really are a pain, e.g. web pages that
> constantly blend one picture into the other, for instance
> http://www.strato.de/ .

Wow, this is getting pretty far afield from a ZFS discussion. Hopefully others will find this a helpful tidbit...

I just found some xorg.conf settings which greatly alleviate this issue on my Solaris-10-x86 machine with an ATI Radeon 9200 graphics adapter. In the "Device" section, try one of the following:

    Option "AccelMethod" "EXA"    # default is "XAA"

Or:

    Option "XaaNoOffscreenPixmaps" "on"

Seriously, it's almost like having a new PC. Either option makes the "100% CPU while fading rotating images" go away. Personally, I prefer the 2nd option, as I found the 1st method led to slightly slower redrawing of windows (e.g. when you switch between GNOME desktops), but that will depend on what else you're doing.

But yes, nVidia cards are much, much better supported in Solaris.

Regards, Marion
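For context, those Option lines belong inside the Device section for the card in /etc/X11/xorg.conf; a minimal sketch (the Identifier string is a placeholder for whatever your generated config already uses):

    Section "Device"
        Identifier "ATI Radeon 9200"
        Driver     "radeon"
        Option     "XaaNoOffscreenPixmaps" "on"
    EndSection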
Miles Nordin
2009-Mar-02 23:42 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
>>>>> "bh" == Brandon High <bhigh at freaks.com> writes:bh> VMWare can give VMs direct access to the actual disks. This bh> should avoid the overhead of using virtual disks. maybe some of the ``overhead'''' but not necessarily the write cache sync bugs. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090302/14b0c210/attachment.bin>
Brandon High
2009-Mar-03 00:02 UTC
[zfs-discuss] Virtual zfs server vs hardware zfs server
On Fri, Feb 27, 2009 at 4:51 PM, Harry Putnam <reader at newsguy.com> wrote:

> Can you say if it makes a noticeable difference to ZFS? I'd noticed
> that option but didn't connect it to this conversation. Also, if I
> recall, there is some warning about needing to be an advanced user to
> use that option, or something similar.

I can't comment, since I haven't used the option before on VMware Server or Workstation. I would expect it to be a better solution; however, the host operating system's controller driver would still be used. Other than that, the host system's I/O system and caching should not be in the data path at all. You could take the drives out, install them in a new machine, and they would look just like native disks.

-B
--
Brandon High : bhigh at freaks.com
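That portability is the standard ZFS export/import sequence; a minimal sketch with a placeholder pool name:

    # on the old machine, before pulling the disks (optional but clean)
    zpool export mypool

    # on the new machine, after installing the disks
    zpool import mypool

    # if the pool was never cleanly exported, force the import
    zpool import -f mypool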
When creating a ZFS pool, it seems the default format is striping. But is there a way to create a pool of concatenated disks?

That is, let's say I have 2 local disks (SATA, 100GB each) and 1 iSCSI partition (from a remote Solaris server, 80GB). So, if I issue the command:

# zpool create -f mypool c0t1d0 c0t2d0 c2t010000E0815E35D500002A0049313F80d0

(for example), I would get 'mypool' at 280GB in size, right? But this would be in striped mode. As a stripe, if any one of the disks were to fail, mypool would be toast.

But, if mypool was a concatenation, things would get written onto c0t1d0 first, and if any one of the subsequent disks were to fail, I should be able to recover everything off of mypool, as long as I have not filled up c0t1d0, since things were written sequentially rather than across all disks like striping.

Is my understanding correct, or am I totally off the wall here? And, if I AM correct, how do you create a concatenated zpool?

S
> But, if mypool was a concatenation, things would get written onto
> c0t1d0 first, and if any one of the subsequent disks were to fail, I
> should be able to recover everything off of mypool, as long as I have
> not filled up c0t1d0, since things were written sequentially rather
> than across all disks like striping.

I think the circumstances where this would work are very unlikely, and I don't know that ZFS gives you any guarantee that it's going to write to the front of a given device and then work back from there... does it? Even if it did, what if the pool filled up and then emptied out again while you weren't looking? Some data might be left on the last device.

Neither a simple concat nor a stripe has any resilience to disk failure; you must use mirrors or raidz to achieve that.

> Is my understanding correct, or am I totally off the wall here?
>
> And, if I AM correct, how do you create a concatenated zpool?

You can't. ZFS dynamically stripes across top-level vdevs. Whichever order you add them into the pool, they will effectively be treated as a stripe.

regards,
--justin
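For the redundancy Justin mentions, the same devices from the question could instead be arranged as a raidz vdev or a mirror; sketches only, reusing the device names from the original command:

    # single-parity raidz: survives the loss of any one device
    # (usable capacity is limited by the smallest device, here the 80GB iSCSI LUN)
    zpool create mypool raidz c0t1d0 c0t2d0 c2t010000E0815E35D500002A0049313F80d0

    # or mirror the two equal-sized local disks
    zpool create mypool mirror c0t1d0 c0t2d0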