Hi everyone!

I'm planning to migrate a little firm's Windows server to an HVM machine. The test setup works fine (it's basically an MSSQL server and a pack of programs tied to it, for an accountancy office); all the programs can reach the MSSQL on the HVM Windows from the outside. In fact it turned out so well that I'm even planning to simply copy the Windows image(s) to the real server later on. I'm planning to use ZFS too, a simple mirroring setup.

Now to the question: would it be better to reinstall Windows onto a ZFS zvol, or is it OK to leave it as is and simply copy the image to a directory (as it is now)?

A few concerns:

1.) At the moment Windows seems to have pretty good performance (on nv77 a.t.m.), but it's on UFS now. Can I expect the performance to drop just by moving the same image to ZFS? (Sun X2100 with 1G RAM. I know it isn't much, but that's what we have for now. Later there will be more! The test machine is a C2D E6600, 1GB RAM, SATA HDDs, all slices UFS.)

2.) Can I grow Windows' emulated partition by adding space to the image file? Is there a tool for it? I know it can be done on ZFS. It's not a big deal though, as you can add space by using junctions on Windows. That is the case now - SQL sits on another partition (image), linked into its c:\program files\bla directory - just to make backups a little easier.

3.) I plan to back up the image by copying it to an in-office PC, say in the night hours. If I use ZFS, how could that be done? I mean, if I install Windows on a zvol, does it still use an image file? Now a simple Samba share and a scheduled windoze task can do it.

I hope it's all clear... :)

Thanks in advance for the advice!!

A
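For reference, the junction trick mentioned in point 2 can be set up on Windows Server 2003 with the Sysinternals junction.exe tool; the paths below are placeholders, not the poster's actual layout:

  rem move the SQL data onto the second (virtual) disk, then link it back
  move "C:\Program Files\Microsoft SQL Server\MSSQL\Data" D:\MSSQL\Data
  junction "C:\Program Files\Microsoft SQL Server\MSSQL\Data" D:\MSSQL\Data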
Mark Johnson
2007-Dec-06 13:34 UTC
Re: HVM Windows: disk image vs. zvol - which is better?
Attila Nagy wrote:
> 1.) At the moment Windows seems to have pretty good performance (on nv77 a.t.m.),
> but it's on UFS now. Can I expect the performance to drop just by moving the same
> image to ZFS?

Performance will not be very good until you have Windows PV disk/net drivers. I'm not sure when we will have publicly available Windows PV drivers.

I would not recommend moving to zfs with only 1G of ram to be shared between dom0 and any guests. It's really meant for larger systems.

On a small system like this, you might want to start with Solaris Volume Manager (SVM) and then migrate to zfs when you go to a bigger system. zfs gives you a lot more functionality, but SVM is small and runs as fast as a native slice/disk in dom0. You should be able to dd the disk image from an SVM volume to a disk file to a zvol and move it around at will as long as the size matches (I haven't done it myself, though).

You should also be shutting down everything you can on dom0, e.g. X windows, etc.

On a side note, I have been playing with a script which builds a minimal dom0 that runs out of a ramdisk. The ramdisk is about 80M compressed and takes about ~280M of memory for the disk when running. Good for booting off a USB stick, compact flash, etc. Other than the ramdisk it has a pretty small memory footprint, so it almost makes up for the ramdisk. I need to clean it up and send it out for folks to play with/improve...

> 2.) Can I grow Windows' emulated partition by adding space to the image file?
> Is there a tool for it?

You would have to do the same tricks that you do today with Windows when moving to a bigger disk: e.g. create a new, larger disk, copy the old disk to the larger disk, and use Partition Magic to grow the partition. It doesn't hurt having multiple disks...

In the near'ish future, it may even make sense to export a zfs filesystem via CIFS from dom0 and have the Windows domain net-mount some of the disks, depending on your performance requirements. You would need PV drivers, of course.

> 3.) I plan to back up the image by copying it to an in-office PC, say in the
> night hours. If I use ZFS, how could that be done?

See this thread:

http://www.opensolaris.org/jive/thread.jspa?messageID=166817

Thanks,

MRJ
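A rough sketch of the image-to-zvol move Mark describes (untested, exactly as he says he hasn't tried it either; pool name, volume name, size and paths are placeholders, and the zvol must be at least as large as the image file):

  # create a zvol the same size as (or larger than) the image file
  zfs create -V 12G tank/windows-boot
  # copy the raw image into it
  dd if=/export/vm/windows.img of=/dev/zvol/dsk/tank/windows-boot bs=1M
  # then point the guest config at the zvol instead of the file, e.g.
  #   disk = ['phy:/dev/zvol/dsk/tank/windows-boot,hda,w']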
> Performance will not be very good until you have
> Windows PV disk/net drivers. I'm not sure when we
> will have publicly available Windows PV drivers.

Performance is not _that_ bad, at least I feel so. This Windows boots up fine, in about 30 seconds. Okay, far from perfect :)), but still, not bad.
Do you have at least an estimate of when PV drivers might arrive? (Say 2008 Q1, or maybe in 6 months, or maybe sooner? Just curious.)

> I would not recommend moving to zfs with only 1G
> of ram to be shared between dom0 and any guests.
> It's really meant for larger systems.

Yeah, _I_ know that, but the owner wants everything, but gives... a little less :) He even speaks about having some 30 instances of MSSQL on that HVM Windows... I recommended he buy a few 8GB ECC modules :))

> On a small system like this, you might want to
> start with Solaris Volume Manager (SVM) and then
> migrate to zfs when you go to a bigger system. [...]

Okay, I'll give it a shot. I'd stick to zfs because of its simplicity and features, but it needs more memory, I know.
Somehow I have always had HW RAID, or zfs (lately). I don't have much experience with SVM, but I don't think it's _that_ complicated :))

> You should also be shutting down everything
> you can on dom0, e.g. X windows, etc.

Sure.
Probably I missed something, but how can I get to the HVM's console from, say, another Windows box on the same LAN? VNC to dom0? I don't think so. VNC to domU? It isn't trivial to administer Windows from the command line... :)

> On a side note, I have been playing with a
> script which builds a minimal dom0 that runs
> out of a ramdisk. [...]

Ummm... sounds interesting!
Think of me as a volunteer! :)

> You would have to do the same tricks that you do today
> with Windows when moving to a bigger disk: e.g. create
> a new, larger disk, copy the old disk to the larger
> disk, and use Partition Magic to grow the partition.

Yes, that was my idea too. Okay.

> It doesn't hurt having multiple disks...

Not at all!

> In the near'ish future, it may even make sense to
> export a zfs filesystem via CIFS from dom0
> and have the Windows domain net-mount some of the
> disks, depending on your performance requirements.
> You would need PV drivers, of course.

I was actually considering something similar, only with Samba :)

> See this thread:
>
> http://www.opensolaris.org/jive/thread.jspa?messageID=166817

Great thread, thanks! My plan was to let the server compress the Windows image in the night hours and put it in a Samba-shared directory, and from there a Windows client machine pulls it down and maybe writes it to DVD (or directly to DVD-RAM). This way the data leaves the server.
It's almost the same theory as a zfs send/receive! Whoa, I invented the wheel! :))

> Thanks,
>
> MRJ

Thanks for the great support, and all the work!

Attila
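The nightly copy described above could be as simple as a dom0 cron entry that compresses the image into a Samba-exported directory (a sketch only; paths and times are made up, and the guest should be shut down or otherwise quiesced while the image is read):

  # root's crontab on dom0 -- run at 02:00 every night
  0 2 * * * gzip -c /export/vm/windows.img > /export/backup/windows.img.gz
  # with the image on ZFS, a snapshot plus 'zfs send' to the office PC
  # would do the same job without copying a live file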
James Cornell
2007-Dec-06 23:53 UTC
Re: HVM Windows: disk image vs. zvol - which is better?
xVM and X11 have nothing to do with each other. X11 is just a giant memory hog, and JDS many times more on top of that. Even with 32GB of memory, it's common not to see any GUI on UNIX systems; it's unnecessary.

As for Windows command-line administration, I mostly agree, except that PowerShell, Services for UNIX, and Windows Server 2003 and beyond have a full set of tools for managing IIS, domains, DNS, security, etc. - you just need to read up on them. Those three solutions together will cover about 75% of the tasks of an average Windows sysadmin.

Windows can run backgrounded - see VMware Workstation and VMware Server backgrounding. It doesn't bind to X11 or anything, unlike COM and compositing on Vista, for instance, which make it impossible to separate the window manager from the program. There is a console for installing and using xVM guests, but it is hardly necessary for operating a server after it's set up.

There's VNC support with xVM already, regardless of the guest type; it's just a matter of toggling a flag and setting a password. You could also use RDP if you want better integration and performance after the guest is up. The direct VNC console is obviously very nice because if Windows BSODs you still have a view that doesn't go through mirror drivers or in-guest abstraction - it's like a fancy IP KVM.

James
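If it helps, Remote Desktop on Windows Server 2003 / XP can also be switched on from the command line; one common registry tweak, shown here purely as an illustration:

  rem allow incoming Remote Desktop connections
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f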
Mark Johnson
2007-Dec-07 03:03 UTC
Re: HVM Windows: disk image vs. zvol - which is better?
Attila Nagy wrote:
> Performance is not _that_ bad, at least I feel so. This Windows boots up
> fine, in about 30 seconds. Okay, far from perfect :)), but still, not bad.
> Do you have at least an estimate of when PV drivers might arrive?

Hopefully soon'ish, but I really don't know...

> Probably I missed something, but how can I get to the HVM's console from,
> say, another Windows box on the same LAN? VNC to dom0? I don't think so.
> VNC to domU?
> It isn't trivial to administer Windows from the command line... :)

Yep, as of b76 or b77 we include a VNC server which you can use for dom0 if needed. You can VNC into the domUs for their console... If you're using a modern version of Windows you should enable the RDP stuff and use that.

MRJ
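The domU console VNC that Mark mentions is controlled from the guest's config file; a minimal illustration (these are standard Xen 3.1 HVM options, but the values here are placeholders):

  vnc = 1
  vnclisten = '0.0.0.0'    # or dom0's LAN address
  vncpasswd = 'secret'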
Boris Derzhavets
2007-Dec-07 13:26 UTC
Re: HVM Windows: disk image vs. zvol - which is better?
At the moment I wouldn't move any production system to Solaris xVM (b76, b77), taking into account the Sun hardware you intend to utilize for Solaris xVM and the development phase Solaris xVM is in right now.

Just for fun, try Windows HVM on CentOS 5.1 on top of the embedded Xen 3.1, utilizing Linux LVs as the storage image and having at least 2GB of RAM. Quoting "The test machine is a C2D E6600 1GB RAM, sata hdd's": same CPU, same board, and a 250 GB SATA drive on a controller with a 16 MB cache.

Next, estimate the price for a PC with:
- Intel Core2Quad Q6600 <2.40GHz/1066MHz/8Mb>
- MB ASUS P5K64 WS
- 8 GB RAM
- SATA HDD 250 GB (Seagate, WD)

Xen 3.1 enabled RHEL 5.1 (CentOS 5.1) would run on this box pretty fast (AHCI drivers with NCQ support).

However, I do realize that in just one year Solaris xVM may be one of the best ways to go for Xen virtualization.
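A minimal sketch of the Linux-LV-as-storage idea on a CentOS 5.1 dom0 (volume group name, LV name and size are assumptions):

  # carve out a logical volume for the Windows guest
  lvcreate -L 20G -n win2003 VolGroup00
  # and reference it from the guest config instead of a file-backed image:
  #   disk = ['phy:/dev/VolGroup00/win2003,hda,w']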
> xVM and X11 have nothing to do with each other. X11 is just a giant
> memory hog, and JDS many times more on top of that. Even with 32GB of
> memory, it's common not to see any GUI on UNIX systems; it's
> unnecessary.

Yes, I run a few Solarises with no GUI :)

> As for Windows command-line administration, I mostly agree, except
> that PowerShell, Services for UNIX, and Windows Server 2003 and beyond
> have a full set of tools for managing IIS, domains, DNS, security,
> etc. [...]

Yes, they are good tools indeed. But still, somehow Windows is not a "real" "shell-toy" :)

> There's VNC support with xVM already, regardless of the guest type;
> it's just a matter of toggling a flag and setting a password. You
> could also use RDP if you want better integration and performance
> after the guest is up. [...] it's like a fancy IP KVM.

:)

> James

Thanks for the comments!

A
> Hopefully soon'ish, but I really don't know...

Great! :)

> Yep, as of b76 or b77 we include a VNC server which you can
> use for dom0 if needed.

Yes, I had just found out that I had actually configured it, so it "just works" :)

> You can VNC into the domUs for their console... If you're
> using a modern version of Windows you should enable
> the RDP stuff and use that.
>
> MRJ

Thanks for the follow-up!

A
> At the moment I wouldn't move any production system to Solaris xVM
> (b76, b77), taking into account the Sun hardware you intend to utilize
> for Solaris xVM and the development phase Solaris xVM is in right now.

Well indeed, I know I must be careful here. I'm planning Live Upgrade too, since that way there is a chance that it won't stop working :)
(I read that under x86pv LU does not work, but under x86 it does, and a few hours of service downtime is bearable on a ~monthly basis.)

> Just for fun, try Windows HVM on CentOS 5.1 on top of the embedded Xen
> 3.1, utilizing Linux LVs as the storage image and having at least 2GB
> of RAM. Quoting "The test machine is a C2D E6600 1GB RAM, sata hdd's":
> same CPU, same board, and a 250 GB SATA drive on a controller with a
> 16 MB cache.

To be honest, I never played with Linux that much. What I know of Linux comes from the Sun Linux-to-Solaris (Blueprints?) documents (can't remember the exact title at the moment, sorry). A few friends of mine utilise Linux boxen; based on what I've seen there aren't _that_ serious differences, but I'm afraid to run a Linux. It is a good suggestion though, I'll consider it, thanks!

> Next, estimate the price for a PC with:
> - Intel Core2Quad Q6600 <2.40GHz/1066MHz/8Mb>
> - MB ASUS P5K64 WS
> - 8 GB RAM
> - SATA HDD 250 GB (Seagate, WD)
>
> Xen 3.1 enabled RHEL 5.1 (CentOS 5.1) would run on this box pretty
> fast (AHCI drivers with NCQ support).

This C2D machine is my #2 machine; it's at home, mostly for net, a little video editing, and for use as a Solaris sandbox. My "main" machine is an Asus barebone with a C2Q 6600 and 4G RAM. That works better :) (And I carry it in my backpack, so I have it handy almost everywhere :))
In the meantime the X2100 will get a few more gigs of RAM. The owner has finally believed me that 1G isn't much nowadays for such purposes :)

> However, I do realize that in just one year Solaris xVM may be one of
> the best ways to go for Xen virtualization.

I hope so! Based on what we've seen so far, I think we can surely count on these folks; I bet they work hard!

Thanks for your comments too! It turned out to be a great thread; I had hoped only for a quick reply! Thanks folks!

A
Andrew Gabriel
2007-Dec-08 13:50 UTC
Re: HVM Windows: disk image vs. zvol - which is better?
Attila Nagy wrote:
>> At the moment I wouldn't move any production system to Solaris xVM
>> (b76, b77), taking into account the Sun hardware you intend to utilize
>> for Solaris xVM and the development phase Solaris xVM is in right now.
>
> Well indeed, I know I must be careful here. I'm planning Live Upgrade
> too, since that way there is a chance that it won't stop working :)
> (I read that under x86pv LU does not work, but under x86 it does, and a
> few hours of service downtime is bearable on a ~monthly basis.)

Note that Live Upgrade currently does not update the grub menu.lst entry for xVM to point to the newly activated boot environment, so you need to do that yourself. If you boot xVM after activating the new boot environment without doing this, you'll find it boots xVM from the old boot environment.

--
Andrew Gabriel
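For reference, the menu.lst entry Andrew refers to looks roughly like the sketch below on a UFS boot environment; after luactivate, the root slice has to be pointed at the new BE by hand. Purely illustrative - slice numbers and paths depend on the installation:

  title Solaris xVM (new BE)
    # the root slice must reference the newly activated boot environment
    root (hd0,0,a)
    kernel$ /boot/$ISADIR/xen.gz
    module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix
    module$ /platform/i86pc/$ISADIR/boot_archive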
Boris Derzhavets
2007-Dec-08 14:14 UTC
Re: HVM Windows: disk image vs. zvol - which is better?
> To be honest, I never played with Linux that much. [...] A few friends
> of mine utilise Linux boxen; based on what I've seen there aren't
> _that_ serious differences, but I'm afraid to run a Linux.

Don't be afraid of RHEL 5.1 (CentOS 5.1 is the free clone). The only noticeable difference is that CentOS is unsupported. The Xen hypervisor was originally created on top of a Linux 2.6.x kernel (not sure) and finally moved to 2.6.18 (Xen 3.1). In my opinion RedHat (RHEL AS 5.0/5.1, Fedora 7/8) provides very powerful and up-to-date Xen virtualization (3.1) right now.
On Sat, Dec 08, 2007 at 01:50:57PM +0000, Andrew Gabriel wrote:
>> Well indeed, I know I must be careful here. I'm planning Live Upgrade
>> too, since that way there is a chance that it won't stop working :)
>> (I read that under x86pv LU does not work, but under x86 it does, and
>> a few hours of service downtime is bearable on a ~monthly basis.)
>
> Note that Live Upgrade currently does not update the grub menu.lst
> entry for xVM to point to the newly activated boot environment, so you
> need to do that yourself. If you boot xVM after activating the new
> boot environment without doing this, you'll find it boots xVM from the
> old boot environment.

(And this is fixed in build 80.)

regards
john
Okay! Good to know, thanks!
> Don't be afraid of RHEL 5.1 (CentOS 5.1 is the free clone). The only
> noticeable difference is that CentOS is unsupported. [...] In my
> opinion RedHat (RHEL AS 5.0/5.1, Fedora 7/8) provides very powerful
> and up-to-date Xen virtualization (3.1) right now.

Thanks for the info! I'll evaluate it, but it'll take some time. I do not know much about Linux's package management, patch management, LVM, amongst a few other things. Well... live and learn. :)

Thanks!

A
> Performance will not be very good until you have Windows PV disk/net
> drivers.

Just how bad is "not very good" here?

> I would not recommend moving to zfs with only 1G of ram to be shared
> between dom0 and any guests. It's really meant for larger systems.

How large?

I'm thinking of slicing up an X4150 into Solaris zones & XP xVMs to service about half a dozen sysadmins & Java developers with Sun Ray clients. The idea is that they use the Solaris side for primary tool support, with some software running in the XP xVM(s) and accessed via RDP.

Also, some server support software is expected to run in the Solaris zones to support the sysadmins and developers: VCS, a bug tracker, maybe a couple of others with a relatively small footprint. And I had in mind to do this all elegantly with ZFS backing, for the obvious reasons.

Would the performance of the XP xVM(s) not stand up to this kind of use? Am I trying to do too much with one box?
Paul Lange wrote:
>> Performance will not be very good until you have Windows PV disk/net
>> drivers.
>
> Just how bad is "not very good" here?
>
> Would the performance of the XP xVM(s) not stand up to this kind of
> use? Am I trying to do too much with one box?

This is just based on experience, but the performance is terrible, even on an Ultra-20 M2 with a 2.6GHz AMD Opteron 1218 and 2GB RAM, with 1GB allocated to the guest. The major issue is mouse synchronization; the performance is comparable to QEMU running fully emulated. It's somewhat usable - enough to install the system and a few programs - but hardly usable for development or testing. Until drivers that fix the networking and disk support are out, it's just how it is. Right now it's nothing comparable to native or VMware.

RDP is the preferred access method for performance reasons; VNC just doesn't cut it, and the console is not usable for anything but installing. For your proposed setup, I'd recommend hosting something like Windows Server 2003 with 2GB RAM on xVM and using Terminal Services; depending on what you host, you may need more. Multiple VMs will be VERY painful - if you can help it, don't do it! The one issue to watch out for is disk I/O with 6 users; if you have a dedicated disk it'll help a lot. You're not necessarily trying to do too much with one machine, it's just a problem right now; things will get better quickly, but I can't give you an ETA because I don't work directly with the xVM team. If you need hardware recommendations and can afford it, a dual-socket Opteron system such as the Ultra 40 would be the preferred host. If you can help it, running multiple VMs per core is not ideal, and it won't be usable if you do so. One VM per socket is the only way to do it without pain.

PS: Sorry that I didn't CC xen-discuss on the first e-mail, I made a typo :-o

James
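A fragment showing how the suggestions above (a single 2GB Windows Server 2003 guest, kept away from dom0's CPUs, on its own disk) might translate into a guest config file - the numbers and device names are illustrative assumptions, not recommendations from the thread:

  memory = 2048
  vcpus  = 2
  cpus   = "2,3"                             # keep the guest off the cores dom0 is using
  disk   = ['phy:/dev/dsk/c1t1d0s2,hda,w']   # dedicated spindle for guest I/O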
Hi James,

For your mouse issues, have you tried setting the mouse to USB tablet mode? Add the following to your domain config, and restart it:

usb=1
usbdevice='tablet'

That will fix your mouse tracking problem under VNC.

--
-----------------------------------------------------
Russ Blaine | Solaris Kernel | russell.blaine@sun.com
Oh, I'll try it. Thanks!

James

On Dec 12, 2007, at 3:09 PM, Russ Blaine wrote:
> For your mouse issues, have you tried setting the mouse to USB tablet
> mode? Add the following to your domain config, and restart it:
>
> usb=1
> usbdevice='tablet'
>
> That will fix your mouse tracking problem under VNC.