I am very new to Xen. I have used VMware and VirtualBox *desktop* products in the past. I am also familiar with zones and LDoms. I realize that none of that is particularly relevant to Xen, but I think it helps get a lot of the *really* basic stuff out of the way.

I followed the wiki at http://hub.opensolaris.org/bin/view/Community+Group+xen/2008_11_dom0 to set up dom0. I believe that is totally functional (minus some milestone/xvm issues I had).

I followed the wiki at http://hub.opensolaris.org/bin/view/Community+Group+xen/virtinstall to set up a couple of domUs. I have a PV OpenSolaris box, an HVM d0ze 2k8 box, and an HVM S10u8 box. The S10 box can be live migrated between two dom0s, which I am pretty excited about. (The other two are on zvols on rpool.)

I wanted to try to move/clone the OpenSolaris box to have its disks on NFS (VMDK?), but I am really going off the deep end there. I got vdiskadm figured out enough to supposedly migrate the disk to VMDK, but I haven't the foggiest idea how to "switch" it. virsh seems to indicate that you aren't supposed to edit the XML, even though there is an "edit" option that appears to do just that :-/

I had asked in ##xen and they said I shouldn't be using VMDK, I should be using tap:aio. As typical of open source support, instead of giving me the answer they led me down another path. They also suggested that I use "xm" instead of virsh. Our man pages list virsh as the preferred mechanism and xm as the legacy one. The best I can do to "extract" the configuration of the VM from xm is "xm list -l DOMAIN"... but that's not really setting=value like I have seen elsewhere. I am not sure if tap:aio is supported on OpenSolaris, nor am I sure if it's supported over NFS. I also need to change the network, as my dom0 will be on a "private" network and I want my domUs to be on a "service" network. I usually use tagged VLANs (vnics) with my zones, but I haven't figured that out either.

*** First question: Does OpenSolaris support tap:aio? The ##xen people say that's the best performing "file"-based virtual disk. Does it work over NFS?

*** Next question: How do I find out what "drivers" are supported for disks, for example? There are a couple of examples on the wiki, but I didn't see anything in the man pages, and I didn't see any help or list option to tell me what it would support. It seems like there should be a list somewhere that lists driver/subdriver and maybe some description. At the very least, maybe there is a way to list a library directory or something?

*** Next question: virt-manager is broken in nv_126 (known bug). I symlinked the vte module per the workarounds in the bug, and now it opens, but it's getting a libgnomebreakpad error (which I think is safe to ignore). I was able to change the "boot device" (net/disk) for my HVM S10 box, but the GUI seems very limited. I can't change the disk or network settings. I can delete the disk and re-add it, but it doesn't appear to do it right.

    # s10-test (HVM) disk
    (device
        (tap
            (uuid f60d38d0-d0cc-f1ab-a437-435238e924cb)
            (bootable 1)
            (devid 768)
            (dev hda:disk)
            (uname tap:vdisk:/xendisks/s10-test/disk0)
            (mode w)
        )
    )

    # OS-test (PV) disk
    (device
        (vbd
            (uuid b9805e15-b746-fb41-d9c6-eb6bcc0cab91)
            (bootable 1)
            (driver paravirtualised)
            (dev xvda)
            (uname file:/xendisks/neoga-test1/neoga-test1-disk0-flat.vmdk)
            (mode w)
        )
    )

*** Next question: Also in virt-manager, I tried to remove and re-add the network on the proper vnic, but it's greyed out.
I tried to add it with virsh, but it really doesn't like me:

    neoga# virsh help attach-interface
      NAME
        attach-interface - attach network interface

      SYNOPSIS
        attach-interface <domain> <type> <source> [<target>] [<mac>] [<script>]
            [--capped-bandwidth <string>] [--vlanid <number>]

      DESCRIPTION
        Attach new network interface.

      OPTIONS
        <domain>          domain name, id or uuid
        <type>            network interface type
        <source>          source of network interface
        <target>          target network name
        <mac>             MAC address
        <script>          script used to bridge network interface
        --capped-bandwidth <string>  bandwidth limit for this interface
        --vlanid <number>            VLAN ID attached to this interface

    neoga# virsh attach-interface neoga-test1 ethernet aggr0 --vlanid 634
    error: No support ethernet in command 'attach-interface'

    neoga# virsh attach-interface neoga-test1 vif aggr0 --vlanid 634
    error: No support vif in command 'attach-interface'

    neoga# virsh attach-interface neoga-test1 vif-vnic aggr0 --vlanid 634
    error: No support vif-vnic in command 'attach-interface'

    neoga# virsh attach-interface neoga-test1 vnic aggr0 --vlanid 634
    error: No support vnic in command 'attach-interface'

----- So? What do I put for type? I can't find a list of acceptable types in the man pages, as with disks. I am sure I am just not looking in the right place :)

*** Next question: I was able to presumably add it with "xm", but it doesn't look like it's bridged to aggr0 anymore? The xm list -l doesn't have the (bridge aggr0).

    neoga# xm network-attach neoga-test1 vlanid=634

*** Next question: virsh seems to have some sort of "remote" option, but apparently (from the libvirt.org page) it requires some extra setup. Before I go too far down that road, has that been wrappered or automated in any way? I would assume not? Have we got any documentation on OpenSolaris specifics, or can we mostly follow the Linux docs?

*** Next question: Does Xen on OS support virtual fibre channel?

I am sure I will have a lot more as I go through. I am planning to deploy a "production" infrastructure into a "private cloud" mostly based on OpenSolaris machines.

Tommy
On 9 Nov 2009, at 5:40pm, Tommy McNeely wrote:

> *** Next question: Also in virt-manager, I tried to remove and
> re-add the network on the proper vnic, but it's greyed out. I tried
> to add it with virsh, but it really doesn't like me:
>
> [virsh help attach-interface output snipped]
>
> neoga# virsh attach-interface neoga-test1 ethernet aggr0 --vlanid 634
> error: No support ethernet in command 'attach-interface'
>
> [the vif, vif-vnic, and vnic attempts fail the same way]
>
> ----- So? What do I put for type? I can't find a list of acceptable
> types in the man pages, as with disks. I am sure I am just not
> looking in the right place :)

bridge.

> *** Next question: I was able to presumably add it with "xm", but it
> doesn't look like it's bridged to aggr0 anymore? The xm list -l
> doesn't have the (bridge aggr0).
>
> neoga# xm network-attach neoga-test1 vlanid=634

If no bridge is specified it will attempt to guess (by looking at the output of dladm show-link).
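For the record, with "bridge" as the type the working invocations would look something like the following (a sketch, reusing the domain, link, and VLAN ID from above; the bridge= option to xm network-attach is an assumption based on standard xm usage, not confirmed in this thread):

    neoga# virsh attach-interface neoga-test1 bridge aggr0 --vlanid 634
    neoga# xm network-attach neoga-test1 bridge=aggr0 vlanid=634

With the bridge named explicitly, xm should not have to guess from dladm show-link.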
OK, "bridge"... the reason I was using "ethernet" is:

    <interface type='ethernet'>
      <mac address='00:16:36:0b:a7:18'/>
      <script path='/usr/lib/xen/scripts/vif-vnic'/>
      <target dev='vif-1.0'/>
    </interface>
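Presumably the bridge-flavoured equivalent of that element would look something like this (a sketch based on generic libvirt interface XML, reusing the MAC address from above; whether the Solaris libvirt also wants the vif-vnic script line here is an assumption):

    <interface type='bridge'>
      <source bridge='aggr0'/>
      <mac address='00:16:36:0b:a7:18'/>
    </interface>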
Tommy McNeely wrote:

> I am very new to Xen. I have used VMware and VirtualBox *desktop*
> products in the past. [background and wiki links snipped]
>
> I had asked in ##xen

##xen is pretty linux-specific. You're better off asking Solaris dom0 questions on #solaris-xen on oftc.net.

> and they said I shouldn't be using VMDK, I should be using tap:aio.
> [snip] I am not sure if tap:aio is supported on OpenSolaris, nor am
> I sure if it's supported over NFS. [snip]
>
> *** First question: Does OpenSolaris support tap:aio? The ##xen
> people say that's the best performing "file"-based virtual disk.
> Does it work over NFS?

No, Solaris uses a different blktap implementation based on the VirtualBox disk code (tap:vdisk vs. tap:aio). The VirtualBox code has much better support for vmdk, vhd, and vdi files. vdiskadm also provides zfs-like snapshot, rollback, etc. functionality for file-based disks. You can also use it to move between block- and file-based disks.

You can also move a file from VirtualBox or VMware and use it on Solaris on Xen (assuming the OS is somewhat flexible with the H/W emulation used).

tap:vdisk works fine over NFS. Remember that everything has to be read/writeable by user xvm. We do need to improve the performance some more, though.

Although, if you're looking towards a production-ish environment, I would suggest using iscsi. We also have some modifications which make it possible to easily migrate a guest on an iscsi disk:

    phy:iscsi:/alias/<lun>/<iscsi-alias>
    phy:iscsi:/static/<server IP>/<lun>/<target id>
    phy:iscsi:/discover/<lun>/<alias or target id>

e.g.
    virt-install -p -n nevada -l /export/snv108.iso --nographics \
        --noautoconsole -r 1024 \
        --disk path=/static/192.168.0.70/0/iqn.1986-03.com.sun:02:d5ab1c26-0a7a-c6b4-98f8-d6d267eb2561,driver=phy,subdriver=iscsi

    virsh attach-disk nevada /static/10.6.70.64/0/iqn.1986-03.com.sun:02:01accb27-35a3-e45f-882d-dc4e48c5685d xvdb --driver phy --subdriver iscsi

> *** Next question: How do I find out what "drivers" are supported
> for disks, for example? There are a couple of examples on the wiki,
> but I didn't see anything in the man pages, and I didn't see any
> help or list option to tell me what it would support. It seems like
> there should be a list somewhere that lists driver/subdriver and
> maybe some description. At the very least, maybe there is a way to
> list a library directory or something?

For file-based disks, you should use tap:vdisk.

> *** Next question: virt-manager is broken in nv_126 (known bug)...
> I was able to change the "boot device" (net/disk) for my HVM S10
> box, but the GUI seems very limited. I can't change the disk or
> network settings. I can delete the disk and re-add it, but it
> doesn't appear to do it right.

A different group works on virt-manager, so we can't help you very much there... You need to use tap:vdisk for the vdisk, though. e.g.

    virsh attach-disk nevada /xendisks/s10-test/disk0 xvda --driver tap --subdriver vdisk

    virt-install -p -n nevada -l /export/snv108.iso --nographics \
        --noautoconsole -r 1024 \
        --disk path=/export/nevada/disk0,size=10,driver=tap,subdriver=vdisk,format=vdi

> *** Next question: virsh seems to have some sort of "remote"
> option, but apparently (from the libvirt.org page) it requires some
> extra setup. Before I go too far down that road, has that been
> wrappered or automated in any way?

We disable that by default today... I believe you would have to rebuild libvirt to enable the remote support, but I'm not 100% sure.

> *** Next question: Does Xen on OS support virtual fibre channel?

If you mean NPIV, then yes. You can see the hotplug script

    /usr/lib/xen/scripts/vbd-npiv

for some details. I don't have any H/W which supports it, so I don't know that much about it.

MRJ
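Since vdiskadm comes up repeatedly above, here is a minimal sketch of the file-based vdisk workflow MRJ describes, assuming the create/snapshot/rollback subcommand syntax from the vdiskadm man page (the path, size, format option, and snapshot name are invented for illustration):

    # create a 10 GB sparse vmdk-format vdisk (type argument assumed)
    vdiskadm create -s 10g -t vmdk:sparse /xendisks/neoga-test1/disk0

    # take a snapshot before a risky change, then roll back if needed
    vdiskadm snapshot /xendisks/neoga-test1/disk0@pre-upgrade
    vdiskadm rollback /xendisks/neoga-test1/disk0@pre-upgrade

The resulting vdisk would then be handed to a guest via tap:vdisk, e.g. with the virsh attach-disk invocation shown above.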
>> *** Next question: Does Xen on OS support virtual fibre channel?
>
> If you mean NPIV, then yes. You can see the hotplug script
>
>     /usr/lib/xen/scripts/vbd-npiv
>
> for some details. I don't have any H/W which supports it, so I
> don't know that much about it.

Here's the RFE:

    http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6713736

Unfortunately, all the info is in the private fields. Here is the relevant info:

    disk = ['phy:npiv:210100e08ba426ed/c0007d2794c38743/201600a0b82a3846/0,xvdb,w']

This is: virtual port c0007d2794c38743, on phy port 210100e08ba426ed, bring up lun 0 on target 201600a0b82a3846.

Something like the following should work:

    virsh attach-disk nevada /210100e08ba426ed/c0007d2794c38743/201600a0b82a3846/0 xvdb --driver phy --subdriver npiv

MRJ
THANKS for responding. I have had my head spinning lately, so this one sorta slipped away... let me respond within (vastly snipped).

> ##xen is pretty linux-specific. You're better off asking Solaris
> dom0 questions on #solaris-xen on oftc.net.

Thanks, I have joined there. I probably just missed where that was listed ;)

-----

> No, Solaris uses a different blktap implementation based on the
> VirtualBox disk code (tap:vdisk vs. tap:aio). [snip]
>
> tap:vdisk works fine over NFS. Remember that everything has to be
> read/writeable by user xvm. We do need to improve the performance
> some more, though.

Is the wiki still correct that tap:vdisk with VMDK format is the "best performing" (and most compatible)?

> Although, if you're looking towards a production-ish environment,
> I would suggest using iscsi. We also have some modifications which
> make it possible to easily migrate a guest on an iscsi disk:
>
>     phy:iscsi:/alias/<lun>/<iscsi-alias>
>     phy:iscsi:/static/<server IP>/<lun>/<target id>
>     phy:iscsi:/discover/<lun>/<alias or target id>

We will look at iscsi, but past experience has not been real good with it. Our NAS backend is a 7410c, which does appear to support iscsi, so it's something we can certainly look at trying again. I am looking for something that is network based for (live) migration compatibility.

>     virt-install -p -n nevada -l /export/snv108.iso --nographics \
>         --noautoconsole -r 1024 \
>         --disk path=/static/192.168.0.70/0/iqn.1986-03.com.sun:02:d5ab1c26-0a7a-c6b4-98f8-d6d267eb2561,driver=phy,subdriver=iscsi

Wow, that's nearly as obnoxious looking as FC WWNs ;)

>     virsh attach-disk nevada /static/10.6.70.64/0/iqn.1986-03.com.sun:02:01accb27-35a3-e45f-882d-dc4e48c5685d xvdb --driver phy --subdriver iscsi

Thanks.

>> *** Next question: How do I find out what "drivers" are supported
>> for disks, for example? [snip]
>
> For file-based disks, you should use tap:vdisk.

OK, but that still doesn't explain the most basic question of how to get the list of drivers and sub-drivers and how to use them... maybe a wiki page suggestion? :)

(snipped virt-manager stuff, thanks)

>> *** Next question: Does Xen on OS support virtual fibre channel?
>
> If you mean NPIV, then yes. You can see the hotplug script
> /usr/lib/xen/scripts/vbd-npiv for some details. I don't have any
> H/W which supports it, so I don't know that much about it.

Yes, I have been going round and round with NPIV. That is a whole separate thread :) See bug id #6900002 :)

~tommy
Tommy McNeely wrote:

>> tap:vdisk works fine over NFS. Remember that everything has to be
>> read/writeable by user xvm. We do need to improve the performance
>> some more, though.
>
> Is the wiki still correct that tap:vdisk with VMDK format is the
> "best performing" (and most compatible)?

For file-based disks, yes. Block will always be faster. File-based disks are more flexible.

>> Although, if you're looking towards a production-ish environment,
>> I would suggest using iscsi. [snip]
>
> We will look at iscsi, but past experience has not been real good
> with it. Our NAS backend is a 7410c, which does appear to support
> iscsi, so it's something we can certainly look at trying again. I
> am looking for something that is network based for (live) migration
> compatibility.

If you can get a decent performing iscsi backend, it works very well. You can use SAN too... The framework is very simple to extend. It would be easy to extend it to have something like

    phy:san:/wwn/lun

and have the dom0 figure out the /dev/dsk mapping.

>> [virt-install iscsi example snipped]
>
> Wow, that's nearly as obnoxious looking as FC WWNs ;)

That's iscsi for you :-)

>>> *** Next question: How do I find out what "drivers" are supported
>>> for disks, for example? [snip]
>> For file-based disks, you should use tap:vdisk.
>
> OK, but that still doesn't explain the most basic question of how
> to get the list of drivers and sub-drivers and how to use them...
> maybe a wiki page suggestion? :)

Yes, we should add something there... Here's what's there as of today for Solaris on Xen:

    file:/
    tap:vdisk:/
    phy:/
    phy:iscsi:/
    phy:npiv:/
    phy:zvol:/

We have infrastructure support for phy:san:/ but don't have a hotplug script for it yet. If you're interested, we can add that in easily enough. The iscsi support was done for the www.sun.com folks originally.

> Yes, I have been going round and round with NPIV. That is a whole
> separate thread :) See bug id #6900002 :)

Yes, I see that.

Thanks,
MRJ
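To tie MRJ's list back to the configuration syntax seen earlier in the thread, hypothetical xm-style disk lines for each form might look like this (a sketch: the paths and device names are invented, the iscsi and npiv identifiers reuse MRJ's examples above, and the phy:zvol: path form is an assumption based on the standard /dev/zvol/dsk layout rather than a documented example):

    disk = ['file:/xendisks/guest/disk0.img,xvda,w']           # raw file
    disk = ['tap:vdisk:/xendisks/guest/disk0,xvda,w']          # vmdk/vhd/vdi vdisk
    disk = ['phy:/dev/dsk/c0t1d0s0,xvda,w']                    # block device
    disk = ['phy:iscsi:/static/192.168.0.70/0/iqn.1986-03.com.sun:02:d5ab1c26-0a7a-c6b4-98f8-d6d267eb2561,xvda,w']
    disk = ['phy:npiv:210100e08ba426ed/c0007d2794c38743/201600a0b82a3846/0,xvdb,w']
    disk = ['phy:zvol:/dev/zvol/dsk/rpool/guestvol,xvda,w']    # zvol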