jkeil,

Thanks for the hint. I tried it with no success. Here is what I ran:

# xm list
Name                 ID   Mem  VCPUs  State   Time(s)
Domain-0              0  5547     16  r-----  63198.4
solarisHVM1          23 12288      1  r-----   1265.0
solarisPV1           21 12288      4  -b----    275.3

# xm block-attach solarisHVM1 file:/domains/solarisHVM1/disk1.img hdc w
# xm block-list solarisHVM1
Vdev  BE handle state evt-ch ring-ref BE-path
768    0      0     1     -1       -1 /local/domain/0/backend/vbd/23/768
5632   0      0     1     -1       -1 /local/domain/0/backend/vbd/23/5632

However, when I search for the new device on the solarisHVM1 domain, I cannot see it:

solarisHVM1# ls /dev/dsk
c0d0p0  c0d0s1   c0d0s15  c0d0s7    c1t0d0p3   c1t0d0s12  c1t0d0s4
c0d0p1  c0d0s10  c0d0s2   c0d0s8    c1t0d0p4   c1t0d0s13  c1t0d0s5
c0d0p2  c0d0s11  c0d0s3   c0d0s9    c1t0d0s0   c1t0d0s14  c1t0d0s6
c0d0p3  c0d0s12  c0d0s4   c1t0d0p0  c1t0d0s1   c1t0d0s15  c1t0d0s7
c0d0p4  c0d0s13  c0d0s5   c1t0d0p1  c1t0d0s10  c1t0d0s2   c1t0d0s8
c0d0s0  c0d0s14  c0d0s6   c1t0d0p2  c1t0d0s11  c1t0d0s3   c1t0d0s9

solarisHVM1# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 4565 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
Specify disk (enter its number):

I ran devfsadm and still do not see the device on solarisHVM1. I have to note, however, that the steps I described in my original posting do work on PV domains, so I guess this is an issue only with HVM domains. Is there anything else I might be doing wrong on the HVM domain?

thanks,
J. Cardozo

> I'm running snv_90 and created a Solaris 10 u4 HVM (solarisHVM1) guest domain.
> I'm following this procedure to add a block device to my guest domain:
> # xm block-attach solarisHVM1 file:/dom1/disk1.img 2 w

For an HVM domU, you must use something like hda, hdb, hdc, or hdd as the FrontDev, not "2". Try:

xm block-attach solarisHVM1 file:/dom1/disk1.img hdc w

-------- Original Message --------
Subject: Unable to block-attach to HVM domain
Date: Wed, 18 Jun 2008 09:29:29 -0700
From: Jairo.Cardozo@Sun.COM
To: xen-discuss@opensolaris.org

Hi,

I'm running snv_90 and created a Solaris 10 u4 HVM (solarisHVM1) guest domain.
I'm following this procedure to add a block device to my guest domain:

# xm block-list solarisHVM1
Vdev  BE handle state evt-ch ring-ref BE-path
768    0      0     1     -1       -1 /local/domain/0/backend/vbd/9/768

Create the image file:
# dd if=/dev/zero of=/dom1/disk1.img bs=1024k seek=8192 count=1

Add the block device:
# xm block-attach solarisHVM1 file:/dom1/disk1.img 2 w

List block devices:
# xm block-list solarisHVM1
Vdev  BE handle state evt-ch ring-ref BE-path
768    0      0     1     -1       -1 /local/domain/0/backend/vbd/10/768
2      0      0     1     -1       -1 /local/domain/0/backend/vbd/10/2

Some output from # xm list solarisHVM1 -l:

(device
    (vbd
        (uname file:/domains/solarisHVM1/root.img)
        (uuid d843dcb0-3a20-ef2d-919d-6247a7a9dd21)
        (mode w)
        (dev hda:disk)
        (backend 0)
        (bootable 1)
    )
)
(device
    (vbd
        (uname file:/dom1/disk1.img)
        (uuid 6347a698-d345-0dba-709e-5155a0e60c4c)
        (mode w)
        (dev 2:disk)
        (backend 0)
        (bootable 0)
    )
)

After running devfsadm in my guest domain, the new device does not show up:

# ls /dev/dsk
c0d0p0  c0d0s1   c0d0s15  c0d0s7    c1t0d0p3   c1t0d0s12  c1t0d0s4
c0d0p1  c0d0s10  c0d0s2   c0d0s8    c1t0d0p4   c1t0d0s13  c1t0d0s5
c0d0p2  c0d0s11  c0d0s3   c0d0s9    c1t0d0s0   c1t0d0s14  c1t0d0s6
c0d0p3  c0d0s12  c0d0s4   c1t0d0p0  c1t0d0s1   c1t0d0s15  c1t0d0s7
c0d0p4  c0d0s13  c0d0s5   c1t0d0p1  c1t0d0s10  c1t0d0s2   c1t0d0s8
c0d0s0  c0d0s14  c0d0s6   c1t0d0p2  c1t0d0s11  c1t0d0s3   c1t0d0s9

I tried different options, including adding the device while the domain was down and rebooting the solarisHVM1 domain, but nothing seems to work. Before running snv_90 I was running snv_82, where I was affected by bug 6656611, which was fixed in snv_85. However, I still cannot get it to work.

Any ideas, hints, suggestions, or corrections (if I'm doing something wrong) on how to solve this issue are much appreciated.

Thanks,
Jairo Cardozo
--
+-----------------------------------------------+
Jairo Cardozo H
Benchmark Engineer
Sun Solution Centers
Jairo.Cardozo@sun.com
Ext: 87013   Phone: 650-786-7013
+-----------------------------------------------+
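A note on the Vdev numbers that xm block-list prints above: for IDE-style FrontDev names they are the classic Linux block-device numbers, (major << 8) | minor, which is why hda shows up as 768 and hdc as 5632. A minimal sketch of that encoding (the lookup table and the vdev helper are my own illustration, not an xm API):

```python
# Vdev numbers for IDE-style FrontDev names follow traditional Linux
# device numbering: hda/hdb live on major 3, hdc/hdd on major 22, and
# each whole-disk minor is a multiple of 64.
IDE_DEVICES = {"hda": (3, 0), "hdb": (3, 64), "hdc": (22, 0), "hdd": (22, 64)}

def vdev(frontdev: str) -> int:
    """Return (major << 8) | minor for an IDE-style device name."""
    major, minor = IDE_DEVICES[frontdev]
    return (major << 8) | minor

print(vdev("hda"))  # 768, matching /local/domain/0/backend/vbd/23/768
print(vdev("hdc"))  # 5632, matching /local/domain/0/backend/vbd/23/5632
```

The bare "2" that failed is simply not a valid IDE device number, which is why it produced a backend path ending in /2 that the HVM guest never saw.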
On SNV90 HVM DomU at SNV92 Dom0:

# xm block-attach file:/export/home/images/disk1.img hdc w   (at Dom0)

New format report at DomU:

bash-3.2# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 3129 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c1d0 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@0,0
Specify disk (enter its number): 1
selecting c1d0
Controller working list found
[disk formatted, defect list found]

format> fdisk
Total disk size is 2048 cylinders
Cylinder size is 4096 (512 byte) blocks

                                    Cylinders
     Partition   Status    Type        Start   End   Length    %
     =========   ======    ========    =====   ===   ======   ===
         1                 Solaris2        1  2047     2047   100

Then check:

bash-3.2# ls /dev/rdsk/c1d0*
/dev/rdsk/c1d0p0   /dev/rdsk/c1d0s0   /dev/rdsk/c1d0s13  /dev/rdsk/c1d0s4  /dev/rdsk/c1d0s9
/dev/rdsk/c1d0p1   /dev/rdsk/c1d0s1   /dev/rdsk/c1d0s14  /dev/rdsk/c1d0s5
/dev/rdsk/c1d0p2   /dev/rdsk/c1d0s10  /dev/rdsk/c1d0s15  /dev/rdsk/c1d0s6
/dev/rdsk/c1d0p3   /dev/rdsk/c1d0s11  /dev/rdsk/c1d0s2   /dev/rdsk/c1d0s7
/dev/rdsk/c1d0p4   /dev/rdsk/c1d0s12  /dev/rdsk/c1d0s3   /dev/rdsk/c1d0s8

Attempted newfs, which worked fine at SNV90 PV DomU (after
# xm block-attach file:/export/home/images/disk1.img 2 w
at Dom0), but now at SNV90 HVM DomU it reports for any slice:

bash-3.2# newfs /dev/rdsk/c1d0s0
/dev/rdsk/c1d0s0: No such device or address
Command:

# xm block-attach file:/export/home/images/disk1.img hdb w

generates at HVM DomU:

bash-3.2# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 3129 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c0d1 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0

bash-3.2# ls /dev/rdsk/c0d1*
/dev/rdsk/c0d1p0   /dev/rdsk/c0d1s0   /dev/rdsk/c0d1s13  /dev/rdsk/c0d1s4  /dev/rdsk/c0d1s9
/dev/rdsk/c0d1p1   /dev/rdsk/c0d1s1   /dev/rdsk/c0d1s14  /dev/rdsk/c0d1s5
/dev/rdsk/c0d1p2   /dev/rdsk/c0d1s10  /dev/rdsk/c0d1s15  /dev/rdsk/c0d1s6
/dev/rdsk/c0d1p3   /dev/rdsk/c0d1s11  /dev/rdsk/c0d1s2   /dev/rdsk/c0d1s7
/dev/rdsk/c0d1p4   /dev/rdsk/c0d1s12  /dev/rdsk/c0d1s3   /dev/rdsk/c0d1s8

bash-3.2# newfs /dev/rdsk/c0d1s7
/dev/rdsk/c0d1s7: No such device or address
I ran:

# newfs /dev/rdsk/c0d2s0

It worked on the PV DomU for slice 0. Then:

# mount /dev/dsk/c0d2s0 /export/home1

and I edited /etc/vfstab. I followed the instructions at http://multiboot.solaris-x86.org/iv/3.html

This message posted from opensolaris.org
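For reference, making a mount like that persistent means adding an entry to /etc/vfstab; a sketch of what such a line could look like for this device (the seven fields are device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options):

```
#device to mount   device to fsck      mount point     FS type  fsck  mount at  options
#                                                               pass  boot
/dev/dsk/c0d2s0    /dev/rdsk/c0d2s0    /export/home1   ufs      2     yes       -
```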
Finally, succeeded on SNV90 HVM DomU:

bash-3.2# newfs /dev/rdsk/c0d1s2
newfs: construct a new file system /dev/rdsk/c0d1s2: (y/n)? y
/dev/rdsk/c0d1s2: 8376320 sectors in 2045 cylinders of 128 tracks, 32 sectors
        4090.0MB in 79 cyl groups (26 c/g, 52.00MB/g, 6400 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 106560, 213088, 319616, 426144, 532672, 639200, 745728, 852256, 958784,
 7350464, 7456992, 7563520, 7670048, 7776576, 7883104, 7989632, 8096160,
 8202688, 8309216

bash-3.2# mkdir /export/home1
bash-3.2# mount /dev/dsk/c0d1s2 /export/home1
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0d0s0        8.7G   4.8G   3.8G    57%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   981M   1.0M   980M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                       8.7G   4.8G   3.8G    57%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   980M    44K   980M     1%    /tmp
swap                   980M    40K   980M     1%    /var/run
/dev/dsk/c0d0s7         14G    15M    14G     1%    /export/home
/dev/dsk/c0d1s2        3.9G   4.0M   3.9G     1%    /export/home1

I cannot understand from where the slice number for UFS creation appears. I was just looping over 0, 1, 2 - success (?)
Snapshot attached.
OK. The magic number is caught. It's 2.

bash-3.2# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 3129 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c0d1 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
       2. c1d0 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@0,0

bash-3.2# newfs /dev/rdsk/c1d0s2
newfs: /dev/rdsk/c1d0s2 last mounted as /export/home1
newfs: construct a new file system /dev/rdsk/c1d0s2: (y/n)? y
/dev/rdsk/c1d0s2: 8376320 sectors in 2045 cylinders of 128 tracks, 32 sectors
        4090.0MB in 79 cyl groups (26 c/g, 52.00MB/g, 6400 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 106560, 213088, 319616, 426144, 532672, 639200, 745728, 852256, 958784,
 7350464, 7456992, 7563520, 7670048, 7776576, 7883104, 7989632, 8096160,
 8202688, 8309216

bash-3.2# mkdir /export/home2
bash-3.2# mount /dev/dsk/c1d0s2 /export/home2
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0d0s0        8.7G   4.8G   3.8G    57%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   974M  1000K   973M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                       8.7G   4.8G   3.8G    57%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   973M    44K   973M     1%    /tmp
swap                   973M    40K   973M     1%    /var/run
/dev/dsk/c0d0s7         14G    15M    14G     1%    /export/home
/dev/dsk/c0d1s2        3.9G   4.0M   3.9G     1%    /export/home1
/dev/dsk/c1d0s2        3.9G   4.0M   3.9G     1%    /export/home2

******************
c0d1s2 for hdb
c1d0s2 for hdc
******************

What would it be for hdd? I believe /dev/rdsk/c1d1s2 (second controller, second drive).
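The pattern above can be written down directly: the Xen IDE FrontDev name maps to a Solaris cmdk node where the controller number is the emulated IDE channel and the disk number is the master/slave position on that channel. A hypothetical helper (solaris_disk is my own name, not a real API) that reproduces the mapping observed in this thread, including the hdd guess:

```python
# hda/hdb sit on IDE channel 0 (Solaris controller c0), hdc/hdd on
# channel 1 (c1); the master is d0 and the slave is d1 on each channel.
def solaris_disk(frontdev: str) -> str:
    """Map a Xen IDE FrontDev name (hda..hdd) to a Solaris cXdY disk node."""
    channel, position = divmod("abcd".index(frontdev[-1]), 2)
    return f"c{channel}d{position}"

for name in ("hda", "hdb", "hdc", "hdd"):
    print(name, "->", solaris_disk(name))
# hdb -> c0d1 and hdc -> c1d0 match the transcripts above;
# hdd -> c1d1 is the (unverified) guess for the second slave.
```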
> On SNV90 HVM DomU at SNV92 Dom0:
> # xm block-attach file:/export/home/images/disk1.img hdc w (at Dom0)

Is that disk1.img file an empty raw disk image file that you just created with mkfile (or dd)?

> format> fdisk
> ...
>      Partition   Status    Type        Start   End   Length    %
>      =========   ======    ========    =====   ===   ======   ===
>          1                 Solaris2        1  2047     2047   100

Did you use format's "partition" menu to define slices / a partition table for the disk, and did you write a SunOS disk label to the new disk using format's "label" command?

> Attempted newfs, which worked fine at SNV90 PV DomU
> now at SNV90 HVM DomU reports for any slice:
> bash-3.2# newfs /dev/rdsk/c1d0s0
> /dev/rdsk/c1d0s0: No such device or address

It seems that you skipped the step to partition / label the new disk. In this case the kernel constructs a default disk label, which has the "s2" slice that maps the whole Solaris fdisk partition. All other slices are empty.

You can print the current active partition information with the prtvtoc command:

# prtvtoc /dev/rdsk/c1d0s2
or
# prtvtoc /dev/rdsk/c1d0p0
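The default-label behavior described here can be checked arithmetically: with no explicit partitioning, only the backup "s2" slice exists, spanning every accessible cylinder of the Solaris fdisk partition. A quick sanity check using the geometry reported by format and prtvtoc in this thread:

```python
# Geometry of the attached 4 GB image, as reported by format/prtvtoc:
tracks_per_cyl = 128
sectors_per_track = 32
accessible_cyls = 2045                    # 2047 total minus 2 reserved

sectors_per_cyl = tracks_per_cyl * sectors_per_track   # 4096 sectors/cylinder
s2_sectors = accessible_cyls * sectors_per_cyl         # whole-disk s2 slice

print(s2_sectors)                         # 8376320, as newfs and prtvtoc report
print(s2_sectors * 512 // (1024 * 1024))  # 4090 MB, the "4090.0MB" from newfs
```

This is why newfs succeeded only on s2: every other slice had zero length until the disk was partitioned and labeled.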
> > On SNV90 HVM DomU at SNV92 Dom0:
> > # xm block-attach file:/export/home/images/disk1.img hdc w (at Dom0)
>
> Is that disk1.img file an empty raw disk image file that you just
> created with mkfile (or dd)?

Yes, created with dd.

> Did you use format's "partition" menu to define slices / a partition
> table for the disk, and did you write a SunOS disk label to the new
> disk using format's "label" command?

No, I didn't consult the manuals. I realize I didn't manage it the best way. A block-attached device is a nice option for learning the stuff you wrote above completely safely.

> It seems that you skipped the step to partition / label the new disk.
> In this case the kernel constructs a default disk label, which has
> the "s2" slice that maps the whole Solaris fdisk partition. All other
> slices are empty.

Yes, that's exactly what happened. Doing a loop search for the right slice, I found the s2 slices.

> You can print the current active partition information with the
> prtvtoc command:
>
> # prtvtoc /dev/rdsk/c1d0s2
> # prtvtoc /dev/rdsk/c1d0p0

Thank you.
Jairo.Cardozo@Sun.COM
2008-Jun-27 16:53 UTC
Re: [Fwd: Unable to block-attach to HVM domain]
Boris,

Check that slice 7 on the new disk has cylinders: run format, select disk 1, then partition, print.

Have you tried newfs on /dev/rdsk/c0d1s0 or /dev/rdsk/c0d1s2?

Good luck,
J.C.
--
+-----------------------------------------------+
Jairo Cardozo H
Benchmark Engineer
Sun Solution Centers
Jairo.Cardozo@sun.com
Ext: 87013   Phone: 650-786-7013
+-----------------------------------------------+
partition> print
Current partition table (original):
Total disk cylinders available: 2045 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1 unassigned    wm       0               0         (0/0/0)          0
  2     backup    wu       0 - 2044        3.99GB    (2045/0/0) 8376320
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm       0               0         (0/0/0)          0
  8       boot    wu       0 - 0           2.00MB    (1/0/0)       4096
  9 alternates    wm       1 - 2           4.00MB    (2/0/0)       8192

partition> 3
Part      Tag    Flag     Cylinders        Size            Blocks
  3 unassigned    wm       0               0         (0/0/0)          0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 1022c

partition> 4
Part      Tag    Flag     Cylinders        Size            Blocks
  4 unassigned    wm       0               0         (0/0/0)          0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1023
Enter partition size[0b, 0c, 1023e, 0.00mb, 0.00gb]: 1022c

partition> label
Ready to label disk, continue?
y

partition> quit
format> quit

bash-3.2# prtvtoc /dev/rdsk/c1d0s2
* /dev/rdsk/c1d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*      32 sectors/track
*     128 tracks/cylinder
*    4096 sectors/cylinder
*    2047 cylinders
*    2045 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0   8376320   8376319   /export/home2
       3      0    00          0   4186112   4186111
       4      0    00    4190208   4186112   8376319
       8      1    01          0      4096      4095
       9      9    00       4096      8192     12287

bash-3.2# prtvtoc /dev/rdsk/c1d0s3
* /dev/rdsk/c1d0s3 partition map
*
* Dimensions:
*     512 bytes/sector
*      32 sectors/track
*     128 tracks/cylinder
*    4096 sectors/cylinder
*    2047 cylinders
*    2045 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0   8376320   8376319   /export/home2
       3      0    00          0   4186112   4186111
       4      0    00    4190208   4186112   8376319
       8      1    01          0      4096      4095
       9      9    00       4096      8192     12287

bash-3.2# prtvtoc /dev/rdsk/c1d0s4
* /dev/rdsk/c1d0s4 partition map
*
* Dimensions:
*     512 bytes/sector
*      32 sectors/track
*     128 tracks/cylinder
*    4096 sectors/cylinder
*    2047 cylinders
*    2045 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0   8376320   8376319   /export/home2
       3      0    00          0   4186112   4186111
       4      0    00    4190208   4186112   8376319
       8      1    01          0      4096      4095
       9      9    00       4096      8192     12287

bash-3.2# newfs /dev/rdsk/c1d0s3
newfs: /dev/rdsk/c1d0s3 last mounted as /export/home2
newfs: construct a new file system /dev/rdsk/c1d0s3: (y/n)?
y
/dev/rdsk/c1d0s3: 4186112 sectors in 1022 cylinders of 128 tracks, 32 sectors
        2044.0MB in 45 cyl groups (23 c/g, 46.00MB/g, 11264 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 94272, 188512, 282752, 376992, 471232, 565472, 659712, 753952, 848192,
 3298432, 3392672, 3486912, 3581152, 3675392, 3769632, 3863872, 3958112,
 4052352, 4146592

bash-3.2# newfs /dev/rdsk/c1d0s4
newfs: construct a new file system /dev/rdsk/c1d0s4: (y/n)? y
/dev/rdsk/c1d0s4: 4186112 sectors in 1022 cylinders of 128 tracks, 32 sectors
        2044.0MB in 45 cyl groups (23 c/g, 46.00MB/g, 11264 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 94272, 188512, 282752, 376992, 471232, 565472, 659712, 753952, 848192,
 3298432, 3392672, 3486912, 3581152, 3675392, 3769632, 3863872, 3958112,
 4052352, 4146592

bash-3.2# mkdir /export/home03
bash-3.2# mkdir /export/home04
bash-3.2# mount /dev/dsk/c1d0s3 /export/home03
bash-3.2# mount /dev/dsk/c1d0s4 /export/home04
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0d0s0        8.7G   4.8G   3.8G    57%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   974M   1.0M   973M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                       8.7G   4.8G   3.8G    57%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   973M    44K   973M     1%    /tmp
swap                   973M    40K   973M     1%    /var/run
/dev/dsk/c0d0s7         14G    15M    14G     1%    /export/home
/dev/dsk/c0d1s2        3.9G   4.0M   3.9G     1%    /export/home1
/dev/dsk/c1d0s3        1.9G   2.0M   1.9G     1%    /export/home03
/dev/dsk/c1d0s4        1.9G   2.0M   1.9G     1%    /export/home04
bash-3.2#
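The slice layout created in format's partition menu checks out against the prtvtoc output: with 4096 sectors per cylinder, two 1022-cylinder slices starting at cylinders 0 and 1023 give exactly the sector ranges prtvtoc lists and the 2044 MB sizes newfs prints. A small verification sketch:

```python
# Slices 3 and 4 as entered in format's partition menu:
# slice number -> (starting cylinder, length in cylinders)
sectors_per_cyl = 4096
slices = {3: (0, 1022), 4: (1023, 1022)}

# First sector and sector count per slice, as prtvtoc would list them.
layout = {n: (start * sectors_per_cyl, length * sectors_per_cyl)
          for n, (start, length) in slices.items()}

print(layout[3])  # (0, 4186112)
print(layout[4])  # (4190208, 4186112)
print(4186112 * 512 // (1024 * 1024))  # 2044 MB, the "2044.0MB" newfs prints
```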