Hi,

I have problems setting up SAN volumes in a Xen guest.

We are running Xen on OpenSolaris: SunOS node2 5.11 snv_127 i86pc i386 i86xpv.
The Xen guest itself is Solaris 10: SunOS pgbackup 5.10 Generic_141445-09 i86pc i386 i86pc.

In the past we were running an older Solaris version with Solaris zones on:
SunOS node138 5.10 Generic_139556-08 i86pc i386 i86pc. There, new SAN volumes
were detected automatically and I simply used zpool to create the pools.

Now, with OpenSolaris+Xen and Solaris 10 u8 x86 in the Xen guest, I am
experiencing the following behaviors/problems:
1. I am not able to create zpools on a freshly created volume. I have to run
   devfsadm first. This is not too bad, and it is deterministic.
2. I am not able to create a zpool in the Xen guest at all. Before I tried
   this I ran 'devfsadm -c disk'.
3. When I create a pool on another machine with an older Solaris 10 release,
   I am sometimes able to import that pool, but this is not always the case.

global: root@node2:~ > echo 60:0A:0B:80:00:29:D6:9A:00:00:14:BF:4B:20:42:C3 | tr -d :
600A0B800029D69A000014BF4B2042C3
global: root@node2:~ > devfsadm -c disk -v
global: root@node2:~ > file /dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0
/dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0: cannot open: No such file or directory
global: root@node2:~ > zpool create pgbackup1b c0t600A0B800029D69A000014BF4B2042C3d0
global: root@node2:~ > file /dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0
/dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0: block special (85/3975)
global: root@node2:~ > zpool destroy pgbackup1b
global: root@node2:~ > virsh attach-disk pgbackup /dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0 hdb
Disk attached successfully
global: root@node2:~ > virsh start pgbackup
Domain pgbackup started
global: root@node2:~ > ssh pgbackup
Password:
Last login: Thu Dec 10 09:26:56 2009 from 192.168.10.2
global: root@pgbackup:~ > file /dev/dsk/c0d1d0
/dev/dsk/c0d1d0: cannot open: No such file or directory
global: root@pgbackup:~ > devfsadm -c disk -v
global: root@pgbackup:~ > file /dev/dsk/c0d1d0
/dev/dsk/c0d1d0: cannot open: No such file or directory
global: root@pgbackup:~ > zpool create pgbackup1b c0d1d0
cannot open 'c0d1d0': no such device in /dev/dsk
must be a full path or shorthand device name
zsh: 1096 exit 1     zpool create pgbackup1b c0d1d0
global: root@pgbackup:~ (1)>

best regards,
Uwe
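[A note on the first step in the transcript above: turning the array-reported WWN into the Solaris `cXtWWNd0` device name is just colon stripping plus the controller/target wrapper. A minimal POSIX-shell sketch, using the WWN from the session (the `c0t...d0` prefix assumes the same controller number as above):]

```shell
# Strip the colons from the array WWN and wrap it in the Solaris
# scsi_vhci-style device name c0t<WWN>d0. WWN taken from the transcript.
wwn='60:0A:0B:80:00:29:D6:9A:00:00:14:BF:4B:20:42:C3'
dev="c0t$(echo "$wwn" | tr -d :)d0"
echo "$dev"   # c0t600A0B800029D69A000014BF4B2042C3d0
```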
Uwe Bartels wrote:
> Hi,
>
> I have problems setting up SAN volumes in a Xen guest.
>
> We are running Xen on OpenSolaris: SunOS node2 5.11 snv_127 i86pc i386 i86xpv.
> The Xen guest itself is Solaris 10: SunOS pgbackup 5.10 Generic_141445-09 i86pc i386 i86pc.
[...]
> global: root@node2:~ > virsh attach-disk pgbackup /dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0 hdb
> Disk attached successfully

Can you try using p0 to give the entire disk to the guest?
e.g. c0t*d0p0

virsh attach-disk pgbackup /dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0p0 hdb


Thanks,

MRJ
Hi Mark,

I'm still experiencing the same behavior.

best...
Uwe

2009/12/10 Mark Johnson <Mark.Johnson@sun.com>
> Uwe Bartels wrote:
>> Hi,
>>
>> I have problems setting up SAN volumes in a Xen guest.
[...]
> Can you try using p0 to give the entire disk to the guest?
> e.g. c0t*d0p0
>
> virsh attach-disk pgbackup /dev/dsk/c0t600A0B800029D69A000014BF4B2042C3d0p0 hdb
>
> Thanks,
>
> MRJ
Uwe Bartels wrote:
> Hi Mark,
>
> I'm still experiencing the same behavior.

> global: root@node2:~ > ssh pgbackup
> Password:
> Last login: Thu Dec 10 09:26:56 2009 from 192.168.10.2
> global: root@pgbackup:~ > file /dev/dsk/c0d1d0

c0d1d0?

what disks does format show?



MRJ
2009/12/10 Mark Johnson <Mark.Johnson@sun.com>
> c0d1d0?
>
> what disks does format show?

global: root@pgbackup:~ > format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 4092 alt 2 hd 128 sec 32>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
Specify disk (enter its number): ^C
zsh: 3564 exit 1     format
global: root@pgbackup:~ (1)>

c0d1d0 is supposed to come up. Maybe the content of /etc/path_to_inst helps.

global: root@pgbackup:~ > egrep "xdf|cmdk" /etc/path_to_inst
"/xpvd/xdf@300" 0 "xdf"
"/xpvd/xdf@340" 1 "xdf"
"/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0" 0 "cmdk"
"/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0" 1 "cmdk"

best...
Uwe
Uwe Bartels wrote:
> global: root@pgbackup:~ > format
> Searching for disks...done
>
> AVAILABLE DISK SELECTIONS:
>        0. c0d0 <DEFAULT cyl 4092 alt 2 hd 128 sec 32>
>           /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
> Specify disk (enter its number): ^C
> zsh: 3564 exit 1     format
> global: root@pgbackup:~ (1)>
>
> c0d1d0 is supposed to come up.

it would be c0d1

e.g. c0d1p0


MRJ

> maybe the content of /etc/path_to_inst helps.
>
> global: root@pgbackup:~ > egrep "xdf|cmdk" /etc/path_to_inst
> "/xpvd/xdf@300" 0 "xdf"
> "/xpvd/xdf@340" 1 "xdf"
> "/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0" 0 "cmdk"
> "/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0" 1 "cmdk"
>
> best...
> Uwe
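[For anyone following along: the naming point Mark makes can be checked against the /etc/path_to_inst excerpt. Each line maps a device path to a driver instance; cmdk instance 1 is the second IDE-emulated disk, which Solaris names c0d1 (whole disk c0d1p0, slices c0d1s*), never "c0d1d0". A small awk sketch over a here-doc copy of the entries, since path_to_inst only exists inside the guest:]

```shell
# Print driver name and instance number for each path_to_inst entry.
# Input is a verbatim copy of the lines posted earlier in this thread.
awk '{ gsub(/"/, ""); print $3, $2 }' <<'EOF'
"/xpvd/xdf@300" 0 "xdf"
"/xpvd/xdf@340" 1 "xdf"
"/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0" 0 "cmdk"
"/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0" 1 "cmdk"
EOF
# Output:
# xdf 0
# xdf 1
# cmdk 0
# cmdk 1
```

[So the attached disk is bound as cmdk instance 1, i.e. c0d1.]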
We are using FC SAN here; it mostly works. My problem is that resizing a
disk involves rebooting the metal for the change to take effect. Especially
for OpenSolaris-based guests it would be a pleasure to have it take effect
immediately, because ZFS supports online resizing.

Florian

On 10.12.2009 11:34, Uwe Bartels wrote:
> 2009/12/10 Mark Johnson <Mark.Johnson@sun.com <mailto:Mark.Johnson@sun.com>>
...
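[The dom0-side half of the resize Florian describes can be sketched with a file-backed volume as a stand-in for a SAN LUN (all paths and the pool/device names are hypothetical; truncate/stat here are the GNU coreutils versions, not Solaris ones). Growing the backing store is instant; the open question in this thread is getting the guest to notice without a reboot.]

```shell
# Create a 100 MiB sparse backing file standing in for a SAN LUN.
truncate -s 100M /tmp/demo_backing.img
stat -c %s /tmp/demo_backing.img   # 104857600

# Grow it online to 200 MiB -- no reboot needed on this side.
truncate -s 200M /tmp/demo_backing.img
stat -c %s /tmp/demo_backing.img   # 209715200

# Inside the guest, once the new size is visible, ZFS can expand a vdev
# online (hypothetical pool and device, on a ZFS version with this support):
# zpool online -e pgbackup1b c0d1p0
```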