Matthew Flanagan
2007-May-11 05:04 UTC
[zfs-discuss] zpool create -f ... fails on disk with previous UFS on it
Hi,

I have a test server that I use for testing my different jumpstart installations. This system is continuously installed and reinstalled with different system builds. For some builds I have a finish script that creates a zpool using the utility found in the Solaris 10 update 3 miniroot.

I have found an issue where the zpool command fails to create a new zpool if the system previously had a UFS filesystem on the same slice.

The command and error are:

zpool create -f -R /a -m /srv srv c1t0d0s6
cannot create 'srv': one or more vdevs refer to the same device

The steps to reproduce are:

1. Build a Solaris 10 Update 3 system via jumpstart with the following partitioning and only UFS filesystems:

partitioning explicit
filesys rootdisk.s0 6144 / logging
filesys rootdisk.s1 1024 swap
filesys rootdisk.s3 4096 /var logging,nosuid
filesys rootdisk.s6 free /srv logging
filesys rootdisk.s7 50 unnamed

2. Rebuild the same system via jumpstart with the following partitioning, with slice 6 left unnamed so that a finish script may create a zpool with the command 'zpool create -f -R /a -m /srv srv cntndns6':

partitioning explicit
filesys rootdisk.s0 6144 / logging
filesys rootdisk.s1 1024 swap
filesys rootdisk.s3 4096 /var logging,nosuid
filesys rootdisk.s6 free unnamed
filesys rootdisk.s7 50 unnamed

Has anyone hit this issue, and is this a known bug with a workaround?

regards

matthew
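A minimal sketch of the kind of finish script described above, assuming the pool name, mountpoint and alternate root shown in the failing command; SI_ROOTDISK is supplied by the jumpstart environment, and the error handling is illustrative (Matthew's actual JASS finish script appears later in the thread):

#!/bin/sh
# Sketch only: create a pool on slice 6 of the root disk from the miniroot.
# SI_ROOTDISK is set by jumpstart to the root slice (e.g. c1t0d0s0).
ROOTDISK=`echo ${SI_ROOTDISK} | sed -e 's/s.$//'`   # strip the slice suffix
VDEV="${ROOTDISK}s6"

# -R /a keeps the pool under the install-time alternate root,
# -m /srv is the mountpoint the installed system will use.
zpool create -f -R /a -m /srv srv ${VDEV}
if [ $? -ne 0 ]; then
    echo "zpool create failed on ${VDEV}" >&2
fi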
eric kustarz
2007-May-11 15:50 UTC
[zfs-discuss] zpool create -f ... fails on disk with previous UFS on it
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:

> I have found an issue where the zpool command fails to create a new
> zpool if the system previously had a UFS filesystem on the same slice.
>
> The command and error are:
>
> zpool create -f -R /a -m /srv srv c1t0d0s6
> cannot create 'srv': one or more vdevs refer to the same device

Works fine for me:

# df -kh
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t1d0s0       17G   4.1G    13G    24%    /
...
/dev/dsk/c1t1d0s6       24G    24M    24G     1%    /zfs0
# umount /zfs0
# zpool create -f -R /a -m /srv srv c1t1d0s6
# zpool status
  pool: srv
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        srv         ONLINE       0     0     0
          c1t1d0s6  ONLINE       0     0     0

errors: No known data errors
#

eric
Matthew Flanagan
2007-May-12 09:12 UTC
[zfs-discuss] Re: zpool create -f ... fails on disk with previous UFS on it
> Works fine for me:
>
> # df -kh
> Filesystem             size   used  avail capacity  Mounted on
> /dev/dsk/c1t1d0s0       17G   4.1G    13G    24%    /
> ...
> /dev/dsk/c1t1d0s6       24G    24M    24G     1%    /zfs0
> # umount /zfs0
> # zpool create -f -R /a -m /srv srv c1t1d0s6
> # zpool status
>   pool: srv
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         srv         ONLINE       0     0     0
>           c1t1d0s6  ONLINE       0     0     0
>
> errors: No known data errors
>
> eric

That works for me too. Perhaps you should actually follow my steps to reproduce the issue?

matthew
eric kustarz
2007-May-14 23:00 UTC
[zfs-discuss] Re: zpool create -f ... fails on disk with previous UFS on it
On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:

> That works for me too. Perhaps you should actually follow my steps
> to reproduce the issue?

Perhaps if you asked more nicely then i would. If you didn't unmount the UFS filesystem "srv" before the 'zpool create', then try that. If you did and it still fails, then ask the install/jumpstart people.

eric
Matthew Flanagan
2007-May-15 06:05 UTC
[zfs-discuss] Re: zpool create -f ... fails on disk with previous UFS on it
On 5/15/07, eric kustarz <eric.kustarz at sun.com> wrote:

> Perhaps if you asked more nicely then i would. If you didn't unmount
> the UFS filesystem "srv" before the 'zpool create', then try that.
> If you did and it still fails, then ask the install/jumpstart people.
>
> eric

Eric,

The UFS filesystem is being unmounted each time because the system is being *reinstalled* each time from bare metal. The first time it is installed with a UFS filesystem on slice 6. The second time the system is *reinstalled* with slice 6 left unnamed and a finish script failing to create a zpool from the jumpstart miniroot. I can reliably reproduce this in my lab on a number of different sparc hardware platforms (V120s and V210s with both 1 and 2 disks).

regards

matthew

ps. I had already opened a support case with Sun before posting to the list, and the engineer's response was to email me back the "correct" command syntax and a copy of the zpool man page, which he had obviously not read himself because his "correct" syntax was blatantly wrong. Please make an effort to read my whole email. If you need any clarifications on how to reproduce the problem then I'll be glad to help.

pps. resending this because I was not subscribed to the list.

--
matthew
http://wadofstuff.blogspot.com
Robert Milkowski
2007-May-15 08:05 UTC
[zfs-discuss] zpool create -f ... fails on disk with previous UFS on it
Hello Matthew,

Friday, May 11, 2007, 7:04:06 AM, you wrote:

Check in your script (df -h?) if s6 isn't mounted anyway...

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
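A minimal sketch of that check, run in the finish script before the 'zpool create'; the slice name is illustrative and would be derived from SI_ROOTDISK in a real script:

# Refuse to create the pool (or unmount first) if the slice is still mounted.
SLICE=c1t0d0s6                     # illustrative slice name
df -h                              # record the mounted filesystems in the install log
if df -h | grep "${SLICE}" > /dev/null; then
    echo "slice ${SLICE} is still mounted; unmounting it" >&2
    umount /dev/dsk/${SLICE} || exit 1
fi
zpool create -f -R /a -m /srv srv ${SLICE}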
Matthew Flanagan
2007-May-16 00:01 UTC
[zfs-discuss] Re: zpool create -f ... fails on disk with previous
On 5/15/07, Matthew Flanagan <mattimustang at gmail.com> wrote:

> The UFS filesystem is being unmounted each time because the system is
> being *reinstalled* each time from bare metal. The first time it is
> installed with a UFS filesystem on slice 6. The second time the system
> is *reinstalled* with slice 6 left unnamed and a finish script failing
> to create a zpool from the jumpstart miniroot. I can reliably
> reproduce this in my lab on a number of different sparc hardware
> platforms (V120s and V210s with both 1 and 2 disks).

I've done some further testing today and the problem seems to occur regardless of whether the first installation had UFS or ZFS on the slice I try to create a new zpool on. I have also discovered that if you run 'zpool create' a second time (as I do in the create-zfs.fin below) after the first attempt fails, it will succeed in creating the zpool. Below are the JASS files I use to recreate the problem. I'm using JASS 4.2 with the 122608-03 patch applied. Is anyone else able to reproduce this issue using this set up?

==== rules ====

probe osname
probe memsize
probe hostaddress
probe hostname
probe disks
probe rootdisk
probe karch

hostname jstest1 - Profiles/test.profile Drivers/test.driver

==== Finish/create-zfs.fin ====

#!/bin/sh
#
# Create zpool
#
if check_os_min_revision 5.10; then
    ALT_ROOT="`echo ${JASS_ROOT_DIR} | sed -e 's,/*$,,g'`"

    if [ "${SI_ROOTDISK}X" != "X" ]; then
        ROOTDISK="`echo $SI_ROOTDISK | sed -e 's/s.$//g'`"
        vdev="${ROOTDISK}s6"
        mountpoint="/srv"
        zpool="srv"

        logMessage "Creating zpool: ${zpool}"
        zpool create -f -R ${ALT_ROOT} -m ${mountpoint} ${zpool} ${vdev}
        if [ $? -ne 0 ]; then
            logError "Failed to create zpool: ${zpool}"
            # second time zpool is run it succeeds
            zpool create -f -R ${ALT_ROOT} -m ${mountpoint} ${zpool} ${vdev}
            if [ $? -ne 0 ]; then
                logError "Failed to create zpool again: ${zpool}"
            fi
        fi
    fi
else
    logInvalidOSRevision "5.10+"
fi

==== Profiles/test.profile ====

install_type    initial_install
system_type     standalone
cluster         SUNWCrnet
partitioning    explicit
filesys         rootdisk.s0     6144    /       logging
filesys         rootdisk.s1     1024    swap
filesys         rootdisk.s3     4096    /var    logging,nosuid
filesys         rootdisk.s6     free    unnamed
filesys         rootdisk.s7     50      unnamed

==== Drivers/test.driver ====

#!/bin/sh
#
#
DIR="`/bin/dirname $0`"
export DIR
. ${DIR}/driver.init

# Finish Scripts
JASS_SCRIPTS="
        create-zfs.fin
"

. ${DIR}/driver.run

--
matthew
http://wadofstuff.blogspot.com
Matthew Flanagan
2007-Jun-07 00:49 UTC
[zfs-discuss] Re: zpool create -f ... fails on disk with previous UFS on it
Hi,

FYI, Bug ID 6566433 has been assigned to this. See also the other part of this thread at http://www.opensolaris.org/jive/thread.jspa?threadID=30678 .

The current workaround suggested by Sun is:

> G'day Matthew,
>
> Apologize for the delay as it took sometime for the Backline Engineer
> to reproduce the problem and fix it.
> and Thanks for your efforts in providing the logs / update.
>
> A Bug 6566433 has been filed against this case.
>
> I draw your attention to the workaround section:
>
> "The problem is related to an old zpool on the slice from a previous
> jumpstart.
>
> A workaround is to dd if=/dev/zero of=/dev/rdsk/c0t0d0s6 bs=64k before
> doing the zpool create in the jumpstart finish script.
> It didn't seem sufficient to overwrite the slice with a ufs
> filesystem, probably because zfs places 4 copies of the vdev label on
> the slice and ufs doesn't completely overwrite them."

regards

matthew
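A sketch of how that workaround might be wired into the finish script; the variable handling follows the create-zfs.fin posted earlier in the thread, and note that the dd pass zeroes the entire slice, so it can take a while and is expected to stop with an error when it reaches the end of the device:

# Wipe the slice so stale vdev labels from a previous pool are destroyed,
# then create the pool as before. Pool name and mountpoint are illustrative.
ROOTDISK=`echo ${SI_ROOTDISK} | sed -e 's/s.$//g'`
vdev="${ROOTDISK}s6"

# Zeroes the whole slice; dd exits with an error at the end of the device.
dd if=/dev/zero of=/dev/rdsk/${vdev} bs=64k

zpool create -f -R /a -m /srv srv ${vdev}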