Hello to everyone on this mailing list (this is my first post)!

We have a Sun Fire X4100 M2 server running Solaris 10 u6, and after some system work this weekend we have a problem with a single ZFS volume.

We have a pool called /Data with many file systems and two volumes. The status of my zpool is:

-bash-3.00$ zpool status
  pool: Data
 state: ONLINE
 scrub: scrub in progress, 5.99% done, 13h38m to go
config:

        NAME                     STATE     READ WRITE CKSUM
        Data                     ONLINE       0     0     0
          c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors

Yesterday I started the scrub because I read that it is a smart thing to do after a zpool export and zpool import procedure. I exported the pool because I wanted to move it to another OS installation, then changed my mind and imported it again on the same OS. After checking as much information as I could find on the web, I was advised to run a zpool scrub after the import.

The problem now is that one volume in this pool is not working. It is shared via iSCSI to a Linux host (all of this was working on Friday). The Linux host reports that it cannot find a partition table. Here is the log from the Linux host:

Mar  2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar  2 11:09:36 eva kernel: SCSI device sdb: drive cache: write through
Mar  2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar  2 11:09:37 eva kernel: SCSI device sdb: drive cache: write through
Mar  2 11:09:37 eva kernel:  sdb: unknown partition table
Mar  2 11:09:37 eva kernel: Attached scsi disk sdb at scsi28, channel 0, id 0, lun 0

So I checked the status on my Solaris server and found this information a bit strange:

-bash-3.00$ zfs list Data/subversion1
NAME               USED  AVAIL  REFER  MOUNTPOINT
Data/subversion1  22.5K   519G  22.5K  -

How can there be 519GB available on a volume that is 250GB in size? Here are more details:

-bash-3.00$ zfs get all Data/subversion1
NAME              PROPERTY       VALUE                  SOURCE
Data/subversion1  type           volume                 -
Data/subversion1  creation       Wed Apr  2  9:06 2008  -
Data/subversion1  used           22.5K                  -
Data/subversion1  available      519G                   -
Data/subversion1  referenced     22.5K                  -
Data/subversion1  compressratio  1.00x                  -
Data/subversion1  reservation    250G                   local
Data/subversion1  volsize        250G                   -
Data/subversion1  volblocksize   8K                     -
Data/subversion1  checksum       on                     default
Data/subversion1  compression    off                    default
Data/subversion1  readonly       off                    default
Data/subversion1  shareiscsi     off                    local

Will this be fixed after the scrub finishes tomorrow, or is this volume lost forever?

Hoping for some quick answers, as the data is quite important for us.

Regards,
Lars-Gunnar Persson
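Before digging into the pool itself, it is worth confirming from the Solaris side whether the zvol's first sectors (where the partition table lives) really read back empty. A minimal, read-only sketch, assuming the usual /dev/zvol/rdsk/<pool>/<volume> device path for the zvol:

    # on the Solaris host: dump the first 16 sectors of the zvol directly
    dd if=/dev/zvol/rdsk/Data/subversion1 bs=512 count=16 | od -c | head -20
    # all zeros here would mean the label/MBR is genuinely gone from the zvol,
    # not merely invisible over the iSCSI path

If real data shows up here but the Linux initiator still sees nothing, the problem is in the iSCSI export or the initiator, not in the pool.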
It looks like you only have one physical device in this pool. Is that correct?

On Mon, Mar 2, 2009 at 9:01 AM, Lars-Gunnar Persson <lars-gunnar.persson at nersc.no> wrote:
> Hey to everyone on this mailing list (since this is my first post)!
> [snip]
> Will this be fixed after the scrub process is finished tomorrow or is this volume lost forever?
>
> Hoping for some quick answers as the data is quite important for us.
>
> Regards,
> Lars-Gunnar Persson
I could be wrong, but this looks like an issue on the Linux side: zpool status is returning a healthy pool.

What does format/fdisk show you on the Linux side? Can it still see the iSCSI device that is being shared from the Solaris server?

Regards,
Damien O'Shea
Strategy & Unix Systems
Revenue Backup Site
VPN: 35603
daoshea at revenue.ie

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org On Behalf Of Blake
Sent: 02 March 2009 15:57
To: Lars-Gunnar Persson
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS volume corrupted?

> It looks like you only have one physical device in this pool. Is that correct?
> [snip]
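A couple of quick, read-only checks on the Linux side can narrow this down. A minimal sketch, assuming the host uses the open-iscsi initiator (an older linux-iscsi setup would use iscsi-ls instead) and that sdb is the LUN in question:

    # on the Linux initiator (eva)
    iscsiadm -m session                            # confirm the session to the Solaris target is logged in
    fdisk -l /dev/sdb                              # does the kernel see the expected ~250 GB size, and any partitions?
    dd if=/dev/sdb bs=512 count=1 | od -c | head   # dump the first sector; all zeros means no MBR/label at all

If the first sector is all zeros on both ends, the contents of the zvol itself are in question; if it only looks empty from Linux, suspect the iSCSI path.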
That is correct. It's a RAID 6 disk shelf with one volume, connected via fibre.

Lars-Gunnar Persson

On 2 March 2009, at 16:57, Blake <blake.irvin at gmail.com> wrote:

> It looks like you only have one physical device in this pool. Is that correct?
> [snip]
The Linux host can still see the device; I showed you the log from the Linux host. I tried fdisk -l and it listed the iSCSI disks.

Lars-Gunnar Persson

On 2 March 2009, at 17:02, "O'Shea, Damien" <daoshea at revenue.ie> wrote:

> I could be wrong, but this looks like an issue on the Linux side: zpool status is returning a healthy pool.
>
> What does format/fdisk show you on the Linux side? Can it still see the iSCSI device that is being shared from the Solaris server?
> [snip]
Lars-Gunnar Persson wrote:
> Hey to everyone on this mailing list (since this is my first post)!

Welcome!

> [snip]
> -bash-3.00$ zfs get all Data/subversion1
> [snip]
> Data/subversion1  shareiscsi     off                    local

It does not appear that Data/subversion1 is being shared via iscsi?
 -- richard
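For reference, if the export is supposed to be active again later, turning it back on is a one-property change on this release. A minimal sketch, assuming the built-in Solaris 10 iscsitgt framework that shareiscsi drives (the target name and LUN shown by iscsitadm are whatever the daemon reports, not something to type literally):

    # on the Solaris host
    zfs set shareiscsi=on Data/subversion1   # re-export the zvol as an iSCSI target
    iscsitadm list target -v                 # confirm the target and its LUN are advertised again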
I've turned off iSCSI sharing at the moment.

My first question is: how can zfs report available as larger than the reservation on a zfs volume? I also know that used should be larger than 22.5K. Isn't this strange?

Lars-Gunnar Persson

On 3 March 2009, at 00:38, Richard Elling <richard.elling at gmail.com> wrote:

> It does not appear that Data/subversion1 is being shared via iscsi?
> [snip]
I thought a ZFS file system wouldn't destroy a ZFS volume? Hmm, I'm not sure what to do now ...

First of all, this ZFS volume Data/subversion1 has been working for a year, and suddenly, after a reboot of the Solaris server and running the zpool export and zpool import commands, I have problems with this ZFS volume.

Today I checked some more, after reading this guide:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

My main question is: is my ZFS volume, which is part of a zpool, lost, or can I recover it?

Would upgrading the Solaris server to the latest release and doing another zpool export and zpool import help?

All advice appreciated :-)

Here is some more information:

-bash-3.00$ zfs list -o name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
NAME              TYPE     USED  AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
Data/subversion1  volume  22.5K   511G  1.00x       off    250G     250G

I've also learned that the AVAIL column reports what's available in the zpool and NOT what's available in the ZFS volume.

-bash-3.00$ sudo zpool status -v
Password:
  pool: Data
 state: ONLINE
 scrub: scrub in progress, 5.86% done, 12h46m to go
config:

        NAME                     STATE     READ WRITE CKSUM
        Data                     ONLINE       0     0     0
          c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors

The interesting thing here is that the scrub should have finished today, but the progress is much slower than reported here. And will the scrub help anything in my case?

-bash-3.00$ sudo fmdump
TIME                 UUID                                 SUNW-MSG-ID
Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K

-bash-3.00$ sudo fmdump -ev
TIME                 CLASS                            ENA
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e688d11500401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68926e600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68d8bb600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68e981900001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e692a4ca00001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
Nov 15 2007 09:33:52 ereport.fs.zfs.data              0x915e6850ff400401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
Nov 15 2007 10:16:12 ereport.fs.zfs.vdev.open_failed  0x0533bb1b56400401
Nov 15 2007 10:16:12 ereport.fs.zfs.zpool             0x0533bb1b56400401
Oct 14 09:31:31.6092 ereport.fm.fmd.log_append        0x02eb96a8b6502801
Oct 14 09:31:31.8643 ereport.fm.fmd.mod_init          0x02ec89eadd100401

On 3 March 2009, at 08:10, Lars-Gunnar Persson wrote:

> I've turned off iSCSI sharing at the moment.
>
> My first question is: how can zfs report available as larger than the reservation on a zfs volume? I also know that used should be larger than 22.5K. Isn't this strange?
> [snip]
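A note on the scrub, since it is being counted on as a possible fix: on a pool with a single top-level vdev, a scrub can only detect damage and repair ZFS's own duplicated (ditto-block) metadata; it cannot reconstruct damaged data blocks, so it will not bring the zvol contents back by itself. If it is mainly slowing down the investigation, it can be stopped safely and restarted later; a minimal sketch:

    # on the Solaris host
    zpool scrub -s Data    # stop the in-progress scrub (restart later with 'zpool scrub Data')
    zpool status Data      # the scrub line should now report that the scrub was stopped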
Hi,

The reason ZFS is saying that the available space is larger is that, in ZFS, the size of the pool is always available to all of the ZFS filesystems that reside in the pool. Setting a reservation guarantees that the reservation size is "reserved" for the filesystem/volume, but you can change that on the fly.

You can see that if you create another filesystem within the pool, the reservation in use by your volume will have been deducted from the available size. Like below (the prompts below were garbled in transit; they should read "root@test1# zfs ..."):

root@test1# zfs create -V 10g testpool/test
root@test1# zfs get all testpool
NAME      PROPERTY       VALUE                  SOURCE
testpool  type           filesystem             -
testpool  creation       Wed Feb 11 13:17 2009  -
testpool  used           10.1G                  -
testpool  available      124G                   -
testpool  referenced     100M                   -
testpool  compressratio  1.00x                  -
testpool  mounted        yes                    -

Here the available is 124G, as the volume has been set to 10G from a pool of 134G. If we set a reservation like this:

root@test1# zfs set reservation=10g testpool/test
root@test1# zfs get all testpool/test
NAME           PROPERTY       VALUE                  SOURCE
testpool/test  type           volume                 -
testpool/test  creation       Tue Mar  3 10:13 2009  -
testpool/test  used           10G                    -
testpool/test  available      134G                   -
testpool/test  referenced     16K                    -
testpool/test  compressratio  1.00x                  -

we can see that the available is now 134G, which is the available size of the rest of the pool plus the 10G reservation that we have set. So in theory this volume can grow to the complete size of the pool.

So if we look at the available space in the pool now, we see:

root@test1# zfs get all testpool
NAME      PROPERTY       VALUE                  SOURCE
testpool  type           filesystem             -
testpool  creation       Wed Feb 11 13:17 2009  -
testpool  used           10.1G                  -
testpool  available      124G                   -
testpool  referenced     100M                   -
testpool  compressratio  1.00x                  -
testpool  mounted        yes                    -

124G, with 10G used to account for the size of the volume!

So if we now create another filesystem like this:

root@test1# zfs create testpool/test3
root@test1# zfs get all testpool/test3
NAME            PROPERTY       VALUE                  SOURCE
testpool/test3  type           filesystem             -
testpool/test3  creation       Tue Mar  3 10:19 2009  -
testpool/test3  used           18K                    -
testpool/test3  available      124G                   -
testpool/test3  referenced     18K                    -
testpool/test3  compressratio  1.00x                  -
testpool/test3  mounted        yes                    -

we see that the total amount available to the filesystem is the amount of space in the pool minus the 10G reservation. Let's set the reservation to something bigger:

root@test1# zfs set volsize=100g testpool/test
root@test1# zfs set reservation=100g testpool/test
root@test1# zfs get all testpool/test
NAME           PROPERTY       VALUE                  SOURCE
testpool/test  type           volume                 -
testpool/test  creation       Tue Mar  3 10:13 2009  -
testpool/test  used           100G                   -
testpool/test  available      134G                   -
testpool/test  referenced     16K                    -

So the available is still 134G, which is the rest of the pool plus the reservation set.

root@test1# zfs get all testpool
NAME      PROPERTY       VALUE                  SOURCE
testpool  type           filesystem             -
testpool  creation       Wed Feb 11 13:17 2009  -
testpool  used           100G                   -
testpool  available      33.8G                  -
testpool  referenced     100M                   -
testpool  compressratio  1.00x                  -
testpool  mounted        yes                    -

The pool, however, now only has 33.8G left, which should be the same for all the other filesystems in the pool.

Hope that helps.

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org On Behalf Of Lars-Gunnar Persson
Sent: 03 March 2009 07:11
To: Richard Elling
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS volume corrupted?

> I've turned off iSCSI sharing at the moment.
>
> My first question is: how can zfs report available as larger than the reservation on a zfs volume? I also know that used should be larger than 22.5K. Isn't this strange?
> [snip]
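Applying the same arithmetic to the numbers in the original post (a worked example, not new data): the volume shows AVAIL 519G with a 250G reservation that is essentially unused, so the pool itself must have had roughly 519G - 250G, about 269G, free at that moment; the later listing showing 511G corresponds to about 261G free. One way to see both sides at once:

    # on the Solaris host -- read-only, just reporting properties
    zfs get used,available,reservation Data Data/subversion1
    zpool list Data    # pool-level size/used/available for comparison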
Thank you for your long reply. I don't believe that will help me get my ZFS volume back, though. As my last reply to this list shows, I do understand what the AVAIL column is reporting when running the zfs list command.

Hmm, still confused ...

Regards,
Lars-Gunnar Persson

On 3 March 2009, at 11:26, O'Shea, Damien wrote:

> Hi,
>
> The reason ZFS is saying that the available space is larger is that, in ZFS, the size of the pool is always available to all of the ZFS filesystems that reside in the pool.
> [snip]
I ran a new command now, zdb. Here is the current output:

-bash-3.00$ sudo zdb Data
    version=4
    name='Data'
    state=0
    txg=9806565
    pool_guid=6808539022472427249
    vdev_tree
        type='root'
        id=0
        guid=6808539022472427249
        children[0]
                type='disk'
                id=0
                guid=2167768931511572294
                path='/dev/dsk/c4t5000402001FC442Cd0s0'
                devid='id1,sd@n6000402001fc442c6e1a0e9700000000/a'
                whole_disk=1
                metaslab_array=14
                metaslab_shift=36
                ashift=9
                asize=11801587875840
Uberblock

        magic = 0000000000bab10c
        version = 4
        txg = 9842225
        guid_sum = 8976307953983999543
        timestamp = 1236084668 UTC = Tue Mar  3 13:51:08 2009

Dataset mos [META], ID 0, cr_txg 4, 392M, 1213 objects
... [snip]
Dataset Data/subversion1 [ZVOL], ID 3527, cr_txg 2514080, 22.5K, 3 objects
... [snip]
Dataset Data [ZPL], ID 5, cr_txg 4, 108M, 2898 objects

Traversing all blocks to verify checksums and verify nothing leaked ...

and I'm still waiting for this process to finish.

On 3 March 2009, at 11:18, Lars-Gunnar Persson wrote:

> I thought a ZFS file system wouldn't destroy a ZFS volume? Hmm, I'm not sure what to do now ...
> [snip]
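While waiting, it may be cheaper to point zdb at just the suspect dataset instead of traversing the whole multi-terabyte pool. A minimal, read-only sketch (zdb output is verbose and its exact format varies between releases):

    # dump the object list of just the zvol; for a zvol, object 1 holds the data,
    # so a 250G volume that really contains data should show far more than 22.5K referenced
    zdb -dddd Data/subversion1

The "22.5K, 3 objects" line above already hints that zdb, like zfs list, currently sees an essentially empty zvol.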
And then the zdb process ends with: Traversing all blocks to verify checksums and verify nothing leaked ... out of memory -- generating core dump Abort (core dumped) hmm, what does that mean?? I also ran these commands: -bash-3.00$ sudo fmstat module ev_recv ev_acpt wait svc_t %w %b open solve memsz bufsz cpumem-retire 0 0 0.0 0.1 0 0 0 0 0 0 disk-transport 0 0 0.0 4.1 0 0 0 0 32b 0 eft 0 0 0.0 5.7 0 0 0 0 1.4M 0 fmd-self-diagnosis 0 0 0.0 0.2 0 0 0 0 0 0 io-retire 0 0 0.0 0.2 0 0 0 0 0 0 snmp-trapgen 0 0 0.0 0.1 0 0 0 0 32b 0 sysevent-transport 0 0 0.0 1520.8 0 0 0 0 0 0 syslog-msgs 0 0 0.0 0.1 0 0 0 0 0 0 zfs-diagnosis 301 0 0.0 0.0 0 0 2 0 120b 80b zfs-retire 0 0 0.0 0.3 0 0 0 0 0 0 -bash-3.00$ sudo fmadm config MODULE VERSION STATUS DESCRIPTION cpumem-retire 1.1 active CPU/Memory Retire Agent disk-transport 1.0 active Disk Transport Agent eft 1.16 active eft diagnosis engine fmd-self-diagnosis 1.0 active Fault Manager Self-Diagnosis io-retire 1.0 active I/O Retire Agent snmp-trapgen 1.0 active SNMP Trap Generation Agent sysevent-transport 1.0 active SysEvent Transport Agent syslog-msgs 1.0 active Syslog Messaging Agent zfs-diagnosis 1.0 active ZFS Diagnosis Engine zfs-retire 1.0 active ZFS Retire Agent -bash-3.00$ sudo zpool upgrade -v This system is currently running ZFS version 4. The following versions are supported: VER DESCRIPTION --- -------------------------------------------------------- 1 Initial ZFS version 2 Ditto blocks (replicated metadata) 3 Hot spares and double parity RAID-Z 4 zpool history For more information on a particular version, including supported releases, see: http://www.opensolaris.org/os/community/zfs/version/N Where ''N'' is the version number. I hope I''ve provided enough information for all you ZFS experts out there. Any tips or solutions in sight? Or is this ZFS gone completely? Lars-Gunnar Persson On 3. mars. 2009, at 13.58, Lars-Gunnar Persson wrote:> I run a new command now zdb. Here is the current output: > > -bash-3.00$ sudo zdb Data > version=4 > name=''Data'' > state=0 > txg=9806565 > pool_guid=6808539022472427249 > vdev_tree > type=''root'' > id=0 > guid=6808539022472427249 > children[0] > type=''disk'' > id=0 > guid=2167768931511572294 > path=''/dev/dsk/c4t5000402001FC442Cd0s0'' > devid=''id1,sd at n6000402001fc442c6e1a0e9700000000/a'' > whole_disk=1 > metaslab_array=14 > metaslab_shift=36 > ashift=9 > asize=11801587875840 > Uberblock > > magic = 0000000000bab10c > version = 4 > txg = 9842225 > guid_sum = 8976307953983999543 > timestamp = 1236084668 UTC = Tue Mar 3 13:51:08 2009 > > Dataset mos [META], ID 0, cr_txg 4, 392M, 1213 objects > ... [snip] > > Dataset Data/subversion1 [ZVOL], ID 3527, cr_txg 2514080, 22.5K, 3 > objects > > ... [snip] > Dataset Data [ZPL], ID 5, cr_txg 4, 108M, 2898 objects > > Traversing all blocks to verify checksums and verify nothing > leaked ... > > and I''m still waiting for this process to finish. > > > On 3. mars. 2009, at 11.18, Lars-Gunnar Persson wrote: > >> I thought a ZFS file system wouldn''t destroy a ZFS volume? Hmm, I''m >> not sure what to do now ... >> >> First of all, this zfs volume Data/subversion1 has been working for >> a year and suddenly after a reboot of the Solaris server, running >> of the zpool export and zpool import command, I get problems with >> this ZFS volume? 
>>
>> Today I checked some more, after reading this guide:
>> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
>>
>> My main question is: is my ZFS volume, which is part of a zpool,
>> lost, or can I recover it?
>>
>> Would it help to upgrade the Solaris server to the latest release
>> and do a zpool export and zpool import again?
>>
>> All advice appreciated :-)
>>
>> Here is some more information:
>>
>> -bash-3.00$ zfs list -o name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
>> NAME              TYPE     USED  AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
>> Data/subversion1  volume  22.5K   511G  1.00x       off    250G     250G
>>
>> I've also learned that the AVAIL column reports what's available in
>> the zpool and NOT what's available in the ZFS volume.
>>
>> -bash-3.00$ sudo zpool status -v
>> Password:
>>   pool: Data
>>  state: ONLINE
>>  scrub: scrub in progress, 5.86% done, 12h46m to go
>> config:
>>
>>         NAME                     STATE     READ WRITE CKSUM
>>         Data                     ONLINE       0     0     0
>>           c4t5000402001FC442Cd0  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> Interestingly, the scrub process should have finished today, but it
>> is progressing much more slowly than reported here. And will the
>> scrub help at all in my case?
>>
>> -bash-3.00$ sudo fmdump
>> TIME                 UUID                                 SUNW-MSG-ID
>> Nov 15 2007 10:16:38 8aa789d2-7f3a-45d5-9f5c-c101d73b795e ZFS-8000-CS
>> Oct 14 09:31:40.8179 8c7d9847-94b7-ec09-8da7-c352de405b78 FMD-8000-2K
>>
>> bash-3.00$ sudo fmdump -ev
>> TIME                 CLASS                            ENA
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e688d11500401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68926e600401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68d8bb600401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68da5b500001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6897db600401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68e981900001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68f0c9800401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e692a4ca00001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690a11000401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e6850ff400401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68bc67400001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e690385500001
>> Nov 15 2007 09:33:52 ereport.fs.zfs.data              0x915e6850ff400401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
>> Nov 15 2007 09:33:52 ereport.fs.zfs.io                0x915e68a3d3900401
>> Nov 15 2007 10:16:12 ereport.fs.zfs.vdev.open_failed  0x0533bb1b56400401
>> Nov 15 2007 10:16:12 ereport.fs.zfs.zpool             0x0533bb1b56400401
>> Oct 14 09:31:31.6092 ereport.fm.fmd.log_append        0x02eb96a8b6502801
>> Oct 14 09:31:31.8643 ereport.fm.fmd.mod_init          0x02ec89eadd100401
>>
>> On 3. mars. 2009, at 08.10, Lars-Gunnar Persson wrote:
>>
>>> I've turned off iSCSI sharing at the moment.
>>>
>>> My first question is: how can zfs report available as larger than
>>> the reservation on a zfs volume? I also know that used should be
>>> larger than 22.5K. Isn't this strange?
>>>
>>> Lars-Gunnar Persson
>>>
>>> Den 3. mars. 2009 kl. 00.38 skrev Richard Elling <richard.elling at gmail.com>:
>>>
>>>> [Richard's reply of 00.38, quoting the original post in full -- snipped]
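A note on the AVAIL question that keeps coming up in this thread: for a zvol, the AVAIL column is derived from the space available to the dataset in the pool (which, roughly, includes the volume's own unused 250G reservation), not from free space inside the 250G volume, so a value larger than volsize is normal. The genuinely suspicious number is USED at 22.5K. Putting the dataset and pool figures side by side makes this easier to see; a small sketch using only standard commands:

    zfs get used,available,referenced,volsize,reservation Data/subversion1
    zpool list Data    # pool-level SIZE/USED/AVAIL; the dataset's AVAIL tracks this, not volsize

A 250G ext3 volume that has been written to for a year should show USED and REFER close to the amount of data the Linux host had on it, not a few kilobytes.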
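On the zdb out-of-memory crash reported above: a bare 'zdb Data' tries to traverse every block in the pool to verify checksums, which can easily exhaust memory on a pool this size, so the core dump by itself says little about the health of the volume. If the aim is just to see whether the zvol's objects still reference any data, a dump restricted to that one dataset is far cheaper. A sketch only; zdb is an unstable diagnostic tool and its options differ between releases:

    sudo zdb -dddd Data/subversion1    # per-dataset object dump; repeat the d for more detail

In that output, the zvol's data object should report sizes in line with 250G of ext3 data; a handful of near-empty objects would instead match the 22.5K USED figure.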
Lars-Gunnar,

On Tue, Mar 03, 2009 at 11:18:27AM +0100, Lars-Gunnar Persson wrote:
> -bash-3.00$ zfs list -o name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
> NAME              TYPE     USED  AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE
> Data/subversion1  volume  22.5K   511G  1.00x       off    250G     250G

This shows that the volume still exists.

Correct me if I am wrong here: did you mean that the contents of the volume subversion1 are corrupted?

What does that volume have on it? Does it contain a filesystem which can be mounted on Solaris? If so, we could try mounting it locally on the Solaris box. This is to rule out any iSCSI issues.

Also, do you have any snapshots of the volume? If so, you could roll back to the latest snapshot. But that would mean we lose some amount of data.

Also, you mentioned that the volume was in use for a year, but I see in the above output that it has only about 22.5K used. Is that correct? I would have expected it to be higher.

You should also check what 'zpool history -i' says.

Thanks and regards,
Sanjeev

> [rest of quoted message snipped]
--
----------------
Sanjeev Bagewadi
Solaris RPE
Bangalore, India
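Sanjeev's point about ruling out iSCSI can also be checked without mounting anything: every zvol appears as a local block device on the Solaris box under /dev/zvol, so reading its first sectors there shows whether the partition table and filesystem data the Linux host is missing are still on disk. A read-only sketch using the standard zvol device path:

    # dump the first 128 sectors of the zvol locally and look for non-zero data
    dd if=/dev/zvol/rdsk/Data/subversion1 bs=512 count=128 | od -c | head -40

Non-zero data at the start of the device would suggest the contents are intact and the problem is on the export path; a stream of zeros would line up with the 22.5K USED figure and the "unknown partition table" message on the Linux side.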
On 3. mars. 2009, at 14.51, Sanjeev wrote:

Thank you for your reply.

> Correct me if I am wrong here: did you mean that the contents of the
> volume subversion1 are corrupted?

I'm not 100% sure whether it is the content of this volume or the zpool that is corrupted. It was iSCSI-exported to a Linux host, where it was formatted as an ext3 file system.

> What does that volume have on it? Does it contain a filesystem which
> can be mounted on Solaris? If so, we could try mounting it locally on
> the Solaris box. This is to rule out any iSCSI issues.

I don't think Solaris supports mounting ext3 file systems, or does it?

> Also, do you have any snapshots of the volume? If so, you could roll
> back to the latest snapshot. But that would mean we lose some amount
> of data.

Nope, no snapshots - since this is a Subversion repository with versioning built in, I didn't think I'd end up in this situation.

> Also, you mentioned that the volume was in use for a year, but I see
> in the above output that it has only about 22.5K used. Is that
> correct? I would have expected it to be higher.

You're absolutely right, the 22.5K is wrong. That is why I suspect ZFS is doing something wrong ...

> You should also check what 'zpool history -i' says.

It says:

-bash-3.00$ sudo zpool history Data | grep subversion
2008-04-02.09:08:53 zfs create -V 250GB Data/subversion1
2008-04-02.09:08:53 zfs set shareiscsi=on Data/subversion1
2008-08-14.14:13:58 zfs set shareiscsi=off Data/subversion1
2008-08-29.15:08:50 zfs set shareiscsi=on Data/subversion1
2009-03-02.10:37:36 zfs set shareiscsi=off Data/subversion1
2009-03-02.10:37:55 zfs set shareiscsi=on Data/subversion1
2009-03-02.11:37:22 zfs set shareiscsi=off Data/subversion1
2009-03-03.09:37:34 zfs set shareiscsi=on Data/subversion1

and:

2009-03-01.11:26:22 zpool export -f Data
2009-03-01.13:21:58 zpool import Data
2009-03-01.14:32:04 zpool scrub Data

More info: I just rebooted the Solaris server and there is no change in status:

-bash-3.00$ zpool status -v
  pool: Data
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        Data                     ONLINE       0     0     0
          c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors

The scrubbing has stopped, and the zdb command crashed the server.

> [rest of quoted message snipped]

.--------------------------------------------------------------------------.
|Lars-Gunnar Persson                                                        |
|IT-sjef                                                                    |
|                                                                           |
|Nansen senteret for miljø og fjernmåling                                   |
|Adresse  : Thormøhlensgate 47, 5006 Bergen                                 |
|Direkte  : 55 20 58 31, sentralbord: 55 20 58 00, fax: 55 20 58 01         |
|Internett: http://www.nersc.no, e-post: lars-gunnar.persson at nersc.no     |
'--------------------------------------------------------------------------'
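One more thing worth trying with the history: grepping for "subversion" only shows commands that named the volume, while the internally logged events Sanjeev asked about (the -i output) record things such as destroys, receives and property changes that would not match that string. A sketch, assuming the installed zfs bits support the -i flag:

    sudo zpool history -i Data | tail -200    # internal events around the export/import on 2009-03-01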
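Finally, a general point before any further recovery attempts on this pool: with no snapshots of Data/subversion1, whatever is still on the zvol is one experiment away from being overwritten. Snapshotting the volume and, if space allows, copying it off-host first costs little. A sketch with standard commands; the snapshot name and destination path are placeholders:

    # freeze the current state of the zvol before experimenting further
    zfs snapshot Data/subversion1@before-recovery

    # optionally keep a raw copy somewhere else (the path is only an example)
    zfs send Data/subversion1@before-recovery > /backup/subversion1-before-recovery.zfs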