Jim Horng
2010-May-09 18:16 UTC
[zfs-discuss] How can I be sure the zfs send | zfs received is correct?
Okay, so after some tests with dedup on snv_134, I decided we cannot use the dedup feature for the time being. While unable to destroy a dedupped file system, I decided to migrate the file system to another pool and then destroy the pool. (See the threads below.)

http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=75
http://opensolaris.org/jive/thread.jspa?threadID=128620&tstart=60

Now here is my problem. I took a snapshot of the file system I want to migrate, then did a send and receive of the file system:

zfs send tank/export/projects/project1_nb@today | zfs receive -d mpool

but the file system ended up smaller than the original file system, even without dedup turned on. How is this possible? Can someone explain? I am not able to trust the data until I can verify that the data are identical.

SunOS filearch1 5.11 snv_134 i86pc i386 i86xpv Solaris

root@filearch1:/var/adm# zpool status
  pool: mpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          c7t7d0    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0

errors: No known data errors

root@filearch1:/var/adm# zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
mpool                                 407G   278G    22K  /mpool
mpool/export                          407G   278G    22K  /mpool/export
mpool/export/projects                 407G   278G    23K  /mpool/export/projects
mpool/export/projects/bali_nobackup   407G   278G   407G  /mpool/export/projects/project1_nb
< ...>
tank                                  520G  4.11T  34.9K  /tank
tank/export/projects                  515G  4.11T  41.5K  /export/projects
tank/export/projects/bali_nobackup    427G  4.11T   424G  /export/projects/project1_nb

root@filearch1:/var/adm# zfs get compressratio
NAME                                     PROPERTY       VALUE  SOURCE
mpool                                    compressratio  2.43x  -
mpool/export                             compressratio  2.43x  -
mpool/export/projects                    compressratio  2.43x  -
mpool/export/projects/project1_nb        compressratio  2.43x  -
mpool/export/projects/project1_nb@today  compressratio  2.43x  -
tank                                     compressratio  2.34x  -
tank/export                              compressratio  2.34x  -
tank/export/projects                     compressratio  2.34x  -
tank/export/projects/project1_nb         compressratio  2.44x  -
tank/export/projects/project1_nb@today   compressratio  2.44x  -
tank/export/projects/project1_nb_2       compressratio  1.00x  -
tank/export/projects/project1_nb_3       compressratio  1.90x  -

root@filearch1:/var/adm# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
mpool   696G   407G   289G    58%  1.00x  ONLINE  -
rpool  19.9G  9.50G  10.4G    47%  1.00x  ONLINE  -
tank   5.44T   403G  5.04T     7%  2.53x  ONLINE  -
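For what it is worth, one way to convince yourself the copied data really is identical, independent of the space accounting, is to checksum both trees and compare. A minimal sketch, assuming the mountpoints from the listing above, that the source snapshot is reachable under .zfs/snapshot, and using Solaris digest(1); the /var/tmp output file names are just examples:

    # checksum the source as it was at @today, and the received copy
    cd /export/projects/project1_nb/.zfs/snapshot/today
    find . -type f -exec digest -v -a md5 {} \; | sort > /var/tmp/src.md5

    cd /mpool/export/projects/project1_nb
    find . -type f -exec digest -v -a md5 {} \; | sort > /var/tmp/dst.md5

    # no output from diff means every file's contents match
    diff /var/tmp/src.md5 /var/tmp/dst.md5

This compares file contents only; it says nothing about why the USED numbers differ.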
Richard Elling
2010-May-09 18:47 UTC
[zfs-discuss] How can I be sure the zfs send | zfs received is correct?
On May 9, 2010, at 11:16 AM, Jim Horng wrote:

> Okay, so after some tests with dedup on snv_134, I decided we cannot use the dedup feature for the time being.
>
> [...]
>
> zfs send tank/export/projects/project1_nb@today | zfs receive -d mpool
>
> but the file system ended up smaller than the original file system, even without dedup turned on. How is this possible?

What you think you are measuring is not what you are measuring. Compare the size of the snapshots.
 -- richard

-- 
ZFS storage and performance consulting at http://www.RichardElling.com
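A quick way to make that comparison (standard ZFS properties, dataset names taken from the listing above) is to look at the snapshots themselves rather than the file systems, since a file system's USED also counts children and snapshots, and as far as I know dedup savings only show up at the pool level in zpool list, not in per-dataset accounting:

    zfs get used,referenced,compressratio \
        tank/export/projects/project1_nb@today \
        mpool/export/projects/project1_nb@today

REFER on the two snapshots is the apples-to-apples figure for how much data each side presents.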
Jim Horng
2010-May-09 19:04 UTC
[zfs-discuss] How can I be sure the zfs send | zfs received is correct?
Size of snapshot?

root@filearch1:/var/adm# zfs list mpool/export/projects/project1_nb@today
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
mpool/export/projects/project1_nb@today      0      -   407G  -

root@filearch1:/var/adm# zfs list tank/export/projects/project1_nb@today
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
tank/export/projects/project1_nb@today  2.44G      -   424G  -
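To see where the space in each tree actually sits (live data vs. snapshots vs. children), the space accounting columns may also help; a sketch using the dataset names above:

    zfs list -o space -r tank/export/projects mpool/export/projects

The USEDDS and USEDSNAP columns break USED down into what the live file system references and what is held only by snapshots.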
Roy Sigurd Karlsbakk
2010-May-10 17:03 UTC
[zfs-discuss] How can I be sure the zfs send | zfs received is correct?
----- "Jim Horng" <jhorng at stretchinc.com> wrote:

> zfs send tank/export/projects/project1_nb@today | zfs receive -d mpool

Perhaps zfs send -R is what you're looking for...

roy
-- 
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
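For reference, a replication stream carries the dataset together with all of its snapshots and locally set properties (including compression and dedup settings), which may or may not be what is wanted here. A sketch, using a hypothetical @migrate snapshot and assuming the earlier partial copy on mpool has been destroyed first:

    zfs snapshot -r tank/export/projects/project1_nb@migrate
    zfs send -R tank/export/projects/project1_nb@migrate | zfs receive -d mpool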
Brandon High
2010-May-10 17:06 UTC
[zfs-discuss] How can I be sure the zfs send | zfs received is correct?
On Sun, May 9, 2010 at 11:16 AM, Jim Horng <jhorng at stretchinc.com> wrote:

> zfs send tank/export/projects/project1_nb@today | zfs receive -d mpool

This won't get any snapshots before @today, which may lead to the received size being smaller. I've also noticed that different pool types (e.g. raidz vs. mirror) can lead to slight differences in space usage.

-B

-- 
Brandon High : bhigh at freaks.com
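If there had been earlier snapshots worth keeping, the usual pattern is a full send of the oldest snapshot followed by an incremental stream carrying everything up to the newest one; a sketch with a hypothetical @oldest snapshot:

    # full stream of the oldest snapshot, then all intermediate snapshots up to @today
    zfs send tank/export/projects/project1_nb@oldest | zfs receive -d mpool
    zfs send -I @oldest tank/export/projects/project1_nb@today | zfs receive -d mpool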
Jim Horng
2010-May-10 18:52 UTC
[zfs-discuss] How can I be sure the zfs send | zfs received is correct?
I was expecting zfs send tank/export/projects/project1_nb@today to send everything up to @today. That is the only snapshot, and I am not using the -i option.

What worries me is that tank/export/projects/project1_nb was the first file system I tested with full dedup and compression, and the first ~300 GB of usage (before I merged in the other file systems) showed a ~2.5x dedup ratio, so the data should easily be more than 600 GB. My initial worry when I started was that the migration pool would not even have enough space to receive the file system, but the result turned out to be very unexpected.

My question is: where did the dedupped data go, if the new pool shows a 1.00x dedup ratio and the old pool shows 2.53x, yet both take up about the same space, ~400 GB?

Is the -R option required for what I am trying to do? What I am trying to do is un-dedup the file system; I would actually prefer that none of the properties were replicated. This is quite confusing, and I won't be surprised if other people are taking incomplete backups with zfs send if that's the case. I will redo the send with -R and see what happens.
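For what it's worth, a plain zfs send without -R (or -p) does not carry dataset properties at all, so the copy simply inherits whatever is configured on the receiving side; that should effectively give the un-dedup behaviour being asked for. A minimal sketch of that approach, with the property checks only as examples:

    # make sure the destination will not dedup the incoming data
    zfs set dedup=off mpool
    zfs set compression=on mpool          # keep compression on the copy, if wanted

    zfs send tank/export/projects/project1_nb@today | zfs receive -d mpool

    # check which properties ended up set locally on the copy, and that the pool stayed at 1.00x
    zfs get -r -s local all mpool/export/projects
    zpool list mpool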