Justin Conover
2005-Dec-08 04:13 UTC
[zfs-discuss] zfs consumes 150gb more space than my ext3 server?
Previously I had one server at home running RHEL 4 with LVM/ext3 and about 1TB of storage. I built a second server to back that one up, also with about 1TB, and I used ZFS for those file systems. Once all that was done I used rsync to move the data over.

ext3:

df -h /home/amy
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol00  591G  448G  114G  80% /home/amy

zfs:

zfs list zfs/home/amy
NAME           USED  AVAIL  REFER  MOUNTPOINT
zfs/home/amy   596G   104G   596G  /export/home/amy

I'm aware I can use compression (I'm not sure how you tell it to use higher settings), but still, from a one-to-one copy should this really be 148GB fatter?

Any thoughts?

Thanks,
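For comparison, a minimal sketch of how to line up the two sides (assuming the paths above; on the GNU side du can report logical versus allocated size, which helps spot sparse files or per-filesystem overhead):

    # ext3 side: logical file sizes vs. blocks actually allocated
    du -sk --apparent-size /home/amy   # logical size (GNU du)
    du -sk /home/amy                   # allocated blocks

    # ZFS side: what the dataset itself reports
    zfs list zfs/home/amy
    zfs get compressratio zfs/home/amy
    du -sk /export/home/amy            # allocation as ZFS sees it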
Jeff Bonwick
2005-Dec-08 06:24 UTC
[zfs-discuss] zfs consumes 150gb more space than my ext3 server?
> ext3 448G
> zfs  596G

Educated guess: you've set up a 4-disk RAID-Z stripe. There's a bug in the way we report space usage for RAID-Z: it currently includes the parity. This is described in a little more detail here:

http://www.opensolaris.org/jive/post!reply.jspa?messageID=15532

I'm guessing that you have a 4-disk RAID-Z because 596G * 3/4 = 447G.

Jeff
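Spelling out the arithmetic behind that guess (a quick check, not output from the poster's system): with 4 disks in a RAID-Z, each stripe carries roughly one disk's worth of parity, so about 3/4 of the reported USED figure is real data.

    # 596G reported, minus the parity share
    echo '596 * 3 / 4' | bc -l    # ~447, which matches ext3's 448G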
Casper.Dik at Sun.COM
2005-Dec-08 08:04 UTC
[zfs-discuss] zfs consumes 150gb more space than my ext3 server?
> Previously I had one server at home running rhel 4 with lvm/ext3 and
> about 1TB of storage.
>
> I built a second server to back that one up with about 1TB and I used
> zfs for those file systems. Once all that was done I used rsync to mv
> the data over,
>
> ext3
> df -h /home/amy
> Filesystem                       Size  Used Avail Use% Mounted on
> /dev/mapper/VolGroup01-LogVol00  591G  448G  114G  80% /home/amy
>
> zfs
> zfs list zfs/home/amy
> NAME           USED  AVAIL  REFER  MOUNTPOINT
> zfs/home/amy   596G   104G   596G  /export/home/amy
>
> I'm aware I can use compression, not sure how you tell it higher values
> or what not, but still from one-to-one should this really be 148gb fatter?
>
> Any thoughts?

Does rsync handle files with holes, and might you have had some?

Casper
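If holes are the issue, rsync can be asked to recreate them on the destination (a sketch using the paths from the original post and a hypothetical hostname zfsserver; -S/--sparse is a standard rsync option, though the copy would need to be redone for it to help):

    # Re-copy, turning long runs of zeros back into holes on the target
    rsync -avS /home/amy/ zfsserver:/export/home/amy/

    # Spot-check a suspect file: logical size vs. blocks actually allocated
    ls -l somefile
    du -k somefile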
Joerg Schilling
2005-Dec-08 09:45 UTC
[zfs-discuss] zfs consumes 150gb more space than my ext3 server?
Casper.Dik at Sun.COM wrote:

> Does rsync handle files with holes and might you have had some?

AFAIK: No.

The best way to sync two servers with different archs is to use star in incremental mode. There is a special flag that makes incremental syncing easier.

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
       js at cs.tu-berlin.de                 (uni)
       schilling at fokus.fraunhofer.de      (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/old/private/  ftp://ftp.berlios.de/pub/schily
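For reference, a sketch of star's sparse-aware copying (based on star's documented -copy and -sparse options; the specific incremental-sync flag Jörg mentions is not named here, so check star(1) on your version, and the hostname zfsserver is hypothetical):

    # Local tree copy, preserving permissions and holes in sparse files
    star -copy -p -sparse -C /home/amy . /export/home/amy

    # Across machines, the same create/extract pair can be piped over ssh
    star -c -sparse -C /home/amy . | ssh zfsserver star -x -sparse -p -C /export/home/amy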
Justin Conover
2005-Dec-08 15:27 UTC
[zfs-discuss] zfs consumes 150gb more space than my ext3 server?
On 12/8/05, Jeff Bonwick <bonwick at zion.eng.sun.com> wrote:
>
> > ext3 448G
> > zfs  596G
>
> Educated guess: you've set up a 4-disk RAID-Z stripe.
> There's a bug in the way we report space usage for
> RAID-Z: it currently includes the parity. This is
> described in a little more detail here:
>
> http://www.opensolaris.org/jive/post!reply.jspa?messageID=15532
>
> I'm guessing that you have a 4-disk RAID-Z because 596G * 3/4 = 447G.
>
> Jeff

There are 4 x 250GB disks in the RAID-Z, which gave about 920GB of total usable space.

zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
zfs               715G   208G    16K  /zfs
zfs/home          715G   208G  18.0K  /export/home
zfs/home/amy      596G   104G   596G  /export/home/amy
zfs/home/justin   120G  30.4G   120G  /export/home/justin

# zfs list -o name,compressratio
NAME             RATIO
zfs              1.02x
zfs/home         1.02x
zfs/home/amy     1.01x
zfs/home/justin  1.09x

zfs get -o property,value,source all zfs/home/amy
PROPERTY       VALUE                  SOURCE
type           filesystem             -
creation       Mon Dec  5 20:54 2005  -
used           596G                   -
available      104G                   -
referenced     596G                   -
compressratio  1.01x                  -
mounted        yes                    -
quota          700G                   local
reservation    none                   default
recordsize     128K                   default
mountpoint     /export/home/amy       local
sharenfs       rw                     inherited from zfs/home
checksum       on                     default
compression    on                     local
atime          on                     default
devices        on                     default
exec           on                     default
setuid         on                     default
readonly       off                    default
zoned          off                    default
snapdir        visible                default
aclmode        groupmask              default
aclinherit     secure                 default

Can you increase the compressratio?
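On that last question: compressratio is a read-only property that simply reports the ratio achieved; the knob you can turn is the compression property (a sketch, assuming the dataset names above; in ZFS builds of this vintage compression=on means the lzjb algorithm, there is no "higher" setting, and blocks already written are not recompressed):

    # Compression only applies to new writes; existing data keeps its ratio
    zfs set compression=on zfs/home/amy
    zfs get compression,compressratio zfs/home/amy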