We are using ZFS on a Sun E450 server (4 x 400 MHz CPUs, 1 GB memory, an 18 GB system disk and 19 x 300 GB disks, running OpenSolaris snv_134) for archive storage where speed is not important. We have two RAID-Z1 pools of 8 disks each, plus one spare disk shared between the two pools, and this has apparently worked well since it was set up several months ago.

However, one of our users recently put a 35 GB tar.gz file on this server and uncompressed it to a 215 GB tar file. But when he tried to untar it, after about 43 GB had been extracted we noticed the disk usage reported by df for that ZFS pool wasn't changing much. Using du -sm on the extracted archive directory showed that the size would increase over a period of 30 seconds or so, then suddenly drop back by about 50 MB and start increasing again. In other words, it seemed to be going into some sort of loop; all we could do was kill tar and try again, whereupon exactly the same thing happened after 43 GB had been extracted. Thinking the tar file could be corrupt, we successfully untarred it on a Linux system (1 TB disk with a plain ext3 filesystem).

I suspect my problem may be due to the limited memory on this system, but are there any other things I should take into consideration? It's not a major problem, as the system is intended for storage and users are not supposed to go in and untar huge tarfiles on it - it's not a fast system ;-)

Andy

----------------------------
Andy Thomas,
Time Domain Systems

Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk
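[Editor's note: the du sampling described above can be sketched as a small monitoring script. This is a hypothetical illustration, not from the thread; dir and interval are placeholder parameters, and the thread sampled roughly every 30 seconds.]

```shell
#!/bin/sh
# Hypothetical sketch of the observation described above: sample the
# extracted tree with du -sm and flag any interval where usage drops
# instead of growing (the looping symptom).  dir and interval are
# placeholders, not values from the original post.
dir=${1:-.}
interval=${2:-1}
prev=-1
for sample in 1 2 3; do
    cur=$(du -sm "$dir" 2>/dev/null | awk '{print $1}')
    if [ "$prev" -ge 0 ] && [ "$cur" -lt "$prev" ]; then
        echo "usage dropped: ${prev} MB -> ${cur} MB"
    else
        echo "usage now: ${cur} MB"
    fi
    prev=$cur
    sleep "$interval"
done
```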
On Sat, 13 Aug 2011, andy thomas wrote:

> However, one of our users recently put a 35 GB tar.gz file on this server
> and uncompressed it to a 215 GB tar file. But when he tried to untar it,
> after about 43 GB had been extracted we noticed the disk usage reported
> by df for that ZFS pool wasn't changing much. Using du -sm on the
> extracted archive directory showed that the size would increase over a
> period of 30 seconds or so, then suddenly drop back by about 50 MB and
> start increasing again. In other words, it seemed to be going into some
> sort of loop; all we could do was kill tar and try again, whereupon
> exactly the same thing happened after 43 GB had been extracted.

What 'tar' program were you using? Make sure to also try using the Solaris-provided tar rather than something like GNU tar.

1 GB of memory is not very much for Solaris to use. A minimum of 2 GB is recommended for zfs.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
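[Editor's note: on systems that carry both a native tar and GNU tar, it is easy to extract with the wrong one without noticing. A quick hedged check follows; the paths are assumptions for a typical OpenSolaris layout (native tar at /usr/bin/tar, GNU tar at /usr/gnu/bin/tar or as gtar), not anything stated in the thread. GNU tar answers --version; the native Solaris tar does not.]

```shell
#!/bin/sh
# Report which tar implementations are present.  The listed paths are
# assumptions for a typical OpenSolaris layout; adjust for your system.
for t in /usr/bin/tar /usr/gnu/bin/tar "$(command -v gtar 2>/dev/null)"; do
    [ -n "$t" ] && [ -x "$t" ] || continue
    ver=$("$t" --version 2>/dev/null | head -n 1)
    echo "$t: ${ver:-no --version output (probably native tar)}"
done
```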
On Sat, 13 Aug 2011, Bob Friesenhahn wrote:

> On Sat, 13 Aug 2011, andy thomas wrote:
>> However, one of our users recently put a 35 GB tar.gz file on this
>> server and uncompressed it to a 215 GB tar file. [...]
>
> What 'tar' program were you using? Make sure to also try using the
> Solaris-provided tar rather than something like GNU tar.

I was using GNU tar actually, as the original archive was created on a Linux machine. I will try it again using Solaris tar.

> 1 GB of memory is not very much for Solaris to use. A minimum of 2 GB is
> recommended for zfs.

We are going to upgrade the system to 4 GB as soon as possible.

Thanks for the quick response,

Andy
andy thomas <andy at time-domain.co.uk> wrote:

>> What 'tar' program were you using? Make sure to also try using the
>> Solaris-provided tar rather than something like GNU tar.
>
> I was using GNU tar actually, as the original archive was created on a
> Linux machine. I will try it again using Solaris tar.

GNU tar does not follow the standard when creating archives, so Sun tar may be unable to unpack the archive correctly. GNU tar also does strange things when unpacking symlinks.

I recommend using star; it understands GNU tar archives.

Jörg
--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       joerg.schilling at fokus.fraunhofer.de (work)
Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
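[Editor's note: the incompatibility described above can be demonstrated with GNU tar alone via its --format switch; the sketch below assumes GNU tar is installed as "tar". A single 150-character path component overflows the historic 100-character ustar name field and cannot be split at a slash, so strict ustar must refuse it, while the gnu format stores it with a non-standard extension and the posix format with the standard pax extension.]

```shell
#!/bin/sh
# Sketch of the tar format differences (assumes GNU tar is installed
# as "tar").  One 150-character component cannot fit the 100-character
# ustar name field and cannot be split, so strict ustar refuses it;
# gnu and posix formats store it via their extension mechanisms.
work=$(mktemp -d)
long=$(printf 'd%.0s' $(seq 1 150))      # one 150-char file name
: > "$work/$long"
cd "$work" || exit 1
tar --format=gnu   -cf gnu.tar   "$long" && echo "gnu format: stored"
tar --format=posix -cf posix.tar "$long" && echo "posix (pax) format: stored"
tar --format=ustar -cf ustar.tar "$long" 2>/dev/null \
  || echo "strict ustar: refused (name too long)"
```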
On Sat, 13 Aug 2011, Bob Friesenhahn wrote:

> On Sat, 13 Aug 2011, andy thomas wrote:
>> However, one of our users recently put a 35 GB tar.gz file on this
>> server and uncompressed it to a 215 GB tar file. [...]
>
> What 'tar' program were you using? Make sure to also try using the
> Solaris-provided tar rather than something like GNU tar.

Using the default Solaris tar fixed the problem! I've tended to use GNU tar on Solaris as apparently there was a bug in the Solaris version of tar from very long ago where it would not extract files properly from tarfiles created on non-Solaris systems. Maybe this long-standing bug has been fixed at last?

Thanks a lot for your help,

Andy
On Sat, 13 Aug 2011, Joerg Schilling wrote:

> andy thomas <andy at time-domain.co.uk> wrote:
>
>>> What 'tar' program were you using? Make sure to also try using the
>>> Solaris-provided tar rather than something like GNU tar.
>>
>> I was using GNU tar actually, as the original archive was created on a
>> Linux machine. I will try it again using Solaris tar.
>
> GNU tar does not follow the standard when creating archives, so Sun tar
> may be unable to unpack the archive correctly.

So it is GNU tar that is broken and not Solaris tar? I always thought it was the other way round. Thanks for letting me know.

> But GNU tar makes strange things when unpacking symlinks.
>
> I recommend to use star, it understands GNU tar archives.

I've just installed this (version 1.5a78) from Sunfreeware and am having a play. Danke!

Andy
andy thomas <andy at time-domain.co.uk> wrote:

> So it is GNU tar that is broken and not Solaris tar? I always thought it
> was the other way round. Thanks for letting me know.

Before autumn 2004, Sun tar had several problems with standards compliance, but since then it has been tested against tartest(1) from star.

>> But GNU tar makes strange things when unpacking symlinks.
>>
>> I recommend to use star, it understands GNU tar archives.
>
> I've just installed this (version 1.5a78) from Sunfreeware and am having
> a play.

1.5a78 is a bit old, but it should be sufficient for you. It is recommended to compile a recent version (e.g. from ftp://ftp.berlios.de/pub/schily/), as even the latest 1.5.1 is a bit old and should get an update.

Jörg
andy thomas <andy at time-domain.co.uk> wrote:

> I've tended to use GNU tar on Solaris as apparently there was a bug in
> the Solaris version of tar from very long ago where it would not extract
> files properly from tarfiles created on non-Solaris systems. Maybe this
> long-standing bug has been fixed at last?

This is a result of the fact that GNU tar does not create standard tar archives. GNU tar is a real problem with respect to interoperability. star works on any OS and is standards-compliant.

Jörg
> GNU tar does not follow the standard when creating archives, so Sun
> tar may be unable to unpack the archive correctly.
>
> But GNU tar makes strange things when unpacking symlinks.
>
> I recommend to use star, it understands GNU tar archives.

Even if you used some weird tar program, the I/O pattern would be the same. Blaming userspace for kernel issues is nonsense.

--
Vennlige hilsener / Best regards

Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
[Translated from Norwegian:] In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
I've noticed strange effects between ZFS compression and GNU tar 1.17. Take a look at this forum post: <http://forums.zmanda.com/showthread.php?t=3792>