I would like to ask a question regarding ZFS performance overhead when
having hundreds of millions of files.

We have a storage solution where one of the datasets has a folder
containing about 400 million files and folders (very small 1K files).

What kind of overhead do we get from this kind of thing?

Our storage performance has degraded over time, and we have been looking
in different places for the cause of the problems, but now I am wondering
if it's simply a file pointer issue?

Cheers
//Rey
-- 
This message posted from opensolaris.org
> What kind of overhead do we get from this kind of thing?

Overheadache...

(Thanks to Kronberg for the answer)
-- 
This message posted from opensolaris.org
Kjetil Torgrim Homme
2010-Feb-24 13:48 UTC
[zfs-discuss] ZFS with hundreds of millions of files
Steve <steve.jackson at norman.com> writes:

> I would like to ask a question regarding ZFS performance overhead when
> having hundreds of millions of files.
>
> We have a storage solution where one of the datasets has a folder
> containing about 400 million files and folders (very small 1K files).
>
> What kind of overhead do we get from this kind of thing?

at least 50%.  I don't think this is obvious, so I'll state it: RAID-Z
will not gain you any additional capacity over mirroring in this
scenario.  remember each individual file gets its own stripe.  if the
file is 512 bytes or less, you'll need another 512 byte block for the
parity (actually, as a special case it isn't real parity but a copy:
XOR parity over a single data block is just that block, so there is no
point spending time computing anything).  what's more, even if the file
is 1024 bytes or less, ZFS will allocate an additional padding block to
reduce the chance of unusable single disk blocks.  a 1536 byte file will
also consume 2048 bytes of physical disk, however.  the reasoning for
RAID-Z2 is similar, except it will add a padding block even for the 1536
byte file.  to summarise:

  net     raid-z1        raid-z2
  --------------------------------
   512    1024   2x      1536   3x
  1024    2048   2x      3072   3x
  1536    2048   1.3x    3072   2x
  2048    3072   1.5x    3072   1.5x
  2560    3072   1.2x    3584   1.4x

the above assumes at least 8 (9) disks in the vdev, otherwise you'll get
a little more overhead for the "larger" filesizes.

> Our storage performance has degraded over time, and we have been
> looking in different places for the cause of the problems, but now I am
> wondering if it's simply a file pointer issue?

adding new files will fragment directories, and that might cause
performance degradation depending on access patterns.  I don't think
many files in itself will cause problems, but since you get a lot more
ZFS records in your dataset (128x!), more of the disk space is "wasted"
on block pointers, and you may get more block pointer writes since more
levels are needed.
-- 
Kjetil T. Homme
Redpill Linpro AS - Changing the game
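If you want to see this effect on a real raidz pool, one rough check is to
watch raw pool allocation before and after writing a batch of tiny files.
This is only a sketch: the pool name "tank" and the scratch dataset are made
up, and the zpool list column is ALLOC on recent builds (USED on older ones).
Run the loop under ksh or bash.

    # create a scratch dataset and note the pool's raw allocation
    zfs create tank/smallfile-test
    zpool list tank                     # note ALLOC before

    # write 10,000 files of 1 KB each, i.e. roughly 10 MB of data
    i=0
    while [ $i -lt 10000 ]; do
        dd if=/dev/urandom of=/tank/smallfile-test/f$i bs=1k count=1 2>/dev/null
        i=$((i + 1))
    done
    sync

    zpool list tank                     # the ALLOC delta is raw allocation,
                                        # parity and padding included; expect
                                        # roughly 2x (raidz1) or 3x (raidz2)
                                        # the logical ~10 MB

Clean up with zfs destroy tank/smallfile-test when done.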
Hei Kjetil,

Actually we are using hardware RAID-5 on this setup, so Solaris only
sees a single device.

The overhead I was thinking of was more in the pointer structures
(bearing in mind this is a 128-bit file system). I would guess that the
memory requirements would be HUGE for all these files... otherwise the
ARC is gonna struggle and the paging system is going mental?

//Rey
-- 
This message posted from opensolaris.org
Bob Friesenhahn
2010-Feb-24 20:09 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On Wed, 24 Feb 2010, Steve wrote:
>
> The overhead I was thinking of was more in the pointer structures
> (bearing in mind this is a 128-bit file system). I would guess that the
> memory requirements would be HUGE for all these files... otherwise the
> ARC is gonna struggle and the paging system is going mental?

It is not reasonable to assume that zfs has to retain everything in
memory.

I have a directory here containing a million files and it has not caused
any strain for zfs at all although it can cause considerable stress on
applications.

400 million tiny files is quite a lot and I would hate to use anything
but mirrors with so many tiny files.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On 24 February, 2010 - Bob Friesenhahn sent me these 1,0K bytes:

> On Wed, 24 Feb 2010, Steve wrote:
>>
>> The overhead I was thinking of was more in the pointer structures
>> (bearing in mind this is a 128-bit file system). I would guess that the
>> memory requirements would be HUGE for all these files... otherwise the
>> ARC is gonna struggle and the paging system is going mental?
>
> It is not reasonable to assume that zfs has to retain everything in
> memory.
>
> I have a directory here containing a million files and it has not caused
> any strain for zfs at all although it can cause considerable stress on
> applications.
>
> 400 million tiny files is quite a lot and I would hate to use anything
> but mirrors with so many tiny files.

Another thought is "am I using the correct storage model for this data"?

/Tomas
-- 
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
Nicolas Williams
2010-Feb-24 20:39 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote:
> I have a directory here containing a million files and it has not
> caused any strain for zfs at all although it can cause considerable
> stress on applications.

The biggest problem is always the apps.  For example, ls by default
sorts, and if you're using a locale with a non-trivial collation (e.g.,
any UTF-8 locale) then the sort gets very expensive.

Nico
-- 
On 24-Feb-10, at 3:38 PM, Tomas Ögren wrote:

> On 24 February, 2010 - Bob Friesenhahn sent me these 1,0K bytes:
>
>> On Wed, 24 Feb 2010, Steve wrote:
>>>
>>> The overhead I was thinking of was more in the pointer structures
>>> (bearing in mind this is a 128-bit file system). I would guess that
>>> the memory requirements would be HUGE for all these files...
>>> otherwise the ARC is gonna struggle and the paging system is going
>>> mental?
>>
>> It is not reasonable to assume that zfs has to retain everything in
>> memory.
>>
>> I have a directory here containing a million files and it has not
>> caused any strain for zfs at all although it can cause considerable
>> stress on applications.
>>
>> 400 million tiny files is quite a lot and I would hate to use
>> anything but mirrors with so many tiny files.
>
> Another thought is "am I using the correct storage model for this
> data"?

You're not the only one wondering that. :)

--Toby

> /Tomas
> -- 
> Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
> |- Student at Computing Science, University of Umeå
> `- Sysadmin at {cs,acc}.umu.se
On Wed, Feb 24 at 14:09, Bob Friesenhahn wrote:
> 400 million tiny files is quite a lot and I would hate to use
> anything but mirrors with so many tiny files.

And at 400 million, you're in the realm of needing mirrors of SSDs,
with their fast random reads.  Even at the 500+ IOPS of good SAS drives,
you're looking at a TON of spindles to move through 400 million 1KB
files quickly.

--eric

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
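To put a rough number on that (a back-of-the-envelope sketch assuming one
random read per file, no cache hits, and the 500 IOPS per drive figure
quoted above):

    # seconds to touch every file once with a single spindle
    echo '400000000 / 500 / 86400' | bc -l          # about 9.3 days for one drive

    # spread across 10 mirrored pairs (reads served from either side)
    echo '400000000 / (500 * 20) / 3600' | bc -l    # still roughly 11 hours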
It was never the intention that this storage system should be used in
this way... and I am now clearing a lot of this stuff out.

These are very static files and they are rarely used, so traversing them
in any way is a rare occasion.

What has happened is that reading and writing large files which are
unrelated to these ones has become appallingly slow. So I was wondering
if just the presence of so many files was in some way putting a lot of
stress on the pool, even if these files aren't used very often.
-- 
This message posted from opensolaris.org
On Wed, Feb 24, 2010 at 11:09 PM, Bob Friesenhahn
<bfriesen at simple.dallas.tx.us> wrote:
> On Wed, 24 Feb 2010, Steve wrote:
>>
>> The overhead I was thinking of was more in the pointer structures
>> (bearing in mind this is a 128-bit file system). I would guess that the
>> memory requirements would be HUGE for all these files... otherwise the
>> ARC is gonna struggle and the paging system is going mental?
>
> It is not reasonable to assume that zfs has to retain everything in memory.

At the same time, 400M files in a single directory should lead to a lot
of contention on the locks associated with look-ups. Spreading the files
between a reasonable number of dirs could mitigate this.

Regards,
Andrey

> I have a directory here containing a million files and it has not caused any
> strain for zfs at all although it can cause considerable stress on
> applications.
>
> 400 million tiny files is quite a lot and I would hate to use anything but
> mirrors with so many tiny files.
>
> Bob
> -- 
> Bob Friesenhahn
> bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
That's not the issue here, as they are spread out in a folder structure
based on an integer split into hex blocks... 00/00/00/01 etc.

But the number of pointers involved with all these files, and
directories (which are files), must have an impact on a system with
limited RAM?

There is 4GB RAM in this system, btw.
-- 
This message posted from opensolaris.org
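For a rough sense of scale (back-of-the-envelope only; in-core structure
sizes differ from on-disk sizes and vary by release): each file has a
512-byte dnode on disk, and for a 1 KB file its single block pointer is
embedded in that dnode, so the per-file metadata alone is on the order of

    # 400 million dnodes at 512 bytes each, in GiB
    echo '400000000 * 512 / 1024 / 1024 / 1024' | bc -l    # ~190 GiB

and that is before counting the directory ZAP blocks. Only a small slice
of that can ever sit in a 4 GB ARC at once, so scattered access to the
metadata will mostly miss.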
Bob Friesenhahn
2010-Feb-24 21:31 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On Wed, 24 Feb 2010, Steve wrote:
>
> What has happened is that reading and writing large files which are
> unrelated to these ones has become appallingly slow. So I was
> wondering if just the presence of so many files was in some way
> putting a lot of stress on the pool, even if these files aren't used
> very often.

If these millions of files were built up over a long period of time
while large files were also being created, then they may contribute to
an increased level of filesystem fragmentation.

With millions of such tiny files, it makes sense to put the small files
in a separate zfs filesystem which has its recordsize property set to a
size not much larger than the size of the files. This should reduce
waste, resulting in reduced potential for fragmentation in the rest of
the pool.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
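For what it's worth, applying that suggestion is a one-liner (the dataset
name is invented; see the follow-ups below questioning whether lowering
recordsize actually changes anything for files this small):

    # separate dataset for the tiny files, recordsize capped near the file size
    zfs create -o recordsize=4k tank/smallfiles
    zfs get recordsize tank/smallfiles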
On Feb 24, 2010, at 1:17 PM, Steve wrote:
> It was never the intention that this storage system should be used in
> this way... and I am now clearing a lot of this stuff out.
>
> These are very static files and they are rarely used, so traversing
> them in any way is a rare occasion.
>
> What has happened is that reading and writing large files which are
> unrelated to these ones has become appallingly slow. So I was wondering
> if just the presence of so many files was in some way putting a lot of
> stress on the pool, even if these files aren't used very often.

There are (recent) improvements to the allocator that should help this
scenario.  What release are you running?
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)
On Thu, Feb 25, 2010 at 12:26 AM, Steve <steve.jackson at norman.com> wrote:
> That's not the issue here, as they are spread out in a folder structure
> based on an integer split into hex blocks... 00/00/00/01 etc.
>
> But the number of pointers involved with all these files, and
> directories (which are files), must have an impact on a system with
> limited RAM?
>
> There is 4GB RAM in this system, btw.

If any significant portion of these 400M files is accessed on a regular
basis, you'd be
(1) stressing the ARC to its limits, and
(2) stressing the spindles so that any concurrent sequential I/O would suffer.

Small files are always an issue; try moving them off HDDs onto mirrored
SSDs, not necessarily the most expensive ones. 400M 2K files is just
400GB, within the reach of a few SSDs.

Regards,
Andrey
On Thu, Feb 25, 2010 at 12:34 AM, Andrey Kuzmin
<andrey.v.kuzmin at gmail.com> wrote:
> On Thu, Feb 25, 2010 at 12:26 AM, Steve <steve.jackson at norman.com> wrote:
>> That's not the issue here, as they are spread out in a folder structure
>> based on an integer split into hex blocks... 00/00/00/01 etc.
>>
>> But the number of pointers involved with all these files, and
>> directories (which are files), must have an impact on a system with
>> limited RAM?
>>
>> There is 4GB RAM in this system, btw.
>
> If any significant portion of these 400M files is accessed on a regular
> basis, you'd be
> (1) stressing the ARC to its limits, and
> (2) stressing the spindles so that any concurrent sequential I/O would suffer.
>
> Small files are always an issue; try moving them off HDDs onto mirrored
> SSDs, not necessarily the most expensive ones. 400M 2K files is

1K meant, fat fingers.

Regards,
Andrey

> just 400GB, within the reach of a few SSDs.
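A minimal sketch of what Andrey describes: a small pool of mirrored SSDs
dedicated to the tiny-file tree. Pool, dataset, and device names are all
made up, and the files would still have to be copied over afterwards.

    # two mirrored pairs of SSDs as a separate pool for the small files
    zpool create smallpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
    zfs create -o atime=off smallpool/objects
    zpool list smallpool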
Well, I am deleting most of them anyway... they are not needed anymore.

Will deletion solve the problem, or do I need to do something more to
defrag the file system?

I have understood that defrag will not be available until this block
rewrite thing is done?
-- 
This message posted from opensolaris.org
Robert Milkowski
2010-Feb-24 21:46 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On 24/02/2010 21:31, Bob Friesenhahn wrote:
> On Wed, 24 Feb 2010, Steve wrote:
>>
>> What has happened is that reading and writing large files which are
>> unrelated to these ones has become appallingly slow. So I was
>> wondering if just the presence of so many files was in some way
>> putting a lot of stress on the pool, even if these files aren't used
>> very often.
>
> If these millions of files were built up over a long period of time
> while large files were also being created, then they may contribute to
> an increased level of filesystem fragmentation.
>
> With millions of such tiny files, it makes sense to put the small
> files in a separate zfs filesystem which has its recordsize property
> set to a size not much larger than the size of the files. This should
> reduce waste, resulting in reduced potential for fragmentation in the
> rest of the pool.

Except for one bug (which has since been fixed) that had to do with
consuming lots of CPU to find a free block, I don't think you are right.
You don't have to set recordsize to a smaller value for small files.
The recordsize property sets the maximum allowed record size, but other
than that the record size is selected automatically when a file is
created, so for small files their recordsize will be small even if the
property is set to the default 128KB.

-- 
Robert Milkowski
http://milek.blogspot.com
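One way to check what block size ZFS actually chose for one of these files
is zdb. A sketch only: the path is a placeholder, the object number comes
from ls -i, and the output format varies a bit between releases.

    # the first field of ls -i is the file's object number in the dataset
    ls -i /tank/data/00/00/00/01/somefile

    # dump that object's dnode; the "dblk" column is the data block size,
    # which stays around 1K for a 1 KB file even with recordsize at 128K
    zdb -dddd tank/data OBJECT_NUMBER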
Nicolas Williams
2010-Feb-24 21:47 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On Wed, Feb 24, 2010 at 03:31:51PM -0600, Bob Friesenhahn wrote:
> With millions of such tiny files, it makes sense to put the small
> files in a separate zfs filesystem which has its recordsize property
> set to a size not much larger than the size of the files. This should
> reduce waste, resulting in reduced potential for fragmentation in the
> rest of the pool.

Tuning the dataset recordsize down does not help in this case.  The
files are already small, so their recordsize is already small.

Nico
-- 
Bob Friesenhahn
2010-Feb-24 21:54 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On Wed, 24 Feb 2010, Robert Milkowski wrote:
> Except for one bug (which has since been fixed) that had to do with
> consuming lots of CPU to find a free block, I don't think you are right.
> You don't have to set recordsize to a smaller value for small files.
> The recordsize property sets the maximum allowed record size, but other
> than that the record size is selected automatically when a file is
> created, so for small files their recordsize will be small even if the
> property is set to the default 128KB.

Didn't we hear on this list just recently that zfs no longer writes
short tail blocks (i.e. zfs behavior has been changed)?  Did I
misunderstand?

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
David Dyer-Bennet
2010-Feb-24 21:57 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On Wed, February 24, 2010 14:39, Nicolas Williams wrote:
> On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote:
>> I have a directory here containing a million files and it has not
>> caused any strain for zfs at all although it can cause considerable
>> stress on applications.
>
> The biggest problem is always the apps.  For example, ls by default
> sorts, and if you're using a locale with a non-trivial collation (e.g.,
> any UTF-8 locale) then the sort gets very expensive.

Which is bad enough if you say "ls".  And there's no option to say
"don't sort" that I know of, either.

If you say "ls *" it's in some ways worse, in that the "*" is expanded
by the shell, and most of the filenames don't make it to ls at all.
("ls abc*" is more likely, but with a million files that can still
easily overflow the argument limit.)

There really ought to be an option to make ls not sort, at least.

-- 
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
I manage several systems with near a billion objects (the largest is
currently 800M) on each and also discovered slowness over time. This is
on X4540 systems with average file sizes of ~5KB. In our environment the
following readily sped up performance significantly:

- Do not use RAID-Z. Use as many mirrored disks as you can. This has
  been discussed before.
- Nest data in directories as deeply as possible. Although ZFS doesn't
  really care, client utilities certainly do, and operations in large
  directories cause needless overhead.
- Make sure you do not use the filesystem past 80% capacity. As
  available space decreases, the overhead of allocating new files grows.
- Do not keep snapshots around forever (although we keep them around
  for months now without issue).
- Use ZFS compression (gzip worked best for us).
- Record size did not make a significant change with our data, so we
  left it at 128K.
- You need lots of memory for a big ARC. Do not use the system for
  anything other than serving files. Don't put pressure on system
  memory, and let the ARC do its thing.
- We now use the F20 cache cards as a huge L2ARC in each server, which
  makes a large impact once the cache is primed. Caching all that file
  metadata really helps.
- I found using SSDs over iSCSI as an L2ARC was just as effective, so
  you don't necessarily need expensive PCIe flash.

After these tweaks the systems are blazingly quick, able to do many
1000's of ops/second and deliver full gigE line speed even on fully
random workloads. Your mileage may vary, but for now I am very happy
with the systems (and rightfully so given their performance potential!)

-- 
Adam Serediuk
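For reference, most of the knobs in that list map onto one-liners like the
following (pool and dataset names are invented, and the cache device is
whatever SSD or iSCSI LUN you present to the host):

    zfs set compression=gzip tank/data       # gzip compression on the dataset
    zpool add tank cache c3t0d0              # add an SSD (or iSCSI LUN) as L2ARC
    zpool list tank                          # watch CAP to stay under the 80% line
    kstat -m zfs -n arcstats | grep -w size  # current ARC size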
Also you will need to ensure that atime is turned off for the ZFS
volume(s) in question, as well as in any client-side NFS mount settings.
There are a number of client-side NFS tuning parameters that can be set
if you are using NFS clients with this system. Attribute caches, atime,
diratime, etc. all make a large difference when dealing with very large
data sets.

On 24-Feb-10, at 2:05 PM, Adam Serediuk wrote:

> I manage several systems with near a billion objects (the largest is
> currently 800M) on each and also discovered slowness over time. This
> is on X4540 systems with average file sizes of ~5KB. In our
> environment the following readily sped up performance significantly:
>
> [...]
>
> After these tweaks the systems are blazingly quick, able to do many
> 1000's of ops/second and deliver full gigE line speed even on fully
> random workloads. Your mileage may vary, but for now I am very happy
> with the systems (and rightfully so given their performance potential!)
>
> -- 
> Adam Serediuk
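Concretely, that would look something like this (the dataset name and the
Linux-style NFS mount options are examples; the right attribute-cache
values depend on the workload):

    # server side: stop recording access times
    zfs set atime=off tank/data

    # Linux NFS client side: skip atime updates and cache attributes longer
    mount -t nfs -o noatime,nodiratime,actimeo=60 server:/tank/data /mnt/data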
Robert Milkowski
2010-Feb-24 22:45 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On 24/02/2010 21:54, Bob Friesenhahn wrote:
> On Wed, 24 Feb 2010, Robert Milkowski wrote:
>
>> Except for one bug (which has since been fixed) that had to do with
>> consuming lots of CPU to find a free block, I don't think you are right.
>> You don't have to set recordsize to a smaller value for small files.
>> The recordsize property sets the maximum allowed record size, but other
>> than that the record size is selected automatically when a file is
>> created, so for small files their recordsize will be small even if the
>> property is set to the default 128KB.
>
> Didn't we hear on this list just recently that zfs no longer writes
> short tail blocks (i.e. zfs behavior has been changed)?  Did I
> misunderstand?

Yep, but the last block will be the same size as all the other blocks.
So if you have a small file where zfs used a 1kb block, the tail block
will be 1kb as well and not 128kb, even if the default recordsize is
128k. I think that only if you made the recordsize property considerably
smaller than the average block size could you, in theory, save some
space on tail blocks.

-- 
Robert Milkowski
http://milek.blogspot.com
Robert Milkowski
2010-Feb-24 22:50 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On 24/02/2010 21:40, Steve wrote:
> Well, I am deleting most of them anyway... they are not needed anymore.
>
> Will deletion solve the problem, or do I need to do something more to
> defrag the file system?
>
> I have understood that defrag will not be available until this block
> rewrite thing is done?

First, the question is whether fragmentation is actually your problem.

If it is, and you have lots of (contiguous) free disk space, then by
copying a file you want to defrag you will defragment that file. You
might get better results if you do it after you remove most of the
files, even if you have plenty of disk space available now. It is not
as elegant as having a background defrag, though.

-- 
Robert Milkowski
http://milek.blogspot.com
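A crude sketch of the copy-to-defragment approach for a single large file
(assuming nothing has the file open and you can afford the temporary extra
space; the path is made up):

    # rewrite the file into (hopefully contiguous) free space, then swap it in
    cp -p /tank/data/bigfile /tank/data/bigfile.defrag
    mv /tank/data/bigfile.defrag /tank/data/bigfile

Doing it after the mass delete, as Robert suggests, gives the allocator
more contiguous free space to work with.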
David Dyer-Bennet
2010-Feb-25 01:25 UTC
[zfs-discuss] ZFS with hundreds of millions of files
On 2/24/2010 4:11 PM, Stefan Walk wrote:
>
> On 24 Feb 2010, at 22:57, David Dyer-Bennet wrote:
>
>> On Wed, February 24, 2010 14:39, Nicolas Williams wrote:
>>> On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote:
>>>> I have a directory here containing a million files and it has not
>>>> caused any strain for zfs at all although it can cause considerable
>>>> stress on applications.
>>>
>>> The biggest problem is always the apps.  For example, ls by default
>>> sorts, and if you're using a locale with a non-trivial collation
>>> (e.g., any UTF-8 locale) then the sort gets very expensive.
>>
>> Which is bad enough if you say "ls".  And there's no option to say
>> "don't sort" that I know of, either.
>>
>> If you say "ls *" it's in some ways worse, in that the "*" is expanded
>> by the shell, and most of the filenames don't make it to ls at all.
>> ("ls abc*" is more likely, but with a million files that can still
>> easily overflow the argument limit.)
>>
>> There really ought to be an option to make ls not sort, at least.
>
> ls -f?

The man page sure doesn't sound like it:

     -f    Forces each argument to be interpreted as a directory and
           list the name found in each slot. This option turns off -l,
           -t, -s, -S, and -r, and turns on -a. The order is the order
           in which entries appear in the directory.

And it doesn't look like it plays well with others.

Playing with it, it does look like it works for listing all the files in
one directory without sorting, though, so yes, that's a useful solution
to the problem.  (Yikes, what an awful description in the man pages!)

-- 
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
Kjetil Torgrim Homme
2010-Feb-25 01:29 UTC
[zfs-discuss] ZFS with hundreds of millions of files
"David Dyer-Bennet" <dd-b at dd-b.net> writes:> Which is bad enough if you say "ls". And there''s no option to say > "don''t sort" that I know of, either./bin/ls -f "/bin/ls" makes sure an alias for "ls" to "ls -F" or similar doesn''t cause extra work. you can also write "\ls -f" to ignore a potential alias. without an argument, GNU ls and SunOS ls behave the same. if you write "ls -f *" you''ll only get output for directories in SunOS, while GNU ls will list all files. (ls -f has been there since SunOS 4.0 at least) -- Kjetil T. Homme Redpill Linpro AS - Changing the game
Could also try /usr/gnu/bin/ls -U.

I'm working on improving the memory profile of /bin/ls (as it gets
somewhat excessive when dealing with large directories), which as a side
effect should also help with this.

Currently /bin/ls allocates a structure for every file and doesn't
output anything until it's finished reading the entire directory, so
even if it skips the sort, that's generally only a fraction of the total
time spent and doesn't save you much. The structure also contains some
duplicative data (I'm guessing that at the time, a _long_ time ago, the
decision was made to precompute some stuff versus testing the mode bits;
probably premature optimization, even then).

I'm trying to make it do what's necessary and avoid duplicate work (so,
for example, if the output doesn't need to be sorted it can display the
entries as they are read, though the situations where this can be done
are not as common as you might think). Hopefully once I'm done (I've
been tied down with some other stuff), I'll be able to post some
results.

On Wed, Feb 24, 2010 at 7:29 PM, Kjetil Torgrim Homme
<kjetilho at linpro.no> wrote:
> "David Dyer-Bennet" <dd-b at dd-b.net> writes:
>
>> Which is bad enough if you say "ls".  And there's no option to say
>> "don't sort" that I know of, either.
>
> /bin/ls -f
>
> "/bin/ls" makes sure an alias for "ls" to "ls -F" or similar doesn't
> cause extra work.  you can also write "\ls -f" to ignore a potential
> alias.
>
> without an argument, GNU ls and SunOS ls behave the same.  if you write
> "ls -f *" you'll only get output for directories in SunOS, while GNU ls
> will list all files.
>
> (ls -f has been there since SunOS 4.0 at least)
> -- 
> Kjetil T. Homme
> Redpill Linpro AS - Changing the game
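Quick usage examples of the unsorted variants mentioned in this thread
(paths are placeholders; /usr/gnu/bin assumes the GNU utilities are
installed):

    # list without sorting (and without paying for locale collation)
    /bin/ls -f /tank/data/00/00 | head
    /usr/gnu/bin/ls -U /tank/data/00/00 | head

    # GNU find prints entries as it reads the directory, so it starts
    # producing output immediately even on huge directories
    /usr/gnu/bin/find /tank/data/00/00 -maxdepth 1 | head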