Hi,

An IT organization needs to implement a highly available file server using Solaris 10, Sun Cluster, NFS and Samba. We are talking about thousands, even tens of thousands, of ZFS file systems.

Is this doable? Should I expect any impact on performance or stability from having that many mounted file systems, with everything that implies ('df | wc -l' returning thousands of lines, for instance)?

Thanks,

Rafael.
--
Rafael Friedlander * Solutions Architect * Sun Microsystems, Inc.
Phone 972 9 971-0564 (X10564)  Mobile 972 544 971-564  Fax 972 9 951-3467
Email Rafael.Friedlander at Sun.COM
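P.S. For what it's worth, for day-to-day checks we would probably lean on 'zfs list' rather than df, and redirect the big listings to a file, along these lines (the output path is just an example):

    # count the ZFS datasets without paging through df
    zfs list -H -o name | wc -l

    # keep the full listing in a file rather than on the terminal
    zfs list -H -o name,mountpoint > /var/tmp/zfs-mounts.txt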
Hello Rafael,

Monday, October 30, 2006, 2:58:56 PM, you wrote:

> An IT organization needs to implement a highly available file server
> using Solaris 10, Sun Cluster, NFS and Samba. We are talking about
> thousands, even tens of thousands, of ZFS file systems.
>
> Is this doable? Should I expect any impact on performance or stability
> from having that many mounted file systems ('df | wc -l' returning
> thousands of lines, for instance)?

1. Rebooting the server could take several hours right now with that many file systems. I believe this problem is being addressed.

2. Each file system consumes some memory when mounted, so you can end up with much of your memory consumed just by mounting file systems. Something was done recently to reduce this, but I haven't been following it.

3. Backup: depending on the software you are going to use, backing up and restoring that many file systems could be tricky, or it could be fine.

--
Best regards,
Robert                          mailto:rmilkowski@task.gda.pl
                                milek.blogspot.com
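P.S. Regarding point 3: if the backup software can't cope, one workaround is to script it per file system with snapshots and zfs send. A rough, untested sketch; "tank" and "backuphost" are placeholders, and it assumes a build with recursive snapshot support:

    #!/bin/sh
    # snapshot the whole pool in one go, then stream each
    # file system to the backup host as its own archive
    zfs snapshot -r tank@nightly
    for fs in `zfs list -H -o name -t filesystem`; do
            out=`echo $fs | tr '/' '_'`
            zfs send $fs@nightly | ssh backuphost "cat > /backup/$out.zfs"
    done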
Hi,

My suggestion is to direct any command output that may print thousands of lines to a file.

I have not tried that number of file systems. So, my first suggestion is to have a lot of physical memory installed.

The second item I would be concerned about is path translation going through a lot of mount points. I seem to remember that some old code had a limit of 256 mount points in a path. I don't know if it still exists.

Mitchell Erblich
------------------

Rafael Friedlander wrote:
>
> Hi,
>
> An IT organization needs to implement a highly available file server,
> using Solaris 10, Sun Cluster, NFS and Samba. We are talking about
> thousands, even tens of thousands, of ZFS file systems.
>
> Is this doable? Should I expect any impact on performance or stability
> from having that many mounted file systems, with everything that
> implies ('df | wc -l' returning thousands of lines, for instance)?
>
> Thanks,
>
> Rafael.
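P.S. If you want to see how deep the mount nesting actually goes before worrying about that old limit, something like this would do (untested; it assumes the datasets use their default inherited mount points):

    # deepest nesting among current ZFS mount points,
    # counted as the number of path components
    zfs list -H -o mountpoint | awk -F/ '{ if (NF > max) max = NF } END { print max }'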
On 10/30/06, Robert Milkowski <rmilkowski at task.gda.pl> wrote:
>
> 1. Rebooting the server could take several hours right now with that
> many file systems. I believe this problem is being addressed.

Well, I've done a quick test on b50 - 10K file systems took around 5 minutes to boot. Not bad, considering it was done on a single SATA disk. I am quite sure S10U2 wouldn't be as quick as b50. On the other hand, S10U3 may have these fixes included.

--
Regards,
Cyril
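In case anyone wants to repeat the experiment, creating the file systems is just a loop like this (the pool name is illustrative):

    #!/bin/sh
    # create 10,000 empty file systems under one pool
    i=0
    while [ $i -lt 10000 ]; do
            zfs create tank/fs$i
            i=`expr $i + 1`
    done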
Erblichs writes:
> Hi,
>
> My suggestion is to direct any command output that may print
> thousands of lines to a file.
>
> I have not tried that number of file systems. So, my first
> suggestion is to have a lot of physical memory installed.

I seem to recall 64K per file system, and that it is being worked on to reduce it further. So it's not a huge deal unless we're talking thousands of file systems on a smallish or old system.

-r
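If that figure is right, the arithmetic for the scale being discussed here is easy to sanity-check (a sketch, assuming the 64K estimate holds):

    # back-of-the-envelope: ~64K of kernel memory per mounted file system
    mb=`expr 10000 \* 64 / 1024`
    echo "10000 file systems: $mb MB"     # 625 MB
    mb=`expr 50000 \* 64 / 1024`
    echo "50000 file systems: $mb MB"     # 3125 MB

So on a reasonably sized server the mount overhead alone is noticeable but survivable.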
On 10/31/06, Robert Milkowski <rmilkowski at task.gda.pl> wrote:
> Hello Cyril,
>
> Tuesday, October 31, 2006, 8:30:50 AM, you wrote:
>
> CP> On 10/30/06, Robert Milkowski <rmilkowski at task.gda.pl> wrote:
> >>
> >> 1. Rebooting the server could take several hours right now with that
> >> many file systems. I believe this problem is being addressed.
>
> CP> Well, I've done a quick test on b50 - 10K file systems took around
> CP> 5 minutes to boot. Not bad, considering it was done on a single SATA
> CP> disk. I am quite sure S10U2 wouldn't be as quick as b50. On the other
> CP> hand, S10U3 may have these fixes included.
>
> Now add a sharenfs property for each of these file systems and see
> what happens.

Hm, will do first thing tomorrow.

--
Regards,
Cyril
Hello Cyril,

Tuesday, October 31, 2006, 8:30:50 AM, you wrote:

CP> On 10/30/06, Robert Milkowski <rmilkowski at task.gda.pl> wrote:
>>
>> 1. Rebooting the server could take several hours right now with that
>> many file systems. I believe this problem is being addressed.

CP> Well, I've done a quick test on b50 - 10K file systems took around
CP> 5 minutes to boot. Not bad, considering it was done on a single SATA
CP> disk. I am quite sure S10U2 wouldn't be as quick as b50. On the other
CP> hand, S10U3 may have these fixes included.

Now add a sharenfs property for each of these file systems and see what happens.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                milek.blogspot.com
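P.S. Setting it up is the easy part, since sharenfs is inherited - one property on the pool root covers every descendant ("tank" is just an example pool):

    # every file system under tank inherits the property and gets shared
    zfs set sharenfs=on tank

    # sanity check: each file system should now appear in share(1M) output
    share | wc -l

The interesting part is how long boot takes once it has to share all 10K of them.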