Sergey
2006-Sep-20 10:45 UTC
[zfs-discuss] Some questions about how to organize ZFS-based filestorage
Hi all,

I am trying to organize our small (and only) file storage using ZFS, and to start thinking ZFS-style )

I have a Sun Fire X4100 (2 x dual-core AMD Opteron 280, 4 GB of RAM, Solaris 10 x86 06/06, 64-bit kernel + updates), a Sun Fibre Channel HBA (QLogic-based) and a 7 TB Apple Xserve RAID (2 RAID controllers, each with 7 x 500 GB ATA disks). The two internal SAS drives are in RAID 1 on the built-in LSI controller. The Xserve RAID is configured as follows: 6 disks in HW RAID 5 plus one spare disk per controller.

So I have:

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t2d0 <DEFAULT cyl 8872 alt 2 hd 255 sec 63>
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@2,0
       1. c4t6000393000017312d0 <APPLE-Xserve RAID-1.50-2.27TB>
          /pci@1,0/pci1022,7450@2/pci1077,10a@1/fp@0,0/disk@w6000393000017312,0
       2. c5t600039300001742Bd0 <APPLE-Xserve RAID-1.50-2.27TB>
          /pci@1,0/pci1022,7450@2/pci1077,10a@1,1/fp@0,0/disk@w600039300001742b,0

I need a place to keep multiple builds of our products (a huge number of small files). This will take about 2 TB, so it seems logical to dedicate the whole of "1." or "2." from the output above to it. What is the best block size to supply to the "zfs create" command to get the most out of a filesystem holding a huge number of small files?

The other tank will host users' home directories, project files and other files.

Right now I am thinking of creating two separate ZFS pools, with "1." and "2." as the only physical devices in each pool. Or would I be better off creating one ZFS pool that includes both "1." and "2."? (Rough sketches of both options are at the end of this message.)

Later on I will use NFS to share this file storage between Linux, Solaris, OpenSolaris and Mac OS X hosts.
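To make the two options concrete, this is roughly what I have in mind (the pool names are just placeholders; the device names are taken from the format output above):

Option A - two separate pools:

# zpool create builds c4t6000393000017312d0
# zpool create tank c5t600039300001742Bd0

Option B - one pool spanning both LUNs:

# zpool create tank c4t6000393000017312d0 c5t600039300001742Bd0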
James C. McPherson
2006-Sep-20 12:00 UTC
[zfs-discuss] Some questions about how to organize ZFS-based filestorage
Sergey wrote:
> I am trying to organize our small (and only) file storage using ZFS, and
> to start thinking ZFS-style )
..
> AVAILABLE DISK SELECTIONS:
>        0. c0t2d0 <DEFAULT cyl 8872 alt 2 hd 255 sec 63>
>           /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@2,0
>        1. c4t6000393000017312d0 <APPLE-Xserve RAID-1.50-2.27TB>
>           /pci@1,0/pci1022,7450@2/pci1077,10a@1/fp@0,0/disk@w6000393000017312,0
>        2. c5t600039300001742Bd0 <APPLE-Xserve RAID-1.50-2.27TB>
>           /pci@1,0/pci1022,7450@2/pci1077,10a@1,1/fp@0,0/disk@w600039300001742b,0
>
> I need a place to keep multiple builds of our products (a huge number of
> small files). This will take about 2 TB, so it seems logical to dedicate
> the whole of "1." or "2." from the output above to it. What is the best
> block size to supply to the "zfs create" command to get the most out of a
> filesystem holding a huge number of small files?

You're not quite thinking ZFS-style yet. With ZFS you do not have to worry
about block sizes unless you want to - the filesystem handles that for you.

cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
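P.S. For example (a sketch only - the pool and filesystem names here are made up), this is all you need:

# zpool create tank c4t6000393000017312d0
# zfs create tank/builds

If you are ever curious, "zfs get recordsize tank/builds" will show the 128K default; that value is only an upper bound on the block size, and small files are stored in correspondingly smaller blocks, so there is nothing to tune for a small-file workload.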
Matthew Ahrens
2006-Sep-21 17:17 UTC
[zfs-discuss] Some questions about how to organize ZFS-based filestorage
Sergey wrote:
> What is the best block size to supply to the "zfs create" command to get
> the most out of a filesystem holding a huge number of small files?

You do not need to specify a recordsize. Since your workload is not
record-structured (i.e. a database), you should not specify a recordsize.

> The other tank will host users' home directories, project files and other
> files.
>
> Right now I am thinking of creating two separate ZFS pools, with "1." and
> "2." as the only physical devices in each pool.
>
> Or would I be better off creating one ZFS pool that includes both "1." and
> "2."?

It will most likely be better to create one zfs pool with both your
devices. This will provide better performance and easier management.
You should create different filesystems for each project, home
directory, etc.

> Later on I will use NFS to share this file storage between Linux, Solaris,
> OpenSolaris and Mac OS X hosts.

That should work just fine.

--matt
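P.S. A rough sketch of that layout (the pool and filesystem names are just examples):

# zpool create tank c4t6000393000017312d0 c5t600039300001742Bd0
# zfs create tank/builds
# zfs create tank/home
# zfs create tank/projects
# zfs set sharenfs=on tank

The sharenfs property is inherited, so every filesystem under tank is shared over NFS automatically; you can override it per filesystem if you need to.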