Binny Raphael
2006-Oct-27 00:00 UTC
[zfs-discuss] Planing to use ZFS in production.. some queries...
Hi,

We are planning to migrate two servers to Solaris 10 6/06 and use ZFS. One of the servers (Production) is currently using VxVM/VxFS and the Development server is using SVM.

Production Server:
The disks on the production server are from an HDS 9500 SAN (i.e. LUNs which are already RAID 5 at the hardware/HDS level) and we use Veritas to create either simple 4-way stripe volumes (when we need good I/O; we create multiple smaller volumes) or simple concatenation volumes (say 3 LUNs concatenated, making bigger/smaller volumes as needed). There is no RAID redundancy configured at the Veritas level -- just a plain 4-way stripe or concat.

e.g. for a 4-way stripe: Allocate 4 LUNs of 20GB each, then create a 4-way stripe using the 4x20GB LUNs and a block size of 6144 (the block size recommended for our HDS SAN for a 4-way stripe). Then we carve out, say, 5 volumes of 10GB each (each of the 5 volumes is a 4-way stripe). Thus we end up with 5 volumes of 10GB (50GB in total) and 30GB of free/unallocated space on the 4 LUNs. If we run out of free space we always add LUNs in multiples of 4 to keep the 4-way stripe.

e.g. for concat: Allocate 3 LUNs (20GB each, for a total of 60GB) from the SAN; create concatenated volumes to form 30GB, 15GB and 5GB volumes (50GB allocated) with 10GB free. If in future we need to increase the current allocation or add more volumes, we will add more 20GB LUNs and create/grow the volumes, adding 1 LUN at a time to meet our requirements.

Query:
1. How do we get the same 4-way stripe of the LUNs in ZFS (we do not want any redundancy at the ZFS level, since it is taken care of at the HDS 9500 level in hardware)? How do we specify a block size of 6144? Do I need to worry about it?
2. How do we do a simple concat of the LUNs and get the same results? (Do we use zpool create tank LUN0 LUN1 LUN2?)
3. What is the best way to use ZFS for SAN storage?
4. In Veritas the volumes (file systems) are 10GB each and we grow them when they reach 90%.
Do you recommend using zfs set quota=10G ... for the mount points and increasing it when needed? [We want to control growth at all times so that one file system does not hog all the space.]

Development Server:
The disks for the development server are from an SE3120 (4x300GB 10000 RPM disks). We are currently using SVM in RAID 10 (so we get approx. 600GB of usable disk space) and have created one big soft partition (using SVM). Then we have allocated smaller partitions (volumes) for each mount point (e.g. say 5 mount points of 10GB each + one 30GB + one 15GB + one 5GB). If we need to increase any partition we add the space and use growfs commands.

Query:
1. What is the recommended way to use the 4 disks on an SE3120? (Do we use zpool create tank mirror disk0 disk1 mirror disk2 disk3, or zpool create tank raidz disk0 disk1 disk2 disk3?) [With raidz we will get more disk space.]
2. What other recommendations do you have for using the SE3120?

Binny

This message posted from opensolaris.org
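[For reference, a hedged sketch of how the production layout described above might translate to ZFS. The device names (c2t0d0 etc.) are placeholders, not the actual SAN paths, and the layout is an assumption based on the description in the post:]

```shell
# Hypothetical LUN names -- substitute the real c#t#d# device paths.
# A pool of plain top-level devices stripes writes across all of them,
# which is the closest ZFS analogue of the Veritas 4-way stripe:
zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Carve out file systems and cap each one, analogous to the 10GB volumes:
zfs create tank/vol1
zfs set quota=10G tank/vol1

# Later growth: raise the quota, or stripe in another LUN:
zfs set quota=20G tank/vol1
zpool add tank c2t4d0
```

[Note that the ZFS recordsize property only accepts powers of two, so an exact 6144-byte block size is not expressible; the default is usually left alone unless the workload does fixed-size I/O.]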
Darren Dunham
2006-Oct-27 00:37 UTC
[zfs-discuss] Planing to use ZFS in production.. some queries...
> 1. How do we get the same 4-Way stripe of the LUNs in ZFS (we do not
> want any redundancy at ZFS level since it is taken care at HDS 9500
> level in hardware)?

Just add the disks to the pool. They'll be automatically striped.

> How do we specify a block size of 6144? Do I need to worry about it?

You probably don't want to specify a block size unless you're always doing transfers in that size (like a database).

> 2. How do we do a simple concat of the LUNs and get the same results?
> (do we use zpool create tank LUN0 LUN1 LUN2)

There is no "concat". All top level devices in a pool are striped together.

> 3. What is the best way to use ZFS for SAN storage?

There's no single "best way". There are a lot of ideas on this topic that have been discussed. However, the more information you can give to ZFS and the more redundancy available at its level (rather than via the array), the better.

> 4. In Veritas the volumes (file systems) are 10GB each and we grow
> them when they reach 90%. Do you recommend using zfs set quota=10G
> ... for the mount points and increasing it when needed? [We want to
> control growth at all times so that one file system does not hog all
> the space.]

Then quotas are great. They'll let you limit the space used within a particular pool.

> Development Server:
> The disks for the development server are from an SE3120 (4x300GB 10000
> RPM disks). We are currently using SVM in RAID 10 (so we get approx.
> 600GB of usable disk space) and have created one big soft partition
> (using SVM). Then we have allocated smaller partitions (volumes) for
> each mount point (e.g. say 5 mount points of 10GB each + one 30GB +
> one 15GB + one 5GB). If we need to increase any partition we add the
> space and use growfs commands.

So on ZFS, rather than starting with a small soft partition and increasing it, you'd just start with a smaller quota and increase it if necessary.

> Query:
> 1.
> What is the recommended way to use the 4 disks on an SE3120 (do we
> use zpool create tank mirror disk0 disk1 mirror disk2 disk3 OR use
> zpool create tank raidz disk0 disk1 disk2 disk3) [with raidz we will
> get more disk space]

Either way is valid. The mirror should be faster than the raidz in some situations. Is there a lot of change in your disk setup at the moment, or do the volumes you create tend to remain fairly constant?

-- 
Darren Dunham                  ddunham at taos.com
Senior Technical Consultant    TAOS   http://www.taos.com/
Got some Dr Pepper?            San Francisco, CA bay area
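[The two layouts being compared can be sketched as follows. Disk names are placeholders, and the usable-capacity figures assume the 4x300GB SE3120 disks from the original post:]

```shell
# Two mirrored pairs (RAID-10 style, ~600GB usable, each write goes to
# both sides of a pair; generally better for small random I/O):
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# Or single-parity raidz across all four disks (~900GB usable; one
# disk's worth of space is consumed by parity):
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
```

[Either command creates the whole pool in one step; file systems with quotas can then be carved out of it with zfs create and zfs set quota, as discussed earlier in the thread.]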