Hi Louwtjie,
(CC'd to the list as an FYI to others)
The biggest gotcha is that the SE6140s have a 12-byte SCSI command
descriptor block (CDB), and thus can only present 2TB LUNs to the host.
That's not an issue with ZFS however, since you can just tack them
together and grow your pool that way. See the attached PNG; that's how
we're doing it.
You'd have one ZFS file system on top of the pool for your customer's
setup.
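To make that concrete, the pool build is just something like this (the
pool name and the cXtYdZ device names are placeholders for your 2TB
SE6140 LUNs):

# create the pool from the first array's 2TB LUNs
zpool create bigpool c2t0d0 c2t1d0 c2t2d0 c2t3d0
# later, grow it online by adding the next array's LUNs
zpool add bigpool c3t0d0 c3t1d0 c3t2d0 c3t3d0
# one file system on top for the customer
zfs create bigpool/data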
SE6140 limitations:
Maximum Volumes Per Array 2,048
Maximum Volumes Per RAID group 256
Maximum Volume size 2 TB (minus 12 GB) <<<----!!!!
Maximum Drives in RAID group 30 <<<----!!!!
Maximum RAID group Size 20 TB <<<----!!!!
Storage Partitions Yes
Maximum Total Partitions 64
Maximum Hosts per Partition 256
Maximum Volumes per Partition 256
Maximum Number of Global Hot Spares 15
The above limits might matter if you thought you'd just have one fat
LUN coming from your SE6140. You can't do it that way. But as shown in
the picture, you can use ZFS to do all that for you. If you keep all of
your LUNs at exactly 2000GB when you make them, then you can mirror and
then detach an array's LUNs one by one until you can remove the array.
It'll be nice when ZFS has the ability to natively remove LUNs from a
pool; that's expected in several months, apparently.
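Spelled out, that mirror-and-detach trick is just this, per LUN (device
names are placeholders; the replacement LUN has to be at least as big
as the one it stands in for, hence keeping them all at exactly 2000GB):

# attach a LUN on the new array as a mirror of the old array's LUN
zpool attach bigpool c2t0d0 c4t0d0
# wait for the resilver to complete
zpool status bigpool
# then drop the old array's LUN out of the pool
zpool detach bigpool c2t0d0
# repeat for every LUN the old array contributes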
Don't try to install the SE6140 software on Solaris 11 unless you're
good at porting. It's possible (our NFS server is Sol 11 b64a) but it's
not end-user friendly. Solaris 9 or 10 is fine. We needed the Sol 11
b64a build for the ZFS iSCSI abilities, which are fixed in that release.
When setting up the SE6140s, I found the serial ports didn't function
with the supplied cables, at least on our equipment. Be proactive: wire
all the SE6140s into one management network and put a DHCP server on
there that allocates IPs according to the MAC addresses on the
controller cards. Then, and not before, go and register the arrays in
the Common Array Manager software. (And download the latest CAM release
(May 2007) from Sun first, too.) Trying to change an array's IP once
it's in the CAM setup is nasty.
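If you use ISC dhcpd for that management network, the reservations are
just host blocks keyed on the controller MAC addresses, roughly like
this (the subnet, MACs and addresses below are made up):

subnet 192.168.128.0 netmask 255.255.255.0 {
  option routers 192.168.128.1;
}
host se6140-1-ctrl-a {
  hardware ethernet 00:a0:b8:11:22:33;
  fixed-address 192.168.128.101;
}
host se6140-1-ctrl-b {
  hardware ethernet 00:a0:b8:11:22:34;
  fixed-address 192.168.128.102;
}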
From what you describe you can do it all with just one array. All of
ours are 750GB SATA disk SE6140s, expandable to 88TB per array. Our
biggest is one controller and one expansion tray so we have lots of
headroom.
You lose up to 20% of your raw capacity to the SE6140 RAID5 volume
setup and the ZFS overheads. Keep that in mind when scoping your
solution.
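As a rough worked example with that 20% figure: for the 70TB of usable
space you mention below, budget on the order of 70TB / 0.8, i.e. about
88TB of raw disk.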
In your /etc/system, put in these tunings:
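* zil_disable=1 turns off the ZFS Intent Log, so synchronous writes
* (e.g. NFS commits) are acknowledged before they reach stable storage.
* zfs_nocacheflush=1 stops ZFS sending cache-flush commands to the LUNs,
* which only makes sense on arrays with battery-backed write cache.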
set zfs:zil_disable=1
set zfs:zfs_nocacheflush=1
Our NFS database clients also have these tunings:
VCS mount opts (or /etc/vfstab for you)
MountOpt =
"rw,bg,hard,intr,proto=tcp,vers=3,rsize=32768,wsize=32768,forcedirectio"
ce.conf mods:
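# advertise autoneg, 1000FDX and pause (flow control), and accept jumbo
# frames, on the ce interface at this PCI path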
name="pci108e,abba" parent="/pci at 1f,700000"
unit-address="1"
adv_autoneg_cap=1 adv_1000fdx_cap=1 accept_jumbo=1 adv_pause_cap=1;
This gets about 400 MBytes/s in total, running to a T2000 NFS server.
That's pretty much the limit of the hardware, so we're happy with
that :)
We're yet to look at MPxIO and load balancing across controllers. Plus
I'm not sure I've tuned the file systems for Oracle block sizes.
Depending on your solution that probably isn't an issue for you.
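Purely as a sketch of where we'd start on those two (the dataset name
and the 8K block size are assumptions, not something we've tested):

# enable Solaris I/O multipathing (MPxIO) on the FC HBAs; needs a reboot
stmsboot -e
# match the ZFS recordsize to an 8K Oracle db_block_size on the
# dataset holding the datafiles
zfs set recordsize=8k bigpool/oradata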
We like the ability to do ZFS snapshots and clones; we can copy an
entire DB setup and create a clone in about ten seconds. Before, it
took hours using the EMCs.
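The clone step itself is only a couple of commands, along these lines
(dataset names made up for the example):

# snapshot the live database file system, then clone it read-write
zfs snapshot bigpool/oradata@friday
zfs clone bigpool/oradata@friday bigpool/oradata_clone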
Cheers,
Mark.
> After reading your post ... I was wondering whether you would give
> some input/advice on a certain configuration I'm working on.
>
> A customer (potential) is considering using a server (probably Sun
> galaxy) connected to 2 switches and lots (lots!) of 6140's.
>
> - One large filesystem
> - 70TB
> - No downtime growth/expansion
>
> Since it seems that you have several 6140's under ZFS control ... any
> problems/comments for me?
>
> Thank you.
>
> On 7/19/07, Mark Ashley <mark@ibiblio.org> wrote:
>> Hi folks,
>>
>> One of the things I'm really hanging out for is the ability to
>> evacuate the data from a zpool device onto the other devices and then
>> remove the device. Without mirroring it first etc. The zpool would of
>> course shrink in size according to how much space you just took away.
>>
>> Our situation is we have a number of SE6140 arrays attached to a host
>> with a total of 35TB. Some arrays are owned by other projects but are
>> on loan for a while. I'd like to make one very large pool from the
>> (maximum 2TB! wtf!) LUNs from the SE6140s and once our DBAs are done
>> with the workspace, remove the LUNs and free up the SE6140 arrays so
>> their owners can begin to use them.
>>
>> At the moment once a device is in a zpool, it's stuck there. That's a
>> problem. What sort of time frame are we looking at until it's
>> possible to remove LUNs from zpools?
>>
>> ta,
>> Mark.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: SE6140_to_ZFS.png
Type: image/png
Size: 22166 bytes
Desc: not available
URL:
<http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20070724/f56f85c0/attachment.png>