I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected to the storage over 10GbE iSCSI; it will continue to serve the Windows clients and will host approximately 4TB of data for now. The physical box is a Sun Fire X4240: single AMD Opteron 2435, 16GB RAM, LSI 3801E HBA, and an ixgbe 10GbE card. I'm looking for suggestions about the optimum recordsize/volblocksize and general tuning from someone who's been down this road. Thanks.
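A minimal sketch of the kind of zvol that would back such an iSCSI LUN, assuming a hypothetical pool name and a 64K volblocksize picked to match a 64K NTFS allocation unit (both values are illustrative, not recommendations from the thread):

  # volblocksize can only be set at creation time
  zfs create -o volblocksize=64k -V 4T tank/ntfs-lun

Whatever block size is chosen, formatting the NTFS volume on the initiator with a matching allocation unit size avoids read-modify-write cycles on the zvol.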
Hello. Do you want to use it as an SMB file server, or do you also want other Windows services on it? If you want to use it as a file server only, I would suggest using the built-in CIFS server. iSCSI will always be slower than the native CIFS server, and with the OpenSolaris CIFS server you get snapshots exposed through the Windows "Previous Versions" property. I'm just now replacing our Windows file server with a ZFS server (Nexenta + napp-it) for our students, and so far, after a year of evaluating and hesitating, I have a good feeling about it. I would say it's now not just good enough, it's better than a Windows 200x file server. ACLs work nearly the same as on Windows. Best of all, we can let our users set access rights themselves; even if they remove admin access (needed for backups), root can still read the files thanks to the underlying Unix permissions.

Gea
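For the built-in CIFS route described above, the basic steps look roughly like this (pool, dataset, and share names are assumptions; the workgroup join would be a domain join in an AD environment):

  # enable the in-kernel SMB/CIFS service
  svcadm enable -r smb/server
  # join a workgroup (or a domain: smbadm join -u administrator mydomain.local)
  smbadm join -w WORKGROUP
  # create a dataset with Windows-friendly semantics and share it over SMB
  zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=students tank/students

Snapshots of the dataset then show up to Windows clients under "Previous Versions" on the share.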
For ease of administration with everyone in the department, I'd prefer to keep everything consistent within the Windows world.
I have used build 124 in this capacity, although I did zero tuning. I had about 4TB of data on a single 5TB iSCSI volume over gigabit. The Windows server was a VM, and the OpenSolaris box was a Dell 2950 with 16GB of RAM, an X25-E for the slog, and no L2ARC cache device. I used COMSTAR. It was a target for Double-Take, so it saw almost exclusively write I/O with very little read. My load testing with Iometer was very positive, and I would not have hesitated to use it as the primary node serving about 1000 users, maybe 200-300 active at a time.

Scott
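For anyone following along, the COMSTAR setup mentioned above amounts to something like this (the zvol path is a placeholder, and the GUID comes from the sbdadm output):

  # enable the COMSTAR framework and the iSCSI target service
  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  # register an existing zvol as a SCSI logical unit
  sbdadm create-lu /dev/zvol/rdsk/tank/ntfs-lun
  # make the LU visible to all initiators (use host/target groups to restrict it)
  stmfadm add-view <GUID-from-sbdadm>
  # create the iSCSI target itself
  itadm create-target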
Thanks, Scott. I really appreciate your feedback. I'm curious about the number of disks and your raidz/raidz2 layout?
We're using an X4250 with a J4400 attached for a similar configuration; however, it's running Solaris 10u8. We have 16 disks in the X4250, 10 of which make up 2x 4-disk raidz groups plus 2 available hot spares. These are 300GB disks, so I'm less afraid of data loss from a parity failure. The J4400 holds 12 disks organized into 2x 5-disk raidz2 with 2 hot spares; these are 1TB disks. There is also a 146GB mirrored ZIL (slog) device for the 1TB-disk array. Reads are random but writes are mostly linear, so spinning disks are fine, and the slog seems to keep reads from stalling during large writes. These were originally supposed to be X25-E SSDs, but we had an edge-case issue that Sun and Intel are apparently still attempting to fix. We have not really had any problems with the system whatsoever. We just set 'shareiscsi=on' and then add a target port group.

Karl Katzke
Systems Analyst II
TAMU DRGS
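A rough sketch of a pool laid out along those lines, with the legacy 'shareiscsi' mechanism from Solaris 10u8 (device names and sizes are placeholders, and the iscsitadm target-portal-group lines are an approximation of that tool's syntax):

  # two 4-disk raidz vdevs, two hot spares, and a mirrored slog
  zpool create tank \
    raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
    raidz c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
    spare c0t8d0 c0t9d0 \
    log mirror c0t10d0 c0t11d0
  # carve out a zvol and share it via the legacy iSCSI target daemon
  zfs create -V 1T tank/lun0
  zfs set shareiscsi=on tank/lun0
  # bind the target to a specific portal through a target portal group
  iscsitadm create tpgt 1
  iscsitadm modify tpgt -i 192.168.10.5 1
  iscsitadm modify target -p 1 tank/lun0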
At the time we had it set up as 3x 5-disk raidz plus a hot spare. Those 16 disks were in a SAS cabinet, and the slog was on the server itself. We are now running 2x 7-disk raidz2 plus a hot spare and a slog, all inside the cabinet. Since the disks are 1.5TB, I was concerned about resilver times for a failed disk. About the only thing I would consider adding at this point is an SSD for the L2ARC, for dedup performance.

Scott
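Adding an L2ARC device to an existing pool is a one-line operation (the device name is a placeholder); with dedup enabled it mainly helps by giving the deduplication table somewhere faster than spinning disk to live once it no longer fits in RAM:

  # attach an SSD as a cache (L2ARC) device
  zpool add tank cache c2t0d0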