Gary Mills
2010-May-03 15:36 UTC
[zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?
I'm setting up a two-node cluster with 1U x86 servers. It needs a small amount of shared storage, with two or four disks. I understand that the J4200 with SAS disks is approved for this use, although I haven't seen this information in writing. Does anyone have experience with this sort of configuration? I have a few questions.

I understand that the J4200 with SATA disks will not do SCSI reservations. Will it with SAS disks?

The X4140 seems to require two SAS HBAs, one for the internal disks and one for the external disks. Is this correct?

Will the disks in the J4200 be accessible from both nodes, so that the cluster can fail over the storage? I know this works with a multi-initiator SCSI bus, but I don't know about SAS behavior.

Is there a smaller, and cheaper, SAS array that can be used in this configuration? It would still need to have redundant power and redundant SAS paths.

I plan to use ZFS everywhere, for the root filesystem and the shared storage. The only exception will be UFS for /globaldevices.

--
-Gary Mills-    -Unix Group-    -Computer and Network Services-
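For anyone wanting to test shared visibility before involving the cluster framework, a minimal sketch of what storage failover amounts to at the ZFS level follows. The device names and the pool name "shared" are hypothetical examples, not a supported procedure:

    # On each node, confirm the J4200 targets are enumerated;
    # the same disks should appear on both nodes.
    format < /dev/null

    # Create the shared pool on one node only:
    zpool create shared mirror c2t0d0 c2t1d0

    # Manual failover test: release the pool on node A ...
    zpool export shared

    # ... then take it over on node B:
    zpool import shared
    zpool status shared

Sun Cluster automates this export/import sequence during a failover via its HAStoragePlus resource type.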
Charles Hedrick
2010-May-16 20:14 UTC
[zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?
We use this configuration. It works fine. However, I don't know enough about the details to answer all of your questions.

The disks are accessible from both systems at the same time. Of course, with ZFS you had better not actually use them from both systems.

Actually, let me be clear about what we do. We have two J4200s and one J4400. One J4200 uses SAS disks, the others SATA. The two with SATA disks are used in Sun Cluster configurations as NFS servers. They fail over just fine, losing no state. The one with SAS is not used with Sun Cluster. Rather, it's a MySQL server with two systems, one of them as a hot spare. (It also acts as a MySQL slave server, but it uses different storage for that.) That means that our actual failover experience is with the SATA configuration.

I will say from experience that in the SAS configuration both systems see the disks at the same time. I even managed to get ZFS to mount the same pool from both systems, which shouldn't be possible. Behavior was very strange until we realized what was going on.

I get the impression that they have special hardware in the SATA version that simulates SAS dual-interface drives. That's what lets you use SATA drives in a two-node configuration. There's also some additional software setup for that configuration.

Note, however, that they do not support SSDs in the J4000 series. That means that a Sun Cluster configuration is going to have slow write performance in any application that uses synchronous writes (e.g. an NFS server). The recommended approach is to put the ZIL on an SSD. But in Sun Cluster it would have to be an SSD that's shared between the two systems, or you'd lose the contents of the ZIL when you do a failover. Since you can't put an SSD in the J4200, it's not clear how you'd set that up. Personally, I consider this a very serious disadvantage of the J4000 series. I kind of wish we had gotten a higher-end storage system with some non-volatile cache. Of course, when we got the hardware, Sun claimed they were going to support SSDs in it.

--
This message posted from opensolaris.org
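For reference, ZFS does try to guard against double imports: a pool that was last active on another host is flagged at import time, and taking it anyway requires an explicit force. A rough illustration with a hypothetical pool name (the exact messages vary by release):

    # On node B, while node A still has the pool imported:
    zpool import
    #   pool: shared
    #   ...status indicates it was last accessed by another system

    # A plain "zpool import shared" refuses in that state; -f
    # overrides the hostid check. With two nodes writing at once
    # the pool can be corrupted, consistent with the strange
    # behavior described above.
    zpool import -f shared

The check is based on the hostid recorded in the pool labels, so it catches a stale import but cannot fence a node that forces its way in; SCSI reservations or the cluster's quorum handling are still needed for real protection.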
Gary Mills
2010-May-17 14:49 UTC
[zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?
On Sun, May 16, 2010 at 01:14:24PM -0700, Charles Hedrick wrote:

> We use this configuration. It works fine. However, I don't know
> enough about the details to answer all of your questions.
>
> The disks are accessible from both systems at the same time. Of
> course, with ZFS you had better not actually use them from both
> systems.

That's what I wanted to know. I'm not familiar with SAS fabrics, so it's good to know that they operate similarly to multi-initiator SCSI in a cluster.

> Actually, let me be clear about what we do. We have two J4200s and
> one J4400. One J4200 uses SAS disks, the others SATA. The two with
> SATA disks are used in Sun Cluster configurations as NFS
> servers. They fail over just fine, losing no state. The one with SAS
> is not used with Sun Cluster. Rather, it's a MySQL server with two
> systems, one of them as a hot spare. (It also acts as a MySQL slave
> server, but it uses different storage for that.) That means that our
> actual failover experience is with the SATA configuration. I will
> say from experience that in the SAS configuration both systems see
> the disks at the same time. I even managed to get ZFS to mount the
> same pool from both systems, which shouldn't be possible. Behavior
> was very strange until we realized what was going on.

Our situation is that we only need a small amount of shared storage in the cluster. It's intended for high availability of core services, such as DNS and NIS, rather than as a NAS server.

> I get the impression that they have special hardware in the SATA
> version that simulates SAS dual-interface drives. That's what lets
> you use SATA drives in a two-node configuration. There's also some
> additional software setup for that configuration.

That would be the SATA interposer doing that.

--
-Gary Mills-    -Unix Group-    -Computer and Network Services-