Does anyone have a 6140 expansion shelf that they can hook directly to a host? Just wondering if this configuration works. Previously I thought the expansion connector was proprietary, but now I see it's just fibre channel.

I tried this before with a 3511 and it "kind of" worked, but ultimately had various problems and I had to give up on it.

Hoping to avoid the cost of the RAID controller.

-frank
I should be able to reply to you next Tuesday -- my 6140 SATA expansion tray is due to arrive. Meanwhile, what kind of problems did you have with the 3511?

--
Just me,
Wire ...

On 3/23/07, Frank Cusack <fcusack at fcusack.com> wrote:
> Does anyone have a 6140 expansion shelf that they can hook directly to
> a host? Just wondering if this configuration works. Previously I
> thought the expansion connector was proprietary, but now I see it's
> just fibre channel.
>
> I tried this before with a 3511 and it "kind of" worked, but ultimately
> had various problems and I had to give up on it.
>
> Hoping to avoid the cost of the RAID controller.
>
> -frank
On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan <weeyeh at gmail.com> wrote:
> I should be able to reply to you next Tuesday -- my 6140 SATA
> expansion tray is due to arrive. Meanwhile, what kind of problems did
> you have with the 3511?

I'm not sure that it had anything to do with the RAID controller being present or not. The initial configuration (5x250GB original SATA disks) worked well. Changing the disks to 750GB disks worked well. Then I got 7 more drive carriers, and some of the slots didn't work -- disks would not spin up. The 7 additional carriers had different electronics than the original 5. Just a hardware revision, I suppose. Oh, and they were "Dot Hill" labelled instead of Sun labelled (Dot Hill is the OEM for the 3510/3511).

When I was able to replace the 7 new carriers with ones that looked like the original 5 (same electronics and Sun branding), I had better luck, but there were still one or two slots that were SOL. Swapping hardware around, I identified that it was definitely the slot and not a carrier or drive problem. But maybe a bad carrier "broke" the slot itself. I dunno!

I was tempted to just use the array with the 10 or 11 slots that worked, since I got it for a very good price, but I was worried that there'd be more failures in the future, and the cost savings weren't worth even the potential hassle of having to deal with that.

-frank
Greetings...

Although I've not tried to directly connect a 6140 JBOD unit to a host, I've noticed that the JBOD's disk drives do not come online on their own. Without the controller unit activated, the drives continue to flash as if waiting to come online... when the hardware controller switches on, it discovers the drives and onlines them to their correct state.

Sorry, no extra technical detail ... I just noticed the strange behaviour of the LEDs for each of the drives. Maybe having an active HBA might also create the right conditions for the drives to become active...?

On 3/23/07, Frank Cusack <fcusack at fcusack.com> wrote:
> I'm not sure that it had anything to do with the RAID controller being
> present or not. The initial configuration (5x250GB original SATA disks)
> worked well. Changing the disks to 750GB disks worked well. Then I
> got 7 more drive carriers, and some of the slots didn't work -- disks
> would not spin up.
> [...]
On 3/24/07, Frank Cusack <fcusack at fcusack.com> wrote:
> On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan <weeyeh at gmail.com> wrote:
> > I should be able to reply to you next Tuesday -- my 6140 SATA
> > expansion tray is due to arrive. Meanwhile, what kind of problems did
> > you have with the 3511?

Frank,

As promised. I got my 6140 SATA delivered yesterday and I hooked it up to a T2000 on S10u3. The T2000 saw the disks straight away and it has been "working" for the last hour. I'll be running some benchmarks on it. I'll probably have a week with it until our vendor comes around and steals it from me.

> I'm not sure that it had anything to do with the RAID controller being
> present or not.
> [...]
> I was tempted to just use the array with the 10 or 11 slots that worked,
> since I got it for a very good price, but I was worried that there'd be
> more failures in the future, and the cost savings weren't worth even the
> potential hassle of having to deal with that.

I had a bit of a problem using my controller 3510 as a JBOD as well. Essentially, the disks seemed to go away and come back, especially when the system rebooted. The problem cleared up only when I replaced the controller circuit with the JBOD circuit. That said, I have another 3510 JBOD that has been running very well for over 6 months now.

--
Just me,
Wire ...
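For anyone repeating this bring-up, very little is needed once the host sees the shelf's disks; a minimal sketch follows, with purely illustrative device and pool names (the actual cXtYdZ names will differ on any given system):

    # confirm the FC-attached disks are visible to Solaris
    format < /dev/null

    # pool the shelf as a single raidz vdev -- device names are examples only
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
    zpool status tank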
Pity the price of a JBOD is so close to a controller unit...

On 3/27/07, Wee Yeh Tan <weeyeh at gmail.com> wrote:
> As promised. I got my 6140 SATA delivered yesterday and I hooked it
> up to a T2000 on S10u3. The T2000 saw the disks straight away and it
> has been "working" for the last hour. I'll be running some benchmarks on it.
> I'll probably have a week with it until our vendor comes around and
> steals it from me.
> [...]
> I had a bit of a problem using my controller 3510 as a JBOD as well.
> Essentially, the disks seemed to go away and come back, especially when
> the system rebooted. The problem cleared up only when I replaced the
> controller circuit with the JBOD circuit. That said, I have another
> 3510 JBOD that has been running very well for over 6 months now.
On March 27, 2007 1:43:01 PM +0800 Wee Yeh Tan <weeyeh at gmail.com> wrote:
> As promised. I got my 6140 SATA delivered yesterday and I hooked it
> up to a T2000 on S10u3. The T2000 saw the disks straight away and it
> has been "working" for the last hour. I'll be running some benchmarks on it.
> I'll probably have a week with it until our vendor comes around and
> steals it from me.

That's great! Now I have the option of spending just 2x as much as I would spend on another vendor instead of 4-5x as much. Sun just put the 6140 head unit in the Startup Essentials program; I wish they would put the expansion shelf in there also!

I'm not being facetious about giving the 6140 hard consideration if it can be done for 2x the cost of some other solution. I just had a fan fail on an X4100 M2. First of all, what a reliable box. There are 2 rows of fans, so if one row dies the other one is still working, thus no hotspots. Then, each fan module has a little LED on it that stays lit to indicate it has failed. It was easy for the Sun guy to just open the access door and swap the new fan in, while the system was running. Plug connectors also -- no wires. THAT is a quality product and worth the premium. (But even so, the X4100 M2 is well priced given the specs.)

I don't have any real experience with Sun arrays, but I have had lots of poor experience with other vendors' arrays (no big names). I see a definite trend towards more reliability (or at least serviceability) in Sun's hardware in general; from the way Sun is pushing it, I'd think the 6140 must be part of that trend.

Sorry to ramble. Might I suggest sticking a 'zpool scrub' or two in there? I've found that to be a good way to shake out problems.

-frank
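The scrub suggestion is a one-liner; 'tank' below is just a placeholder pool name:

    # read back and verify every block in the pool
    zpool scrub tank

    # watch progress and see any errors the scrub turned up
    zpool status -v tank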
BTW, did anyone try this??

http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for

Rayson

On 3/27/07, Wee Yeh Tan <weeyeh at gmail.com> wrote:
> As promised. I got my 6140 SATA delivered yesterday and I hooked it
> up to a T2000 on S10u3. The T2000 saw the disks straight away and it
> has been "working" for the last hour. I'll be running some benchmarks on it.
> I'll probably have a week with it until our vendor comes around and
> steals it from me.
Cool blog! I'll try a run at this in the benchmark.

On 3/27/07, Rayson Ho <rayrayson at gmail.com> wrote:
> BTW, did anyone try this??
>
> http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for
>
> Rayson

--
Just me,
Wire ...
Right on for optimizing throughput on Solaris .. a couple of notes though (also mentioned in the QFS manuals):

- On x86/x64 you're just going to have an sd.conf, so just increase the max_xfer_size for all devices with a line at the bottom like: sd_max_xfer_size=0x800000; (note: if you look at the source, the ssd driver is built from the sd source .. it got collapsed back down to sd in S10 x86).

- ssd_max_throttle or sd_max_throttle is typically a point of contention that has had many years of history with storage vendors .. this will limit the maximum queue depth across the board for all sd or ssd devices (read: all disks) .. if you're using the native Leadville stack, there is a dynamic throttle that should adjust per target, so you really shouldn't have to set this unless you're seeing command timeouts either on the port or on the host. By tuning this down you can affect performance on the root drives as well as external storage, making Solaris appear slower than it may or may not be.

- ZFS has a maximum block size of 128KB, so I don't think that tuning up maxphys and the max transfer sizes to 8MB is going to make that much difference here .. if you want larger block transfers (possibly matching a full stripe width) you'd have to either go with QFS or raw (but note that with larger block transfers you can get into higher cache latency response times depending on the storage controller .. and that's a whole other discussion).

On Mar 27, 2007, at 08:24, Rayson Ho wrote:
> BTW, did anyone try this??
>
> http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for
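For reference, a sketch of where the tunables discussed above live, following the linked blog entry and the notes in this thread; the values are illustrations of what was mentioned, not recommendations.

In /kernel/drv/sd.conf (x86/x64), appended at the bottom as described above:

    sd_max_xfer_size=0x800000;

In /etc/system (maxphys for larger transfers; a throttle only if you actually see command timeouts -- the 32 is an arbitrary example value, left commented out):

    set maxphys=0x800000
    * set ssd:ssd_max_throttle=32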
Talking of which, what's the effort and what are the consequences of increasing the max allowed block size in ZFS to higher figures like 1MB?

s.

On 3/28/07, Jonathan Edwards <Jonathan.Edwards at sun.com> wrote:
> - ZFS has a maximum block size of 128KB, so I don't think that
> tuning up maxphys and the max transfer sizes to 8MB is going to
> make that much difference here .. if you want larger block transfers
> (possibly matching a full stripe width) you'd have to either go
> with QFS or raw.
> [...]
Hello Selim,

Wednesday, March 28, 2007, 5:45:42 AM, you wrote:

SD> Talking of which, what's the effort and what are the consequences of
SD> increasing the max allowed block size in ZFS to higher figures like 1MB?

Think about what would happen if you then tried to read 100KB of data -- due to checksumming, ZFS would have to read the entire 1MB block.

However, it should be possible to batch several I/Os together and issue one larger one with ZFS -- at least I hope it's possible.

--
Best regards,
Robert                            mailto:rmilkowski at task.gda.pl
                                  http://milek.blogspot.com
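For context, the knob that does exist at this point is the per-dataset recordsize property, and it is capped at 128K; a quick illustration, with a placeholder dataset name:

    zfs get recordsize tank/data
    zfs set recordsize=128K tank/data    # anything above 128K is rejected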
Robert Milkowski writes:
 > SD> Talking of which, what's the effort and what are the consequences of
 > SD> increasing the max allowed block size in ZFS to higher figures like 1MB?
 >
 > Think about what would happen if you then tried to read 100KB of data --
 > due to checksumming, ZFS would have to read the entire 1MB block.
 >
 > However, it should be possible to batch several I/Os together and issue
 > one larger one with ZFS -- at least I hope it's possible.

As you note, the max coherency unit (blocksize) in ZFS is 128K. It's also the max I/O size, and smaller I/Os are already aggregated or batched up to that size. At 128K the control-to-data ratio on the wire is already quite reasonable, so I don't see much benefit to increasing this (there may be some, but the context needs to be well defined).

The issue is subject to debate because traditionally one I/O came with an implied overhead of a full head seek. In that case, the larger the I/O the better. So at 60MB/s throughput and 5ms head seek time, we need I/Os of ~300K to make the data transfer time larger than the seek time, and I/O sizes of ~3MB to reach the point of diminishing returns. But with a write-allocate scheme we are not hit with a head seek for every I/O, and common I/O size wisdom needs to be reconsidered.

-r
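The arithmetic behind those two figures, taking the 60 MB/s media rate and 5 ms seek per I/O assumed above:

    S_{break-even} = B \cdot t_{seek} = 60\,\mathrm{MB/s} \times 5\,\mathrm{ms} = 300\,\mathrm{KB}

    t_{xfer}(3\,\mathrm{MB}) = \frac{3\,\mathrm{MB}}{60\,\mathrm{MB/s}} = 50\,\mathrm{ms},
    \qquad \frac{t_{seek}}{t_{seek} + t_{xfer}} = \frac{5}{5 + 50} \approx 9\%

That is, at a 3 MB I/O size only about 9% of each I/O is seek overhead, which is the point of diminishing returns referred to above.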
However, even with sequential writes, a large I/O size makes a huge difference in throughput. Ask the QFS folks about data capture applications. ;-)

(This is less true with ATA disks, which tend to have less buffering and much less sophisticated architectures. I'm not aware of any dual-processor ATA drives, for instance.)
On 30 Mar 2007, at 08:36, Anton B. Rang wrote:
> However, even with sequential writes, a large I/O size makes a huge
> difference in throughput. Ask the QFS folks about data capture
> applications. ;-)

I quantified that "huge" as such: 60MB/s and 5ms per seek means that for a FS that requires a seek per I/O (QFS?), the throughput at infinite I/O size will be at most 10% better than at a 3MB I/O size. For ZFS the equation does not stand, because it does not incur a seek per I/O.

-r

> (This is less true with ATA disks, which tend to have less buffering
> and much less sophisticated architectures. I'm not aware of any
> dual-processor ATA drives, for instance.)
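Checking that 10% bound with the same assumed 60 MB/s and 5 ms figures:

    T_{eff}(3\,\mathrm{MB}) = \frac{3\,\mathrm{MB}}{5\,\mathrm{ms} + 50\,\mathrm{ms}} \approx 54.5\,\mathrm{MB/s},
    \qquad \frac{60\,\mathrm{MB/s}}{54.5\,\mathrm{MB/s}} \approx 1.10

so removing the per-I/O seek entirely buys at most about 10% over a 3 MB I/O size.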
On March 27, 2007 1:43:01 PM +0800 Wee Yeh Tan <weeyeh at gmail.com> wrote:
> Frank,
>
> As promised. I got my 6140 SATA delivered yesterday and I hooked it
> up to a T2000 on S10u3. The T2000 saw the disks straight away and it
> has been "working" for the last hour. I'll be running some benchmarks on it.
> I'll probably have a week with it until our vendor comes around and
> steals it from me.

Any update?

-frank
On 4/3/07, Frank Cusack <fcusack at fcusack.com> wrote:
> > As promised. I got my 6140 SATA delivered yesterday and I hooked it
> > up to a T2000 on S10u3. The T2000 saw the disks straight away and it
> > has been "working" for the last hour. I'll be running some benchmarks on it.
> > I'll probably have a week with it until our vendor comes around and
> > steals it from me.
>
> Any update?

Nope. Got distracted by a series of unfortunate events. :(

--
Just me,
Wire ...
On Thu, Mar 22, 2007 at 01:21:04PM -0700, Frank Cusack wrote:
> Does anyone have a 6140 expansion shelf that they can hook directly to
> a host? Just wondering if this configuration works. Previously I
> thought the expansion connector was proprietary, but now I see it's
> just fibre channel.

The 6140 controller unit has either 2GB or 4GB of cache. Does the 6140 expansion shelf have cache as well, or is the cache in the controller unit used for all expansion shelves?

--
albert chin (china at thewrittenword.com)
The controller unit contains all of the cache.

On 4/21/07, Albert Chin <opensolaris-zfs-discuss at mlists.thewrittenword.com> wrote:
> The 6140 controller unit has either 2GB or 4GB of cache. Does the 6140
> expansion shelf have cache as well, or is the cache in the controller
> unit used for all expansion shelves?