I want to "fix" (as much as is possible) a misalignment issue with an X-25E that I am using for both OS and as an slog device. This is on x86 hardware running Solaris 10U8. Partition table looks as follows: Part Tag Flag Cylinders Size Blocks 0 root wm 1 - 1306 10.00GB (1306/0/0) 20980890 1 unassigned wu 0 0 (0/0/0) 0 2 backup wm 0 - 3886 29.78GB (3887/0/0) 62444655 3 unassigned wu 1307 - 3886 19.76GB (2580/0/0) 41447700 4 unassigned wu 0 0 (0/0/0) 0 5 unassigned wu 0 0 (0/0/0) 0 6 unassigned wu 0 0 (0/0/0) 0 7 unassigned wu 0 0 (0/0/0) 0 8 boot wu 0 - 0 7.84MB (1/0/0) 16065 9 unassigned wu 0 0 (0/0/0) 0 And here is fdisk: Total disk size is 3890 cylinders Cylinder size is 16065 (512 byte) blocks Cylinders Partition Status Type Start End Length % ========= ====== ============ ===== === ====== == 1 Active Solaris 1 3889 3889 100 Slice 0 is where the OS lives and slice 3 is our slog. As you can see from the fdisk partition table (and from the slice view), the OS partition starts on cylinder 1 -- which is not 4k aligned. I don''t think there is much I can do to fix this without reinstalling. However, I''m most concerned about the slog slice and would like to recreate its partition such that it begins on cylinder 1312. So a few questions: - Would making s3 be 4k block aligned help even though s0 is not? - Do I need to worry about 4k block aligning the *end* of the slice? eg instead of ending s3 on cylinder 3886, end it on 3880 instead? Thanks, Ray
On Mon, Aug 30 at 15:05, Ray Van Dolson wrote:
>I want to "fix" (as much as is possible) a misalignment issue with an
>X-25E that I am using both for the OS and as an slog device.
>
>This is on x86 hardware running Solaris 10U8.
>
>[partition table and fdisk output snipped]
>
>So a few questions:
>
> - Would making s3 4k block aligned help even though s0 is not?
> - Do I need to worry about 4k block aligning the *end* of the slice?
>   E.g., instead of ending s3 on cylinder 3886, end it on 3880?

Do you specifically have benchmark data indicating unaligned or
aligned+offset access on the X25-E is significantly worse than aligned
access?

I'd thought the "tier1" SSDs didn't have problems with these workloads.

--eric

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org
On Mon, Aug 30, 2010 at 03:37:52PM -0700, Eric D. Mudama wrote:
> On Mon, Aug 30 at 15:05, Ray Van Dolson wrote:
> >I want to "fix" (as much as is possible) a misalignment issue with an
> >X-25E that I am using both for the OS and as an slog device.
> >
> >[partition table and fdisk output snipped]
>
> Do you specifically have benchmark data indicating unaligned or
> aligned+offset access on the X25-E is significantly worse than aligned
> access?
>
> I'd thought the "tier1" SSDs didn't have problems with these workloads.

I've been experiencing heavy Device Not Ready errors with this
configuration, and thought perhaps it could be exacerbated by the block
alignment issue.

See this thread[1].

So this would be a troubleshooting step to attempt to further isolate
the problem -- by eliminating the 4k alignment issue as a factor.

Just want to make sure I set up the alignment as optimally as possible.

Ray

[1] http://markmail.org/message/5rmfzvqwlmosh2oh
comment below...

On Aug 30, 2010, at 3:42 PM, Ray Van Dolson wrote:
> On Mon, Aug 30, 2010 at 03:37:52PM -0700, Eric D. Mudama wrote:
>> On Mon, Aug 30 at 15:05, Ray Van Dolson wrote:
>>> I want to "fix" (as much as is possible) a misalignment issue with an
>>> X-25E that I am using both for the OS and as an slog device.
>>>
>>> [partition table and fdisk output snipped]
>>>
>>> Slice 0 is where the OS lives and slice 3 is our slog. As you can see
>>> from the fdisk partition table (and from the slice view), the OS
>>> partition starts on cylinder 1 -- which is not 4k aligned.

To get to a fine alignment, you need an EFI label. However, Solaris does
not (yet) support booting from EFI labeled disks. The older SMI labels
are all "cylinder" aligned, which gives you a 1/4 chance of alignment.

>>> I don't think there is much I can do to fix this without reinstalling.
>>>
>>> However, I'm most concerned about the slog slice and would like to
>>> recreate its partition such that it begins on cylinder 1312.
>>
>> Do you specifically have benchmark data indicating unaligned or
>> aligned+offset access on the X25-E is significantly worse than aligned
>> access?
>>
>> I'd thought the "tier1" SSDs didn't have problems with these workloads.
>
> I've been experiencing heavy Device Not Ready errors with this
> configuration, and thought perhaps it could be exacerbated by the block
> alignment issue.
>
> See this thread[1].
>
> So this would be a troubleshooting step to attempt to further isolate
> the problem -- by eliminating the 4k alignment issue as a factor.

In my experience, port expanders with SATA drives do not handle
the high I/O rate that can be generated by a modest server. We are
still trying to get to the bottom of these issues, but they do not appear
to be related to the OS, mpt driver, ZIL use, or alignment.
 -- richard

--
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
ZFS and performance consulting
http://www.RichardElling.com
On Mon, Aug 30, 2010 at 03:56:42PM -0700, Richard Elling wrote:
> comment below...
>
> On Aug 30, 2010, at 3:42 PM, Ray Van Dolson wrote:
> >>> [earlier quoting snipped]
> >>>
> >>> Slice 0 is where the OS lives and slice 3 is our slog. As you can see
> >>> from the fdisk partition table (and from the slice view), the OS
> >>> partition starts on cylinder 1 -- which is not 4k aligned.
>
> To get to a fine alignment, you need an EFI label. However, Solaris does
> not (yet) support booting from EFI labeled disks. The older SMI labels
> are all "cylinder" aligned, which gives you a 1/4 chance of alignment.

Yep... our other boxes similar to this one are using whole disks as ZIL,
so we're able to use EFI. The Device Not Ready errors happen there too
(the SSDs are on an expander), but only at between 5-15 errors per day
(vs. the 500 per hour on the split OS/slog setup).

> In my experience, port expanders with SATA drives do not handle
> the high I/O rate that can be generated by a modest server. We are
> still trying to get to the bottom of these issues, but they do not appear
> to be related to the OS, mpt driver, ZIL use, or alignment.
>  -- richard

Very interesting. We've been looking at Nexenta as we haven't been able
to reproduce our issues on OpenSolaris -- I was hoping this meant
NexentaStor wouldn't have the issue.
In any case -- any thoughts on whether or not I'll be helping anything
if I change my slog slice starting cylinder to be 4k aligned even
though slice 0 isn't?

Thanks,
Ray
On Tue, Aug 31, 2010 at 6:03 AM, Ray Van Dolson <rvandolson at esri.com> wrote:
> In any case -- any thoughts on whether or not I'll be helping anything
> if I change my slog slice starting cylinder to be 4k aligned even
> though slice 0 isn't?

Some people claim that, due to how ZFS works, there will be a
performance hit as long as the reported sector size differs from the
physical sector size.

This thread[1] has the discussion on what happened and how to handle
such drives on FreeBSD.

[1] http://marc.info/?l=freebsd-fs&m=126976001214266&w=2

--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
On Mon, Aug 30, 2010 at 04:12:48PM -0700, Edho P Arief wrote:
> On Tue, Aug 31, 2010 at 6:03 AM, Ray Van Dolson <rvandolson at esri.com> wrote:
> > In any case -- any thoughts on whether or not I'll be helping anything
> > if I change my slog slice starting cylinder to be 4k aligned even
> > though slice 0 isn't?
>
> Some people claim that, due to how ZFS works, there will be a
> performance hit as long as the reported sector size differs from the
> physical sector size.
>
> This thread[1] has the discussion on what happened and how to handle
> such drives on FreeBSD.
>
> [1] http://marc.info/?l=freebsd-fs&m=126976001214266&w=2

Thanks for the pointer -- these posts seem to reference data disks
within the pool rather than disks being used for slog. Perhaps some of
the same issues could arise, but I'm not sure that variable stripe
sizing in a RAIDZ pool would change how the ZIL / slog devices are
addressed.

I'm sure someone will correct me if I'm wrong on that...

Ray
On Tue, Aug 31 at 6:12, Edho P Arief wrote:
>On Tue, Aug 31, 2010 at 6:03 AM, Ray Van Dolson <rvandolson at esri.com> wrote:
>> In any case -- any thoughts on whether or not I'll be helping anything
>> if I change my slog slice starting cylinder to be 4k aligned even
>> though slice 0 isn't?
>
>Some people claim that, due to how ZFS works, there will be a
>performance hit as long as the reported sector size differs from the
>physical sector size.
>
>This thread[1] has the discussion on what happened and how to handle
>such drives on FreeBSD.
>
>[1] http://marc.info/?l=freebsd-fs&m=126976001214266&w=2

Yes, but that's for a 4k rotating drive, which has a much different
latency profile than an SSD.

I was wondering if anyone had any benchmarks showing this alignment
mattered on the latest SSDs. My guess is no, but I have no data.

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org
Christopher George
2010-Aug-31 05:11 UTC
[zfs-discuss] 4k block alignment question (X-25E)
> I was wondering if anyone had any benchmarks showing this alignment
> mattered on the latest SSDs. My guess is no, but I have no data.

I don't believe there can be any doubt that a Flash based SSD (tier1
or not) is negatively affected by partition misalignment. It is
intrinsic to the required asymmetric erase/program dual operation and
the resultant RMW penalty incurred when performing an unaligned write.
This is detailed in the following vendor benchmarking guidelines
(SF-1500 controller):

http://www.smartm.com/files/salesLiterature/storage/AN001_Benchmark_XceedIOPSSATA_Apr2010_.pdf

Highlight from the link - "Proper partition alignment is one of the
most critical attributes that can greatly boost the I/O performance of
an SSD due to reduced read-modify-write operations."

It should be noted, the above highlight only applies to Flash based
SSDs; an NVRAM based SSD does *not* suffer the same fate, as its
performance is not bound by, and does not vary with, partition
(mis)alignment.

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com
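To make the read-modify-write point concrete, here is a minimal sketch
(Python; the 4 KiB flash page size is assumed purely for illustration, and
actual NAND page and erase-block sizes vary by device and are not specified
here): a logical 4 KiB write whose start is shifted off a page boundary
straddles two physical pages, and each partially covered page has to be read,
merged, and reprogrammed rather than simply written.

# Illustrative only: count how many flash pages a single logical write touches,
# for an aligned vs. a 512 B misaligned starting offset.

PAGE = 4096   # assumed page size for the example

def pages_touched(offset_bytes, length_bytes):
    """Number of physical pages covered by a write at the given byte offset."""
    first = offset_bytes // PAGE
    last = (offset_bytes + length_bytes - 1) // PAGE
    return last - first + 1

print(pages_touched(0, 4096))    # aligned 4 KiB write    -> 1 page
print(pages_touched(512, 4096))  # misaligned 4 KiB write -> 2 pages, both needing RMW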
On Mon, Aug 30, 2010 at 10:11:32PM -0700, Christopher George wrote:
> I don't believe there can be any doubt that a Flash based SSD (tier1
> or not) is negatively affected by partition misalignment. It is
> intrinsic to the required asymmetric erase/program dual operation and
> the resultant RMW penalty incurred when performing an unaligned write.
>
> [...]

Here's an article with some benchmarks:

http://wikis.sun.com/pages/viewpage.action?pageId=186241353

Seems to really impact IOPS.

Ray
On Mon, 30 Aug 2010, Christopher George wrote:
>
> It should be noted, the above highlight only applies to Flash based
> SSDs; an NVRAM based SSD does *not* suffer the same fate, as its
> performance is not bound by, and does not vary with, partition
> (mis)alignment.

What is a "NVRAM" based SSD? It seems to me that you are misusing the
term "NVRAM".

Bob

--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Christopher George
2010-Aug-31 17:42 UTC
[zfs-discuss] 4k block alignment question (X-25E)
> What is a "NVRAM" based SSD?It is simply an SSD (Solid State Drive) which does not use Flash, but does use power protected (non-volatile) DRAM, as the primary storage media. http://en.wikipedia.org/wiki/Solid-state_drive I consider the DDRdrive X1 to be a NVRAM based SSD even though we delineate the storage media used depending on host power condition. The X1 exclusively uses DRAM for all IO processing (host is on) and then Flash for permanent non-volatility (host is off). Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org
On Mon, Aug 30, 2010 at 3:05 PM, Ray Van Dolson <rvandolson at esri.com> wrote:
> I want to "fix" (as much as is possible) a misalignment issue with an
> X-25E that I am using both for the OS and as an slog device.

It's pretty easy to get the alignment right.

fdisk uses a default of 63/255/*, which isn't easy to change. This
makes each cylinder ( 63 * 255 * 512b ). You want ( $cylinder_offset )
* ( 63 * 255 * 512b ) / ( $block_alignment_size ) to be evenly
divisible. For 4k alignment you want the offset to be a multiple of 8.

With fdisk, create your SOLARIS2 partition so that it uses the entire
disk. The partition will run from cylinder 1 to whatever. Cylinder 0 is
used for the MBR, so it's automatically un-aligned.

When you create slices in format, the MBR cylinder isn't visible, so
you have to subtract 1 from the offset; your first slice should start
on cylinder 7. Each additional slice should start on a cylinder that is
a multiple of 8, minus 1, e.g. 63, 1999, etc.

It doesn't matter if the end of a slice is unaligned, other than to
make aligning the next slice easier.

-B

--
Brandon High : bhigh at freaks.com
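A minimal sketch of that arithmetic (Python; it assumes the 63-sector,
255-head, 512-byte-sector geometry quoted above, and the function name is
just for illustration): it lists the format-visible cylinder numbers that
begin on a 4 KiB boundary once the hidden MBR cylinder is accounted for,
which is where the 7, 63, 1999, etc. sequence comes from.

# Rough sketch of the cylinder-offset arithmetic (assumed geometry:
# 63 sectors/track * 255 heads = 16065 sectors per cylinder, 512-byte sectors).
# A slice is 4 KiB aligned when its absolute starting byte offset is a multiple
# of 4096; format hides fdisk's cylinder 0, so the cylinder number typed into
# format is one less than the absolute cylinder number.

SECTORS_PER_CYL = 63 * 255          # 16065
CYL_BYTES = SECTORS_PER_CYL * 512   # 8225280

def aligned_format_cylinders(limit):
    """Yield format-visible cylinder numbers whose absolute offset is 4 KiB aligned."""
    for abs_cyl in range(8, limit, 8):            # every 8th absolute cylinder is aligned
        assert (abs_cyl * CYL_BYTES) % 4096 == 0
        yield abs_cyl - 1                         # format numbering skips the MBR cylinder

print(list(aligned_format_cylinders(80)))
# -> [7, 15, 23, 31, 39, 47, 55, 63, 71]   (matches "start on cylinder 7", "63", ...)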
Christopher George wrote:
>> What is a "NVRAM" based SSD?
>
> It is simply an SSD (Solid State Drive) which does not use Flash, but
> does use power-protected (non-volatile) DRAM, as the primary storage
> media.
>
> http://en.wikipedia.org/wiki/Solid-state_drive
>
> I consider the DDRdrive X1 to be a NVRAM based SSD even though we
> delineate the storage media used depending on host power condition.
> The X1 exclusively uses DRAM for all IO processing (host is on) and
> then Flash for permanent non-volatility (host is off).

NVRAM = non-volatile random access memory. It is a general category.

EEPROM = electrically-erasable programmable read-only memory. It is a
specific type of NVRAM.

Flash memory = memory used in flash devices, commonly NOR or NAND
based. It is a specific type of EEPROM, which in turn is a specific
type of NVRAM.

http://en.wikipedia.org/wiki/Non-volatile_random_access_memory
http://en.wikipedia.org/wiki/EEPROM
http://en.wikipedia.org/wiki/Flash_memory

He means a DRAM based SSD with NVRAM (flash) backup vs. SSDs that use
NVRAM (flash) directly. This class of SSD may use DDR DIMMs or may be
integrated. Almost all of these devices that retain their data upon
power loss are technically NVRAM based. (An exception could be a hard
drive based device that uses a DRAM cache equal to its hard drive
storage capacity.)

It is effectively what you would get if you had a regular flash based
SSD with an internal RAM cache equal in size to the nonvolatile
storage, plus enough energy storage to write out the whole cache upon
power loss. I doubt there would be any additional performance beyond
what you could see from a RAMDISK carved from main memory (actually
there would probably be theoretically lower performance because of
lower bus bandwidths). It does effectively solve the problems posed by
motherboard physical RAM limits and of an unexpected power loss due to
failed power supplies or failed UPSes.
On 31.08.2010 21:23, Ray Van Dolson wrote:
> Here's an article with some benchmarks:
>
> http://wikis.sun.com/pages/viewpage.action?pageId=186241353
>
> Seems to really impact IOPS.

This is really interesting reading. Can someone do the same tests with
an Intel X25-E?
On Tue, Aug 31, 2010 at 12:47:49PM -0700, Brandon High wrote:
> On Mon, Aug 30, 2010 at 3:05 PM, Ray Van Dolson <rvandolson at esri.com> wrote:
> > I want to "fix" (as much as is possible) a misalignment issue with an
> > X-25E that I am using both for the OS and as an slog device.
>
> It's pretty easy to get the alignment right.
>
> [alignment how-to snipped]
>
> It doesn't matter if the end of a slice is unaligned, other than to
> make aligning the next slice easier.
>
> -B

Thanks Brandon.

Just a follow-up to my original post... unfortunately I couldn't try
aligning the slice on the SSD I was also using for slog/ZIL. The
slog/ZIL slice was too small to be added to the ZIL mirror, as the disk
we'd thrown in the system bypassing the expander was being used
completely (via an EFI label).

Still wanted to test, however, so I pulled one of the drives from my
rpool and added the entire disk to my ZIL mirror. This uses the EFI
label and aligns everything correctly.

Unit Attention errors immediately began showing up.

I pulled that drive from the ZIL mirror and then used one of my two
L2ARC drives (also X-25Es) in the same fashion. Same problem.

So I believe the problem is still expander related more so than
alignment related.

Too bad.

Ray