Hi,

I'm planning to test a ZFS solution for our application in a pre-production
data center, and I'm looking for a good filesystem benchmark to see which
configuration is the best fit.

The servers run Solaris 10 and are connected to an EMC Clariion CX3-20 by
two FC cables in a fully high-available layout (two HBAs connected to
different switches, with the switches cross-connected to both storage
processors). The HBAs on the host and the switch ports are 2 Gbps, while
the CX3-20 (disks and SPs) supports 4 Gbps.

The LUNs are configured as RAID5 across 15 disks.

In the past I used iozone (http://www.iozone.org/), but I'm wondering if
there are other tools.

Thanks.

Cesare
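P.S. In case it helps, the sort of iozone invocation I mean (the mount
point and sizes below are just placeholders; the file should be larger
than RAM so the array, not the ARC, is what gets measured):

    # write/rewrite and read/reread tests, 128 KB records, 8 GB file
    iozone -i 0 -i 1 -r 128k -s 8g -f /tank/fs/iozone.tmp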
On 5/9/07, cesare VoltZ <voltsz at gmail.com> wrote:
> In the past I used iozone (http://www.iozone.org/), but I'm wondering if
> there are other tools.

Tried filebench before?
http://www.solarisinternals.com/wiki/index.php/FileBench

Rayson
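For those who haven't used it, a minimal interactive filebench session
looks roughly like this (the workload name and target directory are only
examples; the wiki page above lists the available workload personalities):

    filebench
    filebench> load varmail
    filebench> set $dir=/tank/fs
    filebench> run 60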
cesare VoltZ wrote:
> Hi,
>
> I'm planning to test a ZFS solution for our application in a
> pre-production data center, and I'm looking for a good filesystem
> benchmark to see which configuration is the best fit.

Pedantically, your application itself is always the best benchmark.
 -- richard
On Wed, 2007-05-09 at 16:27 +0200, cesare VoltZ wrote:
> Hi,
>
> I'm planning to test a ZFS solution for our application in a
> pre-production data center, and I'm looking for a good filesystem
> benchmark to see which configuration is the best fit.
>
> The LUNs are configured as RAID5 across 15 disks.

Is there any point in using hardware RAID5 when you have raidz in ZFS?
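For comparison, handing the 15 spindles to ZFS as individual LUNs and
letting raidz provide the redundancy might look like this (the device
names are hypothetical, and splitting into two 7-disk vdevs plus a hot
spare is just one reasonable layout, not a recommendation):

    # two 7-disk raidz vdevs and one hot spare
    zpool create tank \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
        raidz c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 \
        spare c2t7d0
    zpool status tank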
> > The LUNs are configured as RAID5 across 15 disks.

Won't such a large number of spindles have a negative impact on
performance (in a single RAID-5 setup)? A single I/O from the system
generates lots of backend I/Os.
On Wed, 2007-05-09 at 21:09 +0200, Louwtjie Burger wrote:
> > > The LUNs are configured as RAID5 across 15 disks.
>
> Won't such a large number of spindles have a negative impact on
> performance (in a single RAID-5 setup)? A single I/O from the system
> generates lots of backend I/Os.

Yes. With that many disks it is hard for a single I/O to span a full
stripe, so most writes fall back to read-modify-write and end up relying
on the NVRAM in your EMC box to hide the cost. That is why EMC itself
recommends a small number of disks per RAID-5 group. Of course, it again
depends on your application's workload pattern.

Ming
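Rough numbers make the point (the 64 KB stripe element size here is an
assumption; check your storage group's actual setting):

    # 14 data disks + 1 parity, 64 KB stripe element
    # full stripe = 14 x 64 KB = 896 KB
    # an 8 KB host write cannot fill the stripe, so the array must
    #   read old data + read old parity + write new data + write new parity
    # = 4 backend I/Os for every small host write (the RAID-5 write penalty)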
Go ahead with filebench, and don't forget to put

    set zfs:zfs_nocacheflush=1

in /etc/system (if using Nevada), so that ZFS stops issuing cache-flush
requests that a battery-backed array cache doesn't need.

s.

On 5/9/07, cesare VoltZ <voltsz at gmail.com> wrote:
> I'm planning to test a ZFS solution for our application in a
> pre-production data center, and I'm looking for a good filesystem
> benchmark to see which configuration is the best fit.
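If you'd rather not reboot between runs, the same tunable can (I believe)
be flipped at runtime with mdb; a sketch, to be undone once the tests are
finished:

    # stop ZFS from sending SYNCHRONIZE CACHE to the array (requires root)
    echo zfs_nocacheflush/W0t1 | mdb -kw
    # restore the default afterwards
    echo zfs_nocacheflush/W0t0 | mdb -kw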
Hi Cesare,

Hope you don't mind me asking, but we are planning to connect a Dell/EMC
CX3-20 SAN to a T5220 server (Solaris 10). Can you tell me whether you
were forced to use PowerPath, or did you use MPxIO/Traffic Manager? Did
you use LPe11000-E (single-channel) or LPe11002-E (dual-channel) HBAs?

Did you encounter any problems configuring this?

Any comments greatly appreciated.
On 11/14/07, Gary Wright <gary.wright at digica.com> wrote:
>
> Hope you don't mind me asking, but we are planning to connect a Dell/EMC
> CX3-20 SAN to a T5220 server (Solaris 10). Can you tell me whether you
> were forced to use PowerPath, or did you use MPxIO/Traffic Manager? Did
> you use LPe11000-E (single-channel) or LPe11002-E (dual-channel) HBAs?
>
> Did you encounter any problems configuring this?

My experience in this area is that PowerPath doesn't get along with ZFS
(I couldn't import the pool); using MPxIO worked fine.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
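For anyone wiring this up from scratch, enabling MPxIO on Solaris 10 is
essentially one command (a sketch; stmsboot rewrites device paths and
vfstab entries, so read its man page before running it on a production
box, and note that mpathadm ships only with recent Solaris 10 updates):

    # enable the bundled multipathing (MPxIO / Traffic Manager) stack
    stmsboot -e          # asks to reboot so the new device paths take effect
    # after the reboot, check that each LUN shows two paths
    mpathadm list lu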