Jason Zhang
2016-Jun-22 02:08 UTC
mfi driver performance too bad on LSI MegaRAID SAS 9260-8i
Mark,

Thanks.

We have the same RAID settings on both FreeBSD and CentOS, including the
cache settings. On FreeBSD I enabled the write cache, but the performance
is the same.

We don't use ZFS or UFS; we test the performance on the raw GEOM disk
"mfidX" exported by the mfi driver. We looked at the "gstat" output and
found that the write latency is too high. When we "dd" to the disk with
8k writes the latency is below 1ms, but with 64k writes it is about 6ms.
It seems that each single write operation is very slow, but I don't know
whether it is a driver problem or not.

Jason

> On 22 Jun 2016, at 12:36, Mark Felder <feld at FreeBSD.org> wrote:
>
>
>
> On Fri, Jun 17, 2016, at 02:17, Jason Zhang wrote:
>> Hi,
>>
>> I am working on a storage service based on FreeBSD. I was expecting a
>> good result because many professional storage companies use FreeBSD as
>> their OS, but I am disappointed with the bad performance. I tested the
>> performance of an LSI MegaRAID 9260-8i and got the following bad result:
>>
>> 1. Test environment:
>> (1) OS: FreeBSD 10.0 release
>
> 10.0-RELEASE is no longer supported. Can you test this on 10.3-RELEASE?
>
> Have you confirmed that both servers are using identical RAID controller
> settings? It's possible the CentOS install has enabled write caching but
> it's disabled on your FreeBSD server. Are you using UFS or ZFS on
> FreeBSD? Do you have atime enabled? I believe CentOS is going to have
> "relatime" or "nodiratime" by default to mitigate the write penalty on
> each read access.
>
> We need more data :-)
>
>
> --
> Mark Felder
> ports-secteam member
> feld at FreeBSD.org
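For reference, the kind of test described above could be reproduced along
these lines (a sketch only; the device name /dev/mfid0 and the block counts
are illustrative, and writing to the raw device destroys its contents):

  # sequential writes to the raw volume with 8k and 64k block sizes
  dd if=/dev/zero of=/dev/mfid0 bs=8k count=100000
  dd if=/dev/zero of=/dev/mfid0 bs=64k count=12500

  # in another terminal, watch per-request write latency (the ms/w column)
  gstat -f mfid0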
Doros Eracledes
2016-Jun-22 05:28 UTC
mfi driver performance too bad on LSI MegaRAID SAS 9260-8i
As a side note, we also use this controller with FreeBSD 10.1, but we
configured each drive as a JBOD and then created raidz ZFS pools on top,
and that was much faster than letting the LSI do RAID5.

Best
Doros
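Purely as an illustration of that layout (device names are hypothetical,
not taken from Doros's setup), a raidz pool over individually exposed
drives could be created like this:

  # one vdev per physical drive passed through by the controller
  zpool create tank raidz mfid1 mfid2 mfid3 mfid4
  zpool status tank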
Borja Marcos
2016-Jun-22 06:58 UTC
mfi driver performance too bad on LSI MegaRAID SAS 9260-8i
> On 22 Jun 2016, at 04:08, Jason Zhang <jasonzhang at cyphytech.com> wrote:
>
> Mark,
>
> Thanks
>
> We have same RAID setting both on FreeBSD and CentOS including cache
> setting. In FreeBSD, I enabled the write cache but the performance is the
> same.
>
> We don't use ZFS or UFS, and test the performance on the RAW GEOM disk
> "mfidX" exported by mfi driver. We observed the "gstat" result and found
> that the write latency is too high. When we "dd" the disk with 8k, it is
> lower than 1ms, but it is 6ms on 64kb write. It seems that each single
> write operation is very slow. But I don't know whether it is a driver
> problem or not.

There is an option you can use (I do it all the time!) to make the card
behave as a plain HBA so that the disks are handled by the "da" driver.

Add this to /boot/loader.conf:

hw.mfi.allow_cam_disk_passthrough=1
mfip_load="YES"

And do the tests accessing the disks as "da". To avoid confusion, it's
better to make sure the disks are not part of a "jbod" or logical volume
configuration.

Borja.
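After rebooting with those settings, one way to confirm the passthrough
disks are visible (a sketch; device numbering will differ per system):

  camcontrol devlist      # the physical drives should now show up as daN
  diskinfo -v /dev/da0    # identity and geometry of one of the new disks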
O. Hartmann
2016-Aug-01 06:45 UTC
mfi driver performance too bad on LSI MegaRAID SAS 9260-8i
On Wed, 22 Jun 2016 08:58:08 +0200
Borja Marcos <borjam at sarenet.es> wrote:

> > On 22 Jun 2016, at 04:08, Jason Zhang <jasonzhang at cyphytech.com> wrote:
> >
> > Mark,
> >
> > Thanks
> >
> > We have same RAID setting both on FreeBSD and CentOS including cache
> > setting. In FreeBSD, I enabled the write cache but the performance is the
> > same.
> >
> > We don't use ZFS or UFS, and test the performance on the RAW GEOM disk
> > "mfidX" exported by mfi driver. We observed the "gstat" result and found
> > that the write latency is too high. When we "dd" the disk with 8k, it is
> > lower than 1ms, but it is 6ms on 64kb write. It seems that each single
> > write operation is very slow. But I don't know whether it is a driver
> > problem or not.
>
> There is an option you can use (I do it all the time!) to make the card
> behave as a plain HBA so that the disks are handled by the "da" driver.
>
> Add this to /boot/loader.conf
>
> hw.mfi.allow_cam_disk_passthrough=1
> mfip_load="YES"
>
> And do the tests accessing the disks as "da". To avoid confusion, it's
> better to make sure the disks are not part of a "jbod" or logical volume
> configuration.
>
> Borja.

[...]

How is this supposed to work when ALL disks (including the boot device) sit
behind the mfi controller itself (in our case a Fujitsu CP400i, based on the
LSI3008 and detected by FreeBSD 11-BETA and 12-CURRENT)? I did not find any
way to force the CP400i into a mode in which it acts as a plain HBA (we
intend to use all drives with ZFS and let the FreeBSD kernel/ZFS control
everything).

The boot device is a 256 GB enterprise Samsung SSD, and putting the UEFI
loader onto an EFI partition from 11-CURRENT-ALPHA4 is worse: dd takes up to
almost a minute to write the image to the SSD, and the SSD activity LED is
blinking the whole time. Caches are off. I tried to enable the cache with
'mfiutil cache mfid0 enable', but it failed, and it failed on all other
attached drives as well.

I didn't investigate further right now, since the experience with the EFI
boot loader makes me suspect bad performance, which is harsh, so to speak.
Glad to have found this thread anyway.

I am cross-posting this to CURRENT as well, since it might be an issue with
CURRENT.

Kind regards,
Oliver Hartmann
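For reference, the cache check that was attempted looks roughly like this
(volume name illustrative; on this controller the enable step failed as
described above):

  mfiutil show volumes          # list logical volumes known to the controller
  mfiutil cache mfid0           # display the current cache policy for mfid0
  mfiutil cache mfid0 enable    # attempt to enable caching (failed here)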