Alex Lee
2008-Sep-08 07:51 UTC
[Lustre-discuss] very slow IO using o_direct write on RHEL 5.1
Anyone know if there is something inherently slow about O_DIRECT writes on Linux? I am running Lustre 1.6.5.1 on a few Dell 2950s. Each OST is capable of 300MB/s and I have 8 OSTs on my FS.

Using buffered IO I can max out the bandwidth fine, but as soon as I try a single-file O_DIRECT write I get only 135MB/s, no matter what stripe count, RPCs-in-flight setting, or Linux sector size I use. I can get a little more bandwidth using a larger stripe size, but that only takes me up to 200MB/s. I can't help wondering if something is holding up the IOPS on a single-client, single-file write. I'm trying to figure out whether it's the Lustre client or just the way Linux handles the IO.

Anyone have any settings I might be forgetting on the Linux server/client? I have /sys/fs/block/sd*/max_sectorsize set and the elevator set to noop. I can't think of anything on the Lustre side, since I'm not even using more than 1-2 RPCs in flight when running.

Any help would be really appreciated,
-Alex
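[Editor's note: for readers unfamiliar with the test being described, here is a minimal sketch of a single-stream O_DIRECT write in C. The file name, transfer size, and total size are made up for illustration and are not Alex's actual benchmark. O_DIRECT requires the user buffer, file offset, and transfer size to be suitably aligned, and each write() blocks until it completes, which is why per-write latency can cap single-threaded throughput.]

/* Hypothetical single-file O_DIRECT write test (illustration only). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t blksz = 4UL * 1024 * 1024;     /* 4MB per write, as in the post */
    const size_t total = 1024UL * 1024 * 1024;  /* write 1GB in total */
    void *buf;

    /* O_DIRECT requires an aligned user buffer. */
    if (posix_memalign(&buf, 4096, blksz) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 'A', blksz);

    int fd = open("/mnt/lustre/odirect_test",
                  O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* With O_DIRECT each write() is synchronous: the next transfer does
     * not start until the previous one has completed, so only one chunk
     * is in flight at a time from this single thread. */
    for (size_t done = 0; done < total; done += blksz) {
        if (write(fd, buf, blksz) != (ssize_t)blksz) {
            perror("write");
            return 1;
        }
    }

    close(fd);
    free(buf);
    return 0;
}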
Andreas Dilger
2008-Sep-08 10:33 UTC
[Lustre-discuss] very slow IO using o_direct write on RHEL 5.1
On Sep 08, 2008 16:51 +0900, Alex Lee wrote:
> Anyone know if there is something inherently slow about O_DIRECT writes
> on Linux?
>
> I am running Lustre 1.6.5.1 on a few Dell 2950s. Each OST is capable of
> 300MB/s and I have 8 OSTs on my FS.
>
> Using buffered IO I can max out the bandwidth fine, but as soon as I try
> a single-file O_DIRECT write I get only 135MB/s, no matter what stripe
> count, RPCs-in-flight setting, or Linux sector size I use.

How big is the IO size? You need writes of at least 8MB (or larger)
in order to keep enough IOs in flight to saturate the network.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
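[Editor's note: to illustrate the "IOs in flight" point, with a single synchronous O_DIRECT stream only one write is outstanding at a time, so throughput is roughly bounded by transfer size divided by per-write round-trip time. Larger writes are one way to raise that bound; another, sketched below purely as an illustration (the file name, sizes, and thread count are assumptions, not anything from this thread), is to issue several aligned writes concurrently at different offsets. Compile with something like `gcc -O2 odirect_mt.c -lpthread`.]

/* Illustration only: several threads each issue O_DIRECT pwrite()s to
 * their own region of one file, so multiple writes are in flight. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS 4
#define BLKSZ    (8UL * 1024 * 1024)    /* 8MB per write, per Andreas's suggestion */
#define PER_THR  (256UL * 1024 * 1024)  /* 256MB written by each thread */

static int fd;

static void *writer(void *arg)
{
    long id = (long)arg;
    off_t base = (off_t)id * (off_t)PER_THR;  /* each thread owns one region */
    void *buf;

    if (posix_memalign(&buf, 4096, BLKSZ) != 0)
        return NULL;
    memset(buf, 'A', BLKSZ);

    for (off_t off = 0; off < (off_t)PER_THR; off += BLKSZ)
        if (pwrite(fd, buf, BLKSZ, base + off) != (ssize_t)BLKSZ)
            perror("pwrite");

    free(buf);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    fd = open("/mnt/lustre/odirect_test",
              O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, writer, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    close(fd);
    return 0;
}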
Alex Lee
2008-Sep-08 12:32 UTC
[Lustre-discuss] very slow IO using o_direct write on RHEL 5.1
> How big is the IO size? You need writes of at least 8MB (or larger)
> in order to keep enough IOs in flight to saturate the network.

I'm using 4MB. Hmm, OK. I was wondering if maybe Linux or Lustre just doesn't like direct IO. Other than a larger IO size, is there anything I can set?
Alex Lee
2008-Sep-08 13:30 UTC
[Lustre-discuss] very slow IO using o_direct write on RHEL 5.1
> How big is the IO size? You need writes of at least 8MB (or larger)
> in order to keep enough IOs in flight to saturate the network.

I've tried 8MB and 16MB and the speed doesn't seem to change from 4MB. Hmm...