Peter Niemayer
2010-Jun-07 16:54 UTC
Poor performance (1/4 that of XFS) when appending to lots of files
Hi,

we ran a benchmark using btrfs on a server that essentially does the equivalent of the following:

Open one large (25 GB) test-set file for reading, which consists of many small randomly generated messages. Each message consists of a primary key (an integer in the range of 0 to 1,000,000) and a random number of arbitrary data bytes (length in the range from 10 to 1000 bytes).

For each message, the server opens the file determined by the primary key with O_APPEND, write()s the random data bytes to the file, then closes the file.

The server runs 4 threads in parallel to spread the above work over 4 CPU cores; each thread processes a quarter of the primary keys (primary_key & 0x03).

The server does so until the whole 25 GB test-set is processed. It does not do any sync or fsync operation; the machine has 4 GB of memory, so it has to actually write out most of the data.

This test, when run on a fast SSD (attached to a SAS channel), took us ~30 min to complete using XFS (mounted with "nobarrier"; data security is not an issue in this scenario). When using btrfs on the same hardware (same SSD, same system), it took us ~120 min. The filesystem was mounted using the following options:

> mount -t btrfs -o nodatasum,nodatacow,nobarrier,ssd,noacl,notreelog,noatime,nodiratime /dev/sdg /data-ssd3

(Both measurements were done under linux-2.6.34.)

It looks like btrfs is not really tuned to perform well in the above scenario. I would appreciate any advice on how to improve btrfs' performance for this workload.

Regards,

Peter Niemayer
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
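[Editor's note: the workload described above can be sketched roughly as follows. This is a minimal Python illustration, not the original (commercial) benchmark code; the `append_messages` helper, the directory layout (one file named after each primary key), and the modulo-based thread partitioning are assumptions drawn from the description. With 4 threads, `key % 4` matches the `primary_key & 0x03` split mentioned in the mail.]

```python
import os
import threading


def append_messages(messages, data_dir, num_threads=4):
    """Append each message's payload to the file named after its primary key.

    messages: iterable of (primary_key, payload_bytes) pairs.
    Threads partition the key space by primary_key % num_threads,
    so each key is only ever touched by one thread.
    No sync/fsync is issued, matching the described workload.
    """
    def worker(tid):
        for key, payload in messages:
            if key % num_threads != tid:
                continue
            # Open with O_APPEND, write, close -- once per message.
            fd = os.open(os.path.join(data_dir, str(key)),
                         os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
            try:
                os.write(fd, payload)
            finally:
                os.close(fd)

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

A small run against a scratch directory, with random keys and 10-1000 byte payloads as in the description, reproduces the access pattern (many tiny appends scattered across many files) at whatever scale you like.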
Roy Sigurd Karlsbakk
2010-Jun-08 07:53 UTC
Re: Poor performance (1/4 that of XFS) when appending to lots of files
----- "Peter Niemayer" <niemayer@isg.de> wrote:

> Hi,
>
> we ran a benchmark using btrfs on a server that essentially does
> the equivalent of the following:
>
> Open one large (25GB) test-set file for reading, which consists of many
> small randomly generated messages. Each message consists of a
> primary key (an integer in the range of 0 to 1,000,000) and a random
> number of arbitrary data bytes (length in the range from 10 to 1000 byte).
> <snip/>

Can you share the code for this benchmark, please?

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
Peter Niemayer
2010-Jun-08 17:44 UTC
Re: Poor performance (1/4 that of XFS) when appending to lots of files
On 06/08/2010 09:53 AM, Roy Sigurd Karlsbakk wrote:
> Can you share the code for this benchmark, please?

I will find out whether it is reasonably easy to extract a distributable benchmark from the whole server (which I couldn't share as a whole, since it's a commercial piece of software, after all...).

Regards,

Peter Niemayer