On Wed, Jan 24, 2018 at 11:30 AM, Kostya Serebryany <kcc at google.com> wrote:
> +Aleksey, who has been dealing with the allocator recently.
>
> If you have a "((idx)) < ((kMaxNumChunks))" (0x40000, 0x40000) check
> failure, it means that you've allocated (and did not deallocate) 2^18
> large heap regions, each *at least* (2^17+1) bytes.
> This means that you have live large heap chunks of 2^35 bytes (or more)
> in total, which is 32 GB.
> Does this sound correct?

Yes, it does. Our software typically allocates several hundred GB, and
the size of the objects depends on the traffic. They would realistically
reach 2^17+1 bytes.

> If yes, yea, I guess we need to bump kMaxNumChunks

I'll increase the limit to 2^19 for our build, and I'll report the results
here.

Thanks,
Frederik
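[As a quick sanity check, the arithmetic behind the 32 GB figure quoted above can be re-derived directly from the numbers in the check failure; this is just a re-statement of the reasoning in the message, not part of the allocator code:]

```python
kMaxNumChunks = 1 << 18          # 0x40000, the limit from the check failure
min_large_chunk = (1 << 17) + 1  # smallest chunk size cited for large regions

# If all kMaxNumChunks live chunks are at the minimum size, the total
# live large-heap footprint is just over 2^35 bytes, i.e. ~32 GiB.
total = kMaxNumChunks * min_large_chunk
print(total)                    # 34360000512
print(total / (1 << 30))        # 32.000244140625
```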
On Wed, Jan 24, 2018 at 12:10 PM, Frederik Deweerdt <frederik.deweerdt at gmail.com> wrote:
> On Wed, Jan 24, 2018 at 11:30 AM, Kostya Serebryany <kcc at google.com>
> wrote:
> > +Aleksey, who has been dealing with the allocator recently.
> >
> > If you have a "((idx)) < ((kMaxNumChunks))" (0x40000, 0x40000) check
> > failure, it means that you've allocated (and did not deallocate) 2^18
> > large heap regions, each *at least* (2^17+1) bytes.
> > This means that you have live large heap chunks of 2^35 bytes (or more)
> > in total, which is 32 GB.
> > Does this sound correct?
>
> Yes, it does. Our software typically allocates several hundred GB, and
> the size of the objects depends on the traffic. They would
> realistically reach 2^17+1 bytes.
>
> > If yes, yea, I guess we need to bump kMaxNumChunks
>
> I'll increase the limit to 2^19 for our build, and I'll report the results
> here.

Yes, please do. Thanks!

> Thanks,
> Frederik
Hello,

On Wed, Jan 24, 2018 at 4:22 PM, Kostya Serebryany <kcc at google.com> wrote:
> On Wed, Jan 24, 2018 at 12:10 PM, Frederik Deweerdt
> <frederik.deweerdt at gmail.com> wrote:
[...]
>> > If yes, yea, I guess we need to bump kMaxNumChunks
>>
>> I'll increase the limit to 2^19 for our build, and I'll report the results
>> here.

I ended up increasing the limit to 2^20, because the max allocation for
large objects is around 100 GB on those hosts. With that done, I hit an
issue where adding stacks to the StackDepot became a visible bottleneck
after a few hours. To solve that, I've set `malloc_context_size=0`: we're
not tracking leaks, and in our experience the call site in the ASan
report carries enough information to diagnose the issue.

With these two changes, the build is behaving as expected. If you think
it's worth doing, I'm happy to post a patch for the constant increase.

Thanks,
Frederik
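[For readers following along: `malloc_context_size` is a standard ASan runtime flag, set via the `ASAN_OPTIONS` environment variable. A minimal sketch of the setting described above; the binary name is a placeholder:]

```shell
# malloc_context_size=0 tells the ASan runtime not to record malloc/free
# stack traces, which avoids the StackDepot bottleneck described above.
# Reports still include the stack at the faulting call site.
export ASAN_OPTIONS=malloc_context_size=0
echo "$ASAN_OPTIONS"              # prints: malloc_context_size=0
# ./my_asan_instrumented_server   # placeholder for the actual binary
```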