samba-bugs at samba.org
2018-May-11 16:27 UTC
[Bug 13433] New: out_of_memory in receive_sums on large files
https://bugzilla.samba.org/show_bug.cgi?id=13433
Bug ID: 13433
Summary: out_of_memory in receive_sums on large files
Product: rsync
Version: 3.1.3
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority: P5
Component: core
Assignee: wayned at samba.org
Reporter: toasty at dragondata.com
QA Contact: rsync-qa at samba.org
I'm attempting to rsync a 4TB file. It fails with:
generating and sending sums for 0
count=33554432 rem=0 blength=131072 s2length=6 flength=4398046511104
chunk[0] offset=0 len=131072 sum1=8d15ed6f
chunk[1] offset=131072 len=131072 sum1=3d66e7f7
[omitted]
chunk[6550] offset=858521600 len=131072 sum1=d70deab6
chunk[6551] offset=858652672 len=131072 sum1=657e34df
send_files(0, /bay3/b.tc)
count=33554432 n=131072 rem=0
ERROR: out of memory in receive_sums [sender]
[sender] _exit_cleanup(code=22, file=util2.c, line=105): entered
rsync error: error allocating core memory buffers (code 22) at util2.c(105)
[sender=3.1.3]
This is getting called:
92 if (!(s->sums = new_array(struct sum_buf, s->count)))
93 out_of_memory("receive_sums");
And the size of a sum_buf (40 bytes) times the number of sums (33554432) exceeds
MALLOC_MAX.
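For reference, here is that arithmetic as a standalone back-of-the-envelope
check (just a sketch, not rsync code; it assumes sizeof(struct sum_buf) is
roughly the 40 bytes mentioned above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t flength = 4398046511104ULL; /* file length from the log (4 TiB) */
    uint64_t blength = 131072;           /* block length from the log (128 KiB) */
    uint64_t sum_buf = 40;               /* approximate sizeof(struct sum_buf) */
    uint64_t malloc_max = 0x40000000;    /* MALLOC_MAX in util2.c (1 GiB) */

    uint64_t count = flength / blength;  /* 33554432 checksums (rem=0 in the log) */
    uint64_t needed = count * sum_buf;   /* bytes the sum list would need */

    printf("count=%llu needed=%llu MALLOC_MAX=%llu over=%s\n",
           (unsigned long long)count, (unsigned long long)needed,
           (unsigned long long)malloc_max,
           needed > malloc_max ? "yes" : "no");
    return 0;
}

That works out to 33554432 * 40 bytes, about 1.25 GiB for the single new_array()
call, so it is over the 1 GiB cap.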
How is this supposed to work, and why is it breaking here, when I'm pretty sure
I've transferred files bigger than this before?
samba-bugs at samba.org
2018-May-16 18:58 UTC
[Bug 13433] out_of_memory in receive_sums on large files
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #1 from Dave Gordon <dg32768 at zoho.eu> ---
Maybe try --block-size=10485760 --protocol=29 as mentioned here:
https://bugzilla.samba.org/show_bug.cgi?id=10518#c8
samba-bugs at samba.org
2018-May-16 23:07 UTC
[Bug 13433] out_of_memory in receive_sums on large files
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #2 from Kevin Day <toasty at dragondata.com> ---
(In reply to Dave Gordon from comment #1)
It looks like that's no longer allowed?
rsync: --block-size=10485760 is too large (max: 131072)
rsync error: syntax or usage error (code 1) at main.c(1591) [client=3.1.3]
#define MAX_BLOCK_SIZE ((int32)1 << 17)
if (block_size > MAX_BLOCK_SIZE) {
snprintf(err_buf, sizeof err_buf,
"--block-size=%lu is too large (max: %u)\n",
block_size, MAX_BLOCK_SIZE);
return 0;
}
OLD_MAX_BLOCK_SIZE is still defined, but options.c would need to be patched to
allow larger block sizes when protocol_version < 30.
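Presumably something along these lines would do it (an untested sketch against
the check quoted above, not a proposed patch; it assumes protocol_version
already reflects --protocol at the point where this option check runs, and that
OLD_MAX_BLOCK_SIZE is the old, much larger cap):

    /* untested sketch: fall back to the old cap when an old protocol is requested */
    int32 max_bsize = protocol_version < 30 ? OLD_MAX_BLOCK_SIZE : MAX_BLOCK_SIZE;

    if (block_size > max_bsize) {
        snprintf(err_buf, sizeof err_buf,
                 "--block-size=%lu is too large (max: %u)\n",
                 block_size, (unsigned)max_bsize);
        return 0;
    }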
samba-bugs at samba.org
2018-May-16 23:12 UTC
[Bug 13433] out_of_memory in receive_sums on large files
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #3 from Kevin Day <toasty at dragondata.com> ---
Just adding --protocol=29 falls back to the older chunk generator code and
automatically selects 2MB chunks, which is enough to at least make this work
without a malloc error.
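For reference, protocol 29 gets the bigger blocks because the block size comes
from the square-root heuristic and only the cap differs; a rough standalone
approximation follows (the real logic is rsync's sum_sizes_sqroot() and differs
in detail, and the OLD_MAX_BLOCK_SIZE value of 1 << 29 is assumed from rsync.h):

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define MAX_BLOCK_SIZE     ((int64_t)1 << 17)  /* 128 KiB cap, protocol >= 30 */
#define OLD_MAX_BLOCK_SIZE ((int64_t)1 << 29)  /* assumed pre-30 cap */

static int64_t pick_blength(int64_t flength, int protocol_version)
{
    int64_t blength = (int64_t)sqrt((double)flength); /* ~sqrt(file size) */
    blength -= blength % 8;                           /* round down to a multiple of 8 */
    int64_t cap = protocol_version < 30 ? OLD_MAX_BLOCK_SIZE : MAX_BLOCK_SIZE;
    return blength > cap ? cap : blength;
}

int main(void)
{
    int64_t flength = 4398046511104LL;  /* the 4 TiB file from the log */
    printf("protocol 31: blength=%lld\n", (long long)pick_blength(flength, 31));
    printf("protocol 29: blength=%lld\n", (long long)pick_blength(flength, 29));
    return 0;
}

With 2 MiB blocks the 4 TiB file only needs 2097152 checksums, roughly 80 MiB of
sum_buf entries, which fits comfortably under MALLOC_MAX.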
samba-bugs at samba.org
2018-May-19 16:46 UTC
[Bug 13433] out_of_memory in receive_sums on large files
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #4 from Ben RUBSON <ben.rubson at gmail.com> ---
util2.c:#define MALLOC_MAX 0x40000000
Which is 1 GiB. 1 GiB / 40 bytes x 131072 bytes is about 3.2 TiB (~3.5 TB), which
is then the maximum file size with protocol_version >= 30.
Did you try to increase MALLOC_MAX on the sending side?
Btw, it would be interesting to know why MAX_BLOCK_SIZE has been limited to 128 KB.
rsync.h:#define MAX_BLOCK_SIZE ((int32)1 << 17)
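If anyone wants to experiment with that, it would presumably just be a local,
untested build tweak along these lines (only sensible on a 64-bit sender with
enough free memory, and it moves the limit rather than removing it):

    /* util2.c, hypothetical local experiment (untested): double the
     * allocation cap so the ~1.25 GiB sum list for the 4 TiB file fits */
    #define MALLOC_MAX 0x80000000    /* was 0x40000000 (1 GiB) */

That would roughly double the protocol >= 30 ceiling to around 6.4 TiB (~7 TB),
but a larger file would hit the same wall again.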