I run Samba 2.2.2 on these vendor/OS-version/filesystem combinations:
o Solaris 8 / ufs
o Tru64 Unix V5.0A / advfs
o RedHat 7.1 / Kernel 2.4.2 / ext2fs
All of these are capable of handling large files (files with a
64-bit offset larger than 4 GB). At configure time, Samba selects the
proper compile flags (-D_LARGEFILE_SOURCE, -D_FILE_OFFSET_BITS=64)
for use with large files.
The problem: When I back up a Win2000 machine using ntbackup onto a
file on a samba share, and the backup file is bigger than 4GB, the
backup is corrupted. This has also been reported by others in the
comp.protocols.smb newsgroup.
When I run tests using:
smbclient //127.1/myshare
smb: \> put 4200MB 4200remote
where `4200MB' is a plain file of 4200 MB, the resulting file will
be only a little bit bigger than 4 GB. When I use "truss" or "strace"
on the smbd server, near the 4 GB limit I get:
    _llseek(19, 18446744073709406208, [4294821888], SEEK_SET) = 0
    write(19, "UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU"..., 64512) = 64512
    _llseek(19, 18446744073709470720, [4294886400], SEEK_SET) = 0
    write(19, "UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU"..., 64512) = 64512
[1] _llseek(19, 18446744073709535232, [4294950912], SEEK_SET) = 0
[2] write(19, "UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU"..., 64512) = 64512
![3] _llseek(19, 48128, [48128], SEEK_SET) = 0
    write(19, "UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU"..., 64512) = 64512
    _llseek(19, 112640, [112640], SEEK_SET) = 0
    write(19, "UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU"..., 64512) = 64512
[1] seek to (4GB - 16384 bytes)
[2] write 64512, offset now (4GB + 48128 bytes) = 4295015424
[3] seek to 48128 instead of (4GB + 48128 bytes) !!!!
This looks very much like a wrap-around of some offset variable that
was declared `unsigned long' instead of `off_t'. By looking at the
source, I cannot find an obvious point where this would happen, though.
When I read a file bigger than 4 GB from a Samba server with something like:
smbclient //127.1/myshare
smb: \> get 4200MB 4200local
the file `4200local' grows without bounds. Using "truss", I find that
there is a similar wrap-around after 4 GB.
Questions:
o Is smbd supposed to be large-file-proof, i.e. capable of handling files
  larger than 4 GB?
  * On Solaris 8?
  * On Tru64 Unix?
  * On RedHat 7.1 / Kernel 2.4.2 / ext2?
o Is smbclient supposed to be large-file-proof? (I see a few minor
  problems in the source; the variables get_total_size,
  put_total_size, and nread in client/client.c should be declared
  as off_t.)
o Am I missing something really obvious, such as an smb.conf option?
Regards,
- Andi Karrer
On Mon, Nov 05, 2001 at 03:57:58PM +0100, Andreas Karrer wrote:
> Questions :
> o Is smbd supposed to be large-file-proof, e.g. capable of handling files
>   larger than 4 GB?
>   * On Solaris 8?
>   * On Tru64 Unix?
>   * On RedHat 7.1 / Kernel 2.4.2 / ext2?

Yes, on all these I believe.

> o Is smbclient supposed to be large-file-proof? (I see a few minor
>   problems in the source; the variables get_total_size,
>   put_total_size, nread in client/client.c should be declared
>   as off_t.
>
> o Am I missing something really obvious, such as a smb.conf file option?

Only smbd, the server, is 64-bit clean. This work has not yet been done for smbclient. It works with a Win32 client.

Jeremy.
Hi Andreas,
I posted this very same problem a month ago, but no one could
tell me a workaround. I also posted a level 10 log from the smbd
daemon at the exact point of breakage. What I found there was that one SMB
packet before reaching 4 GB everything was OK, and with the next SMB packet
the file was corrupted, reported as 0 bytes and slowly growing.
I hope someone will help this time, since using Linux as a
backup storage system is ideal. (What about a FIFO redirected to bzip2
to receive the backup stream?... but that is another war to
fight.)
Cheers!
-----Original Message-----
From: samba-admin@lists.samba.org [mailto:samba-admin@lists.samba.org] On
behalf of Andreas Karrer
Sent: Monday, 05 November 2001 15:58
To: samba@lists.samba.org
Subject: smbd wrap-around after 4 GB
On Thu, Nov 08, 2001 at 03:42:02PM +0100, Ivan Fernandez wrote:
> Hi Andreas,
>
> I posted this very same problem a month ago, but no one could
> tell me any workaround. I also posted a level 10 log from the smbd
> daemon in the exact point of break. What I found there was that 1 smb
> packet before reaching 4 Gb. everything was ok, and the next smb packet
> reported a 0 byte and slowly growing corrupted file.
>
> I hope someone will help this time, since using linux as a
> backup storage system is ideal. (What if a FIFO file redirected to bzip2
> is used to receive the backup stream?... but this is another war to
> fight..)

This is now fixed in the HEAD and 2.2 CVS trees.

Thanks,

Jeremy.
jra@samba.org wrote:
> On Tue, Nov 06, 2001 at 03:53:49PM +0100, Andreas Karrer wrote:
>
>> So apparently my smbd is not as 64-bit-clean as it should be, but what am
>> I doing wrong?
>
> Exactly what form does the corruption take? I've transferred several
> files larger than 4 GB between W2k clients and Samba servers as part of
> the tests in shipping a release.
>
> Jeremy.

After a couple of mails between Jeremy and me, Jeremy came up with a fix and incorporated it into the HEAD and SAMBA_2_2 branches in CVS. Works great. Thanks, Jeremy!

- Andi