On Tue, Apr 02, 2002 at 02:28:30PM -0500, White, Janet L. wrote:
> I am running RH 7.2 (professional release 2.4 kernel) I have a ext3 file
> system which stores large oracle "dump files" over 4GB. We write to it via
> NFS which is working great, the problem is when we issue a file
> largefile.dmp from the linux box it fails on anything over 4GB stating that
> can't stat 'largefile.dmp' (value too large for defined data type). We can
> do the same command on the NFS mount on an HP 11.0 server and it works fine.
> We have "journaling" turned on, what are we missing on the linux box??
> Thank you so much for your help.
Whatever program you are using to stat the file needs to be compiled
to use the LFS (Large File Summit) API. The problem is that the
standard stat() system call returns data in a structure whose fields
are generally 32 bits, and that's not enough to hold the size of a
file which is larger than 2GB. So the program needs to be recompiled
to use the LFS API. What program specifically is issuing this error?
If this error is coming from a program which was provided by Red Hat,
I suggest you open a bug report with them and ask them to recompile
the program using the LFS API. This will cause the program to use the
stat64 system call, which uses a different stat structure where the
filesize field (among others) has been changed to use a 64-bit field.
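
(As an illustrative sketch, and assuming glibc: a trivial size checker
like the one below, compiled with "cc -D_FILE_OFFSET_BITS=64 -o bigstat
bigstat.c", will report a >2GB file correctly, since that feature-test
macro transparently routes stat() and off_t to the 64-bit interface.)

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
	struct stat st;	/* st_size is 64 bits with _FILE_OFFSET_BITS=64 */

	if (argc < 2) {
		fprintf(stderr, "usage: %s file\n", argv[0]);
		return 1;
	}
	if (stat(argv[1], &st) < 0) {
		perror("stat");
		return 1;
	}
	printf("%s: %lld bytes\n", argv[1], (long long) st.st_size);
	return 0;
}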
There will be a slight performance hit for programs which are compiled
to use the LFS API, but that's to be expected when you are doing
64-bit arithmetic on a 32-bit platform. However, if you need to
manipulate files larger than 2GB on a 32-bit CPU, that's what you'll
have to do.
(Note, this isn't a problem which is specific to ext3; no matter what
filesystem you use: ext2, ext3, xfs, jfs, etc., they all will have
this issue, since it relates to the interface between user application
programs and the Linux kernel.)
- Ted