I have now re-read all my partitions, and while llverdev gave an
intelligible result:
> llverdev -w -v -l /dev/sdd
> llverdev: /dev/sdd is 9996536381440 bytes (9310 GB) in size
> Timestamp: 1296134131
> write offset: 9761436672kB 269.8 MB/s
> write complete
> llverdev -r -t 1296134131 -v -l /dev/sdd
> llverdev: /dev/sdd is 9996536381440 bytes (9310 GB) in size
> Timestamp: 1296134131
> read offset: 9761298432kB 530.6 MB/s
> read complete
llverfs does not, however. After hitting the same error as before:
> llverfs: writing /srv/OST0003/llverfs.filecount failed :No space left
> on device
I tried to read this directory:
> llverfs -r -t 1296134089 -v /srv/OST0003
> Timestamp: 1296134089
>
> llverfs: reading /srv/OST0003/llverfs.filecount failed :Success
> read File name: /srv/OST0003/dir00065/file013
> llverfs: verify /srv/OST0003/dir00065/file013 failed
> offset/timestamp/inode 4243845120/1296134089/1509761: found
> 4242796544/1296134089/1509761 instead
>
> llverfs: Data verification failed
O.k., I did as I was told:
> llverfs -r -t 1296134089 -o 4242796544 -v /srv/OST0003
> Timestamp: 1296134089
>
> llverfs: reading /srv/OST0003/llverfs.filecount failed :Success
> read File name:
> read complete
and this result comes back immediately - probably because the bytes
written at that offset are at the end of the disk.
This OST0003 is on an 8.2 TB partition, but I did the same with a 7.3 TB
partition, with exactly the same behavior/results. The purpose of the
exercise was of course to test this partition, because with kernel
2.6.27, Lustre 1.8.4, it had to be formatted with
"mountfsoptions=force_over_8tb".
The 7 TB OST should be within all safe limits, but it doesn't perform
differently with llverfs.
So I'm still in the dark as to whether we should use these larger
partitions.
Cheers,
Thomas
On 27.01.2011 20:06, Andreas Dilger wrote:
> On 2011-01-27, at 04:56, Thomas Roth wrote:
> > I have run llverfs (lustre-utils 1.8.4) on an OST partition as
> > "llverfs -w -v /srv/OST0002".
> > That went smoothly until all 9759209724 kB were written, terminating
> > with:
> >
> > write File name: /srv/OST0002/dir00072/file022
> > write complete
> >
> > llverfs: writing /srv/OST0002/llverfs.filecount failed :No space left
> > on device
> >
> > My question: What should be the result of llverfs? I haven't found
> > any documentation on this tool, so I can only suspect that this was a
> > successful run?
>
> It shouldn't be terminating at this point, but I suspect a bug in
> llverfs and not in the filesystem. I _thought_ there was an llverfs(8)
> man page, but it turns out there is only an old llverfs.txt file.
>
> > (llverdev terminates with "write complete" also, no errors
> > indicated - good?)
>
> You can restart llverfs with the "-r" option so that it does the read
> tests to verify the data, and the "-t" option is needed to specify the
> timestamp used for the writes (so that it can distinguish stale data
> written from two different tests). In hindsight, it probably makes sense
> from a usability POV to allow automatically detecting the timestamp
> value from the first file read, if unspecified, and then use that for
> the rest of the test.
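>
> For example, something like this (the "-t" value is just whatever
> timestamp the write pass printed):
>
>    llverfs -w -v /srv/OST0002
>    llverfs -r -t <timestamp from the write pass> -v /srv/OST0002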
>
> Cheers, Andreas
> --
> Andreas Dilger
> Principal Engineer
> Whamcloud, Inc.