You need to test the system in ways that match how you intend to use it.
Bonnie is a nice generic test of disk and file system performance, but it
will seldom match any real-world application unless your application is
to run bonnie on systems.
If you're running a database with large tables, then the way you test will
be different than if you're running something like a mail server with
millions of very small files that are seldom updated.
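As a rough illustration, a few lines of Python are enough to approximate a
small-file, write-once workload on a mounted volume. The mount point and
the file counts below are only placeholders; adjust them to your own
volume and scale:

#!/usr/bin/env python3
# Rough sketch: populate a mounted Gluster volume with around a million
# small, rarely-updated files, roughly like a mail spool.
# MOUNT, DIRS and FILES_PER_DIR are placeholders -- adjust as needed.
import os
import random

MOUNT = "/mnt/gluster"      # assumed client mount point
DIRS = 1000                 # number of directories
FILES_PER_DIR = 1000        # small files per directory

for d in range(DIRS):
    dirname = os.path.join(MOUNT, "spool", f"{d:04d}")
    os.makedirs(dirname, exist_ok=True)
    for f in range(FILES_PER_DIR):
        # a few hundred bytes to a few KB per file
        payload = os.urandom(random.randint(256, 4096))
        with open(os.path.join(dirname, f"msg{f:06d}"), "wb") as fh:
            fh.write(payload)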
If you want to check failure modes you need to think of all the
possible failure modes.
- Loss of a server
- Loss of a brick
- Loss of communication between servers
- Loss of communication between clients and some/all servers
- File system corruption
- An error-prone network connection
and so on.
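One way to exercise the network-failure cases is to drop traffic to
Gluster's ports from a client for a while and then restore it. This is
only a sketch; the port numbers (24007 for glusterd, 49152 and up for
bricks) are assumptions that depend on your version and volume layout,
so check them with "gluster volume status" on your own installation:

#!/usr/bin/env python3
# Sketch: simulate "loss of communication between clients and servers"
# by dropping traffic to the assumed Gluster ports with iptables, then
# restoring it. Run as root on a client.
import subprocess
import time

PORTS = "24007,49152:49251"   # assumed glusterd + brick port range
RULE = ["OUTPUT", "-p", "tcp", "-m", "multiport",
        "--dports", PORTS, "-j", "DROP"]

def block():
    subprocess.run(["iptables", "-A"] + RULE, check=True)

def unblock():
    subprocess.run(["iptables", "-D"] + RULE, check=True)

if __name__ == "__main__":
    block()
    try:
        time.sleep(120)       # keep the link "down" for two minutes
    finally:
        unblock()             # always restore connectivity

The same idea, run on a server instead of a client, covers the
loss-of-communication-between-servers case.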
If you want to check data integrity then you need something like a
checksum on each file; when you're done, check each file against its
checksum, or actually run a full compare on each file.
Correctly set up, rsync could in effect do the comparison for you.
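A minimal sketch of the checksum route, assuming the test data sits under
a single directory on the mount (the paths and manifest name are
placeholders): record a SHA-256 manifest before the failure testing, then
verify every file against it afterwards.

#!/usr/bin/env python3
# Sketch: record a SHA-256 manifest for every file under a tree, then
# verify the tree against that manifest after the failure testing.
# Usage:  checkfiles.py record /mnt/gluster manifest.txt
#         checkfiles.py verify /mnt/gluster manifest.txt
import hashlib
import os
import sys

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def walk(root):
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            yield os.path.relpath(full, root), sha256(full)

def record(root, manifest):
    with open(manifest, "w") as out:
        for rel, digest in walk(root):
            out.write(f"{digest}  {rel}\n")

def verify(root, manifest):
    bad = 0
    with open(manifest) as fh:
        for line in fh:
            digest, rel = line.rstrip("\n").split("  ", 1)
            full = os.path.join(root, rel)
            if not os.path.exists(full):
                print("MISSING", rel)
                bad += 1
            elif sha256(full) != digest:
                print("CORRUPT", rel)
                bad += 1
    return bad

if __name__ == "__main__":
    action, root, manifest = sys.argv[1], sys.argv[2], sys.argv[3]
    if action == "record":
        record(root, manifest)
    elif action == "verify":
        sys.exit(1 if verify(root, manifest) else 0)

The rsync equivalent is a checksum-based dry run against a known-good
copy, e.g. rsync -rnc (recursive, dry-run, checksum): if it reports files
it would transfer, the copies no longer match.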
On 04/29/2017 10:26 AM, Gandalf Corvotempesta wrote:
> I would like to heavy test a small gluster installation.
> Anyone did this previously ?
>
> I think that running bonnie++ for 2 or more days and trying to remove
> nodes/bricks
> would be enough to test everything, but how can i ensure that, after
> some days, all
> file stored are exactly how bonnie++ has created ?
>
> Probably, rsync would be better ? I can try to sync a directory with
> millions of files
> and while the syncing is running, trying to make some damages (power
> off, unplug, etc etc).
> After all, re-running rsync should not transfer any file, they should
> be already present.
>
> Right ? If rsync re-sync files, means that gluster has made some data
> loss or data corruption.
--
Alvin Starr || voice: (905)513-7688
Netvel Inc. || Cell: (416)806-0133
alvin at netvel.net ||