On 27/10/2016 3:53 AM, Gandalf Corvotempesta wrote:
> Are you using any ZFS RAID on your servers?

Yah, RAID10.

- Two nodes with 4 WD 3TB REDs
- One node with 2 WD 3TB REDs and 6 * 600GB SAS Velociraptors

High Endurance High Speed SSDs for SLOG devices on each node. Standard SSDs don't cut it; I've run through a lot of them. ZFS/Gluster/VM hosting generates an extraordinary amount of writes. And the quoted write speeds for most SSDs are for *compressible* data; their performance goes to shit when you write pre-compressed data to them, and I have compression activated in ZFS.

--
Lindsay Mathieson
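For reference, attaching a dedicated SLOG device and enabling compression on an existing pool looks roughly like this (the pool name "tank" and the device paths are placeholders, not from the thread):

```shell
# Attach a fast SSD as a dedicated SLOG (separate intent log) device.
# "tank" and /dev/sdx are hypothetical names for your pool and SSD.
zpool add tank log /dev/sdx

# A mirrored SLOG protects against losing in-flight sync writes
# if a single log device dies:
# zpool add tank log mirror /dev/sdx /dev/sdy

# Enable lz4 compression on the pool, so data reaches the disks
# already compressed (the scenario described above):
zfs set compression=lz4 tank
zfs get compression tank
```

Note that the SLOG only absorbs synchronous writes (e.g. NFS, databases, VM images with sync=standard); async writes still go through the normal transaction groups.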
I was wondering, with the setup you mention, how high are your context switches? I mean, what is your typical average context switch rate, and what are your highest context switch peaks (as seen in iostat)?

Best,
M.

-------- Original Message --------
Subject: Re: [Gluster-users] Production cluster planning
Local Time: October 26, 2016 10:31 PM
UTC Time: October 26, 2016 8:31 PM
From: lindsay.mathieson at gmail.com
To: Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com>, gluster-users <gluster-users at gluster.org>
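As a quick way to sample those numbers: on Linux, context switches are reported by `vmstat` (the "cs" column) and `sar -w` rather than iostat, and the raw counter lives in /proc/stat. A minimal sketch, assuming a standard Linux /proc layout:

```shell
# Cumulative context switches since boot:
grep '^ctxt' /proc/stat

# Rough context switches per second, sampled over one second
# (vmstat 1 shows the same thing in its "cs" column):
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "ctx switches/sec: $((c2 - c1))"
```

For peaks over time, `sar -w` (from the sysstat package) keeps historical per-interval averages that make spotting spikes easier than eyeballing vmstat.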
2016-10-26 22:31 GMT+02:00 Lindsay Mathieson <lindsay.mathieson at gmail.com>:
> Yah, RAID10.
>
> - Two nodes with 4 WD 3TB REDs

I really hate RAID10. I'm evaluating two RAIDZ2 vdevs on each Gluster node (12 disks: 6+6, one set per RAIDZ2), or one huge RAIDZ3 with 12 disks. The biggest drawback with RAIDZ is that it's impossible to add disks to an existing vdev, so I would have to start immediately with all 12 disks (in the RAIDZ3 case). Probably two RAIDZ2s of 6 disks each is enough.

> - One node with 2 WD 3TB REDs and 6 * 600GB SAS Velociraptors

Velociraptors: are they still around? I heard they went EOL a couple of years ago.
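The two layouts being weighed can be sketched as follows (pool and disk names are placeholders; this is an illustration, not the poster's actual command):

```shell
# Option A: one pool with two 6-disk RAIDZ2 vdevs (6+6).
# ZFS stripes across the two vdevs; each vdev tolerates 2 disk failures.
zpool create tank \
    raidz2 sda sdb sdc sdd sde sdf \
    raidz2 sdg sdh sdi sdj sdk sdl

# Option B: one 12-disk RAIDZ3 vdev (tolerates 3 failures, less IOPS):
# zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
```

The "can't add disks" limitation applies to an existing RAIDZ vdev, not the pool: with the 6+6 layout you could still grow the pool later by adding another whole RAIDZ2 vdev (`zpool add tank raidz2 ...`), which is one practical argument for Option A over a single 12-disk RAIDZ3.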