One of my Gluster 3.7.13 clusters is on two nodes only, each with three 1TB Samsung SSD Pro drives in RAID 5. It has already crashed twice because of brownouts and blackouts, and it holds production VMs, about 1.3TB.

It never got split-brain and it healed quickly. Can we say 3.7.13 on two nodes with SSDs is rock solid, or were we just lucky?

My other Gluster 3.7.13 cluster is on three nodes, but one node never came up (an old ProLiant server that wants to retire). It uses RAID 5 with a combination of SSD and SSHD (Seagate laptop hybrid drives, lol). It never healed about 586 occurrences, but there's no split-brain there either, and the VMs are intact too, working fine and fast.

Ah, and we never turned on caching on the array. ESX might not come up right away; you need to go into setup first to make it work and restart, then go into the array setup (HP array, F8) and turn off caching. Then ESX finally boots up.
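For anyone wanting to verify heal and split-brain status the way described above, the standard Gluster CLI covers it (a sketch, assuming a volume named `gv0`; substitute your own volume name):

```shell
# List entries on each brick that still need healing
gluster volume heal gv0 info

# List only the entries that are actually in split-brain
gluster volume heal gv0 info split-brain
```

If the first command keeps showing the same entries across runs (like the 586 occurrences mentioned), the files are stuck pending heal even though no split-brain is reported.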
On 8/3/2016 11:13 AM, Leno Vo wrote:
> One of my Gluster 3.7.13 clusters is on two nodes only, each with three
> 1TB Samsung SSD Pro drives in RAID 5. It has already crashed twice
> because of brownouts and blackouts, and it holds production VMs, about 1.3TB.
>
> It never got split-brain and it healed quickly. Can we say 3.7.13 on two
> nodes with SSDs is rock solid, or were we just lucky?
>
> My other Gluster 3.7.13 cluster is on three nodes, but one node never came
> up (an old ProLiant server that wants to retire). It uses RAID 5 with a
> combination of SSD and SSHD (Seagate laptop hybrid drives, lol). It never
> healed about 586 occurrences, but there's no split-brain there either, and
> the VMs are intact too, working fine and fast.
>
> Ah, and we never turned on caching on the array. ESX might not come up
> right away; you need to go into setup first to make it work and restart,
> then go into the array setup (HP array, F8) and turn off caching. Then
> ESX finally boots up.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

I would say you are very lucky. I would not use anything less than replica 3 in production.

Ted Miller
Elkhart, IN, USA
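For reference, a replica 3 volume like Ted recommends is created along these lines (a sketch only; `gv0`, the hostnames, and the brick paths are placeholder names):

```shell
# 3-way replicated volume across three nodes: every file has three
# full copies, so any single node can fail without losing quorum.
gluster volume create gv0 replica 3 \
    host1:/data/brick1/gv0 \
    host2:/data/brick1/gv0 \
    host3:/data/brick1/gv0
gluster volume start gv0
```

If a third full data copy is too expensive, an arbiter brick (`replica 3 arbiter 1`) still gives the quorum benefits of three nodes while storing file data on only two of them.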
My mistake: the corruption happened after 6 hours. Some VMs with sharding won't heal, but there's no split-brain....

On Wednesday, August 3, 2016 11:13 AM, Leno Vo <lenovolastname at yahoo.com> wrote:
> One of my Gluster 3.7.13 clusters is on two nodes only, each with three
> 1TB Samsung SSD Pro drives in RAID 5. It has already crashed twice
> because of brownouts and blackouts, and it holds production VMs, about 1.3TB.
>
> It never got split-brain and it healed quickly. Can we say 3.7.13 on two
> nodes with SSDs is rock solid, or were we just lucky?
>
> My other Gluster 3.7.13 cluster is on three nodes, but one node never came
> up (an old ProLiant server that wants to retire). It uses RAID 5 with a
> combination of SSD and SSHD (Seagate laptop hybrid drives, lol). It never
> healed about 586 occurrences, but there's no split-brain there either, and
> the VMs are intact too, working fine and fast.
>
> Ah, and we never turned on caching on the array. ESX might not come up
> right away; you need to go into setup first to make it work and restart,
> then go into the array setup (HP array, F8) and turn off caching. Then
> ESX finally boots up.
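For context on the sharding mentioned above, the relevant volume options can be inspected with the CLI (a sketch; `gv0` is a placeholder volume name):

```shell
# Check whether sharding is enabled and what shard size is in use
gluster volume get gv0 features.shard
gluster volume get gv0 features.shard-block-size

# Sharded VM images heal shard-by-shard; shards still pending
# heal after a crash show up in the regular heal listing.
gluster volume heal gv0 info
```

One plausible reading of "won't heal but no split-brain" on a sharded volume is individual shards stuck pending heal; the base file looks fine while some of its shards never converge.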