Just upgraded my test 3-node distributed-replicate 9x2 GlusterFS cluster to 11.0, and it was a bit rough. After upgrading the first node, gluster volume status showed only the bricks on node 1, and gluster peer status showed node 1 rejecting nodes 2 and 3. After upgrading node 2 and then node 3, node 3 remained rejected. I followed the docs for resolving a rejected peer, i.e. cleaning out /var/lib/glusterd except for the glusterd.info file, and was then able to peer probe and get node 3 back into the cluster.

However, the FUSE glusterfs client now oddly reports the volume as only 1.1TB, versus the 2.5TB it showed before (9 x 280GB disks). Also, the glusterfsd processes seem to crash under load testing just as much as they did on 10, and this time the crashes left unhealable files, which I'd never seen on 10. I only resolved that by running rm -rf on the whole testing directory tree.
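For reference, this is roughly the rejected-peer recovery sequence I ran on node 3, per the docs. It's a sketch, not a verbatim transcript: the peer hostname "node1" is a placeholder, and it assumes the standard /var/lib/glusterd layout and systemd service name.

```shell
# On the rejected node (node 3). "node1" stands in for any healthy peer.
systemctl stop glusterd

# Wipe glusterd state but keep glusterd.info, which holds this node's UUID
cd /var/lib/glusterd
find . -mindepth 1 ! -name glusterd.info -delete

systemctl start glusterd

# Re-probe a healthy peer so the volume/peer config syncs back over
gluster peer probe node1
systemctl restart glusterd
gluster peer status
```

After the final restart, node 3 showed up as "Peer in Cluster (Connected)" again and volume status listed all bricks.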