Hi,

We run gluster as the storage solution for our Owncloud-based sync and share service. At the moment we have about 30 million files in the system, which add up to a little more than 30 TB. As you may expect, most of these files are very small, i.e. in the 100 KB ballpark. For about a year everything ran perfectly fine. We run 3.6.2, by the way.

Now we are trying to commission new hardware. We have done this by adding the new nodes to our cluster and using the add-brick and remove-brick procedure to migrate the data to the new nodes. In a week we have migrated only 8.5 TB this way. What are we doing wrong here? Is there a way to improve gluster performance on small files?

I have another question: if you want to set up a gluster volume that will contain lots of very small files, what would be good practice in terms of configuration, sizes of bricks relative to memory and number of cores, number of bricks per node, etc.?

Best regards and thanks in advance,
Ron
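
P.S. In case it helps to see the exact steps: the migration was done roughly as sketched below (this assumes a plain distribute volume; the volume name "ocvol", the hostnames and the brick paths are placeholders, not our real layout):

    # add the new node and its brick to the volume
    gluster peer probe newnode01
    gluster volume add-brick ocvol newnode01:/export/brick1

    # drain an old brick onto the remaining bricks, watch progress, then remove it
    gluster volume remove-brick ocvol oldnode01:/export/brick1 start
    gluster volume remove-brick ocvol oldnode01:/export/brick1 status
    gluster volume remove-brick ocvol oldnode01:/export/brick1 commit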