Excerpts from Strahil Nikolov's message of 2023-03-21 00:27:58 +0000:
> Generally, the recommended approach is to have 4TB disks and no more
> than 10-12 per HW RAID.

what kind of raid configuration and brick size do you recommend here?

> Of course, it's not always possible but a
> resync of a failed 14 TB drive will take eons.

right, that is my concern too. but with raid you tend to get even larger
bricks. i have the impression that a full brick replacement is to be
avoided. on the other hand i saw recommendations to reset a brick when
the filesystem is damaged. isn't that the equivalent of a full brick
replacement?

> What kind of workload do you have ?

the primary data is photos. we get an average of 50000 new files per
day, with a peak of 7 to 8 times as much during christmas.

gluster has always been able to keep up with that; it is only during
raid resyncs or checks that the server load sometimes rises enough to
cause issues. in other words, our primary failure point is hardware:
years ago we had problems with bad disks, now it is the backplane,
which is forcing us to rebuild gluster from scratch for the third time.

i was really hoping to build a system that just grows and doesn't force
us to move the whole (continuously growing) dataset to new servers. how
should we build servers to actually last?

greetings, martin.
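p.s.: to make sure we are talking about the same thing, by "reset a
brick" i mean the reset-brick sequence from the gluster docs. a rough
sketch of what i have in mind (volume name, hostname and brick path are
only placeholders, not our real setup):

    # take the damaged brick out of the volume
    gluster volume reset-brick myvol server1:/bricks/brick1 start

    # ... replace the disk / recreate the filesystem on the brick ...

    # bring the now-empty brick back under the same name
    gluster volume reset-brick myvol server1:/bricks/brick1 \
        server1:/bricks/brick1 commit force

    # watch self-heal repopulate it from the other replicas
    gluster volume heal myvol info

if i understand correctly, self-heal then has to copy the entire brick
contents from the remaining replicas, which is why it looks like a full
brick replacement to me.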
Hi,

On Tue, 21 Mar 2023 at 23:36, Martin Bähr <mbaehr+gluster at realss.com>
wrote:
> the primary data is photos. we get an average of 50000 new files per
> day, with a peak of 7 to 8 times as much during christmas.
>
> gluster has always been able to keep up with that, only when raid resync
> or checks happen the server load sometimes increases to cause issues.

Interesting, we have a similar workload: hundreds of millions of images,
mostly small files, and especially on weekends with high traffic the
load+iowait is really heavy. The same happens when an hdd fails or
during a raid check.

Our hardware: 10x 10TB hdds -> 5x raid1, each raid1 is a brick, replica 3
setup, about 40TB of data. Well, the bricks are bigger than recommended...

Sooner or later we will have to migrate that stuff and will use nvme
drives for it, either 3.5TB or bigger ones. Those should be faster...
*fingerscrossed*

regards, Hubert
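P.S.: in case it helps to picture the layout, here is roughly how one of
the replica-3 subvolumes could be built. Hostnames, device names, mount
points and the choice of xfs are just placeholders for illustration, not
our exact configuration:

    # one raid1 pair becomes one brick (5 such pairs per server)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.xfs -i size=512 /dev/md0
    mkdir -p /gluster/brick1
    mount /dev/md0 /gluster/brick1

    # the same brick path on three servers forms one replica-3 subvolume
    gluster volume create images replica 3 \
        server1:/gluster/brick1/data \
        server2:/gluster/brick1/data \
        server3:/gluster/brick1/data
    gluster volume start images

Repeating that for the other four raid1 pairs (via add-brick) ends up as
a 5 x 3 distributed-replicate volume.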