Displaying 4 results from an estimated 4 matches for "gls2".
2023 Mar 26
1
hardware issues and new server advice
...ith more disks for the raids, use several (not the built-in ones) controllers.
Well, we have to take what our provider (Hetzner) offers: SATA HDDs
or SATA/NVMe SSDs.
Volume Name: workdata
Type: Distributed-Replicate
Number of Bricks: 5 x 3 = 15
Bricks:
Brick1: gls1:/gluster/md3/workdata
Brick2: gls2:/gluster/md3/workdata
Brick3: gls3:/gluster/md3/workdata
Brick4: gls1:/gluster/md4/workdata
Brick5: gls2:/gluster/md4/workdata
Brick6: gls3:/gluster/md4/workdata
etc.
Below are the volume settings.
Each brick is a sw RAID1 (made of 10TB HDDs). File access to the
backends is pretty slow, even w...
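For reference, a 5 x 3 distributed-replicate volume with the brick layout listed above could have been created roughly as follows. This is a sketch only: the listing is truncated after Brick6, so the md5-md7 device names are assumed, and the command is printed rather than executed.

```shell
# Sketch: build the brick list for a 5 x 3 distributed-replicate volume.
# md5-md7 are assumed names; the original listing is truncated ("etc.").
BRICKS=""
for md in md3 md4 md5 md6 md7; do    # one sw RAID1 per md device, 5 per server
  for host in gls1 gls2 gls3; do     # each consecutive group of 3 bricks is a replica set
    BRICKS="$BRICKS $host:/gluster/$md/workdata"
  done
done
# Print the create command instead of running it:
echo gluster volume create workdata replica 3$BRICKS
```

With `replica 3`, Gluster groups every three consecutive bricks into one replica set, which is why the hosts alternate gls1/gls2/gls3 within each md device.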
2005 Jan 20
1
Windows Front end-crash error
Dear List:
First, many thanks to those who offered assistance while I constructed
code for the simulation. I think I now have code that resolves most of
the issues I encountered with memory.
While the code works perfectly for smallish datasets with small sample
sizes, it triggers a Windows error with samples of 5,000 and 250
datasets. The error is a dialogue box with the following:
"R
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
...d might be better.
current state:
- servers with 10TB hdds
- 2 hdds build up a sw raid1
- each raid1 is a brick
- so 5 bricks per server
- Volume info (complete below):
Volume Name: workdata
Type: Distributed-Replicate
Number of Bricks: 5 x 3 = 15
Bricks:
Brick1: gls1:/gluster/md3/workdata
Brick2: gls2:/gluster/md3/workdata
Brick3: gls3:/gluster/md3/workdata
Brick4: gls1:/gluster/md4/workdata
Brick5: gls2:/gluster/md4/workdata
Brick6: gls3:/gluster/md4/workdata
etc.
- workload: the (un)famous "lots of small files" setting
- currently 70% of the volume is used: ~32TB
- file size:...
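The poster's actual volume settings are truncated above; for a "lots of small files" workload like this one, these are GlusterFS tunables commonly discussed in small-file tuning guides. The values are illustrative only, not the poster's configuration.

```shell
# Illustrative small-file tunables for a volume named "workdata".
# Values are examples; the thread's real settings are not shown above.
gluster volume set workdata performance.parallel-readdir on
gluster volume set workdata performance.cache-size 1GB
gluster volume set workdata network.inode-lru-limit 200000
gluster volume set workdata performance.md-cache-timeout 600
```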
2023 Mar 24
2
hardware issues and new server advice
Actually,
a pure NVMe-based volume would be a waste of money. Gluster excels when you have more servers and clients to consume that data.
I would choose LVM cache (NVMes) + HW RAID10 of SAS 15K disks to cope with the load. At least, if you decide to go with more disks for the raids, use several (not the built-in) controllers.
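The LVM cache + HW RAID10 suggestion could look roughly like this. All device, VG, and LV names here are hypothetical; this is a sketch of the approach, not a tested recipe for the poster's servers.

```shell
# Assumed names: vg0 is the VG on the HW RAID10 holding the data LV "datalv";
# /dev/nvme0n1 is the NVMe to be used as cache. All names are hypothetical.
pvcreate /dev/nvme0n1
vgextend vg0 /dev/nvme0n1
# Create a cache volume on the NVMe:
lvcreate -L 500G -n cachevol vg0 /dev/nvme0n1
# Attach it to the data LV as a dm-cache:
lvconvert --type cache --cachevol vg0/cachevol vg0/datalv
```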
@Martin,
in order to get a more reliable setup, you will have to