Displaying 4 results from an estimated 4 matches for "gls3".
2023 Mar 26
1
hardware issues and new server advice
...everal (not the built-in ones) controllers.
Well, we have to take what our provider (hetzner) offers - SATA hdds
or sata|nvme ssds.
Volume Name: workdata
Type: Distributed-Replicate
Number of Bricks: 5 x 3 = 15
Bricks:
Brick1: gls1:/gluster/md3/workdata
Brick2: gls2:/gluster/md3/workdata
Brick3: gls3:/gluster/md3/workdata
Brick4: gls1:/gluster/md4/workdata
Brick5: gls2:/gluster/md4/workdata
Brick6: gls3:/gluster/md4/workdata
etc.
Below are the volume settings.
Each brick is a sw raid1 (made out of 10TB hdds). File access to the
backends is pretty slow, even with low system load (which reaches...
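The 5 x 3 layout above, with bricks listed in replica-sized groups so each triplet spans gls1/gls2/gls3, could be created roughly as follows. This is a hypothetical sketch from the brick list, not the poster's actual command; only the md3/md4 paths appear in the thread, the remaining triplets are implied by "etc.":

```shell
# Sketch: distributed-replicate volume, replica 3.
# Bricks are listed so that each consecutive triplet forms one replica set
# spanning all three servers (gls1, gls2, gls3).
gluster volume create workdata replica 3 \
  gls1:/gluster/md3/workdata gls2:/gluster/md3/workdata gls3:/gluster/md3/workdata \
  gls1:/gluster/md4/workdata gls2:/gluster/md4/workdata gls3:/gluster/md4/workdata
  # ...plus three more triplets to reach the full 5 x 3 = 15 bricks

gluster volume start workdata
```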
2006 Mar 16
2
Difference between weights options in lm, glm and gls.
...ormula = ys ~ Xs - 1)
Coefficients:
Xs Xsx
2.687 6.528
Degrees of Freedom: 1243 Total (i.e. Null); 1241 Residual
Null Deviance: 4490000
Residual Deviance: 506700 AIC: 11000
With weights, glm did not give the same results as lm. Why?
Also, for gls I use varFixed here.
> gls3
Generalized least squares fit by REML
Model: y ~ x
Data: NULL
Log-restricted-likelihood: -3737.392
Coefficients:
(Intercept) x
0.03104214 7.31032540
Variance function:
Structure: fixed weights
Formula: ~W
Degrees of freedom: 1243 total; 1241 residual
Residual standard error: 4...
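One likely source of the discrepancy: in lm/glm, the weights multiply the squared residuals, whereas varFixed(~W) in gls declares the error variance to be *proportional* to W. The fits should therefore agree when the lm/glm weights are the reciprocal of the declared variance covariate. A sketch of the relationship (assuming fixed, known weights; W is the variance covariate from the gls call above):

```latex
% Weighted least squares: weight w_i scales the i-th squared residual
\hat\beta = \arg\min_\beta \sum_i w_i \left( y_i - x_i^\top \beta \right)^2,
\qquad \operatorname{Var}(\varepsilon_i) = \sigma^2 / w_i .
% gls with varFixed(~W) assumes Var(eps_i) = sigma^2 * W_i,
% so the matching lm/glm specification is weights = 1/W.
```

Note also that gls fits by REML here; with fixed weights the coefficient estimates coincide with weighted least squares, but the reported residual standard error can differ from lm's.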
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
...- servers with 10TB hdds
- 2 hdds build up a sw raid1
- each raid1 is a brick
- so 5 bricks per server
- Volume info (complete below):
Volume Name: workdata
Type: Distributed-Replicate
Number of Bricks: 5 x 3 = 15
Bricks:
Brick1: gls1:/gluster/md3/workdata
Brick2: gls2:/gluster/md3/workdata
Brick3: gls3:/gluster/md3/workdata
Brick4: gls1:/gluster/md4/workdata
Brick5: gls2:/gluster/md4/workdata
Brick6: gls3:/gluster/md4/workdata
etc.
- workload: the (un)famous "lots of small files" setting
- currently 70% of the volume is used: ~32TB
- file size: few KB up to 1MB
- so there are hu...
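For small-file workloads like this one, metadata round-trips usually dominate the cost, so directory- and metadata-caching options are the usual first levers. A hedged sketch of commonly discussed volume options (option names are stock GlusterFS settings, the values are illustrative and are not taken from this thread):

```shell
# Illustrative small-file tuning for the workdata volume.
# Test each change under real load; values are examples only.
gluster volume set workdata performance.parallel-readdir on
gluster volume set workdata performance.readdir-ahead on
gluster volume set workdata features.cache-invalidation on
gluster volume set workdata performance.md-cache-timeout 600
gluster volume set workdata network.inode-lru-limit 200000
```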
2023 Mar 24
2
hardware issues and new server advice
Actually, a
pure NVME-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data.
I would choose LVM cache (NVMEs) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones) controllers.
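The LVM-cache-over-HW-RAID idea can be sketched as follows, assuming a volume group vg0 containing the RAID10 origin LV data, with the NVMe already added to the VG as a PV. Device names, sizes and LV names here are hypothetical, not from the thread:

```shell
# Sketch per lvmcache(7): carve a cache pool out of the NVMe PV,
# then attach it to the slow RAID10 origin LV.
lvcreate --type cache-pool -L 400G -n nvme_cache vg0 /dev/nvme0n1
lvconvert --type cache --cachepool vg0/nvme_cache vg0/data
# writethrough is the safer cache mode; writeback trades safety for speed:
#   lvconvert --type cache --cachemode writeback --cachepool vg0/nvme_cache vg0/data
```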
@Martin,
in order to get a more reliable setup, you will have to