Displaying 4 results from an estimated 4 matches for "60tb".
2017 Dec 28
1
Adding larger bricks to an existing volume
I have a 10x2 distributed-replicate volume running Gluster 3.8.
Each of my bricks is about 60TB in size (6TB drives, RAID 6, 10+2).
I am running out of storage, so I intend to add servers with larger
8TB drives.
My new bricks will be 80TB in size. I will make sure the replica of
each larger brick matches it in size.
Will gluster place more files on the larger bricks? Or will I have wasted
spac...
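Growing a replica-2 volume like this is normally done by adding bricks in matched replica pairs and then rebalancing; newer Gluster releases can also weigh bricks by free space during rebalance (see the cluster.weighted-rebalance option). A minimal sketch, with hypothetical volume name, hostnames, and brick paths:

    # Add one new replica pair (volume name, hostnames and paths are placeholders):
    gluster volume add-brick myvol replica 2 \
        newsrv1:/bricks/b80 newsrv2:/bricks/b80
    # Spread existing data onto the new, larger bricks:
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status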
2012 Feb 22
2
What about ZFS ?
Hi everybody,
I'm looking for information about using GlusterFS with ZFS. I have seen reports of a sort of incompatibility between the two technologies because of an unsupported xattr feature in ZFS.
What is the latest news about this?
Thank you in advance.
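On ZFS-on-Linux the xattr behaviour is controlled per dataset, and extended-attribute support is easy to probe directly. A minimal check, assuming a hypothetical dataset tank/gluster mounted at /tank/gluster:

    # Store xattrs as system attributes (commonly recommended for Gluster on ZoL):
    zfs set xattr=sa tank/gluster
    # Probe that extended attributes actually work on the mount:
    touch /tank/gluster/probe
    setfattr -n user.test -v ok /tank/gluster/probe
    getfattr -n user.test /tank/gluster/probe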
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
> ...sks (10TB hdd)? The heal processes in the 5xraid1 scenario
> seem faster. Just out of curiosity...
It should be, since the bricks are smaller. But given you're using
replica 3, I don't understand why you're also using RAID1: for each
10TB of user-facing capacity you're keeping 60TB of data on disk.
I'd ditch local RAIDs to double the space available. Unless you
desperately need the extra read performance.
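The 6x figure is just the two redundancy layers multiplied together; a quick sanity check of the numbers above:

    # RAID1 keeps 2 copies per brick; replica 3 keeps 3 bricks per file.
    # Raw disk per unit of user-facing capacity: 2 * 3 = 6.
    echo $(( 10 * 2 * 3 ))   # 60 (TB on disk for 10TB user-facing)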
> Options Reconfigured:
I'll have a look at the options you use. Maybe something can be useful
in our case. Tks :)
--
Diego Zuccato
DIFA - Dip. di Fisica e A...
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
Hello there,
as Strahil suggested, a separate thread might be better.
current state:
- servers with 10TB HDDs
- 2 HDDs form a software RAID1
- each RAID1 is a brick
- so 5 bricks per server
- Volume info (complete below):
Volume Name: workdata
Type: Distributed-Replicate
Number of Bricks: 5 x 3 = 15
Bricks:
Brick1: gls1:/gluster/md3/workdata
Brick2: gls2:/gluster/md3/workdata
Brick3:
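For reference, a volume of this shape is created by listing bricks in replica triplets. The brick list above is cut off, so the following is only a sketch with assumed names: the gls3 hostname and the md4 path are guesses from the naming pattern, and only 2 of the 5 triplets are shown.

    # Bricks are consumed in groups of 3 (the replica count):
    gluster volume create workdata replica 3 \
        gls1:/gluster/md3/workdata gls2:/gluster/md3/workdata gls3:/gluster/md3/workdata \
        gls1:/gluster/md4/workdata gls2:/gluster/md4/workdata gls3:/gluster/md4/workdata
    gluster volume start workdata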