search for: 14tb

Displaying 20 results from an estimated 24 matches for "14tb".

2018 Jan 09
2
Creating cluster replica on 2 nodes 2 bricks each.
Hello. We are trying to set up Gluster for our project/scratch storage HPC machine using replicated mode with 2 nodes, 2 bricks each (14tb each). Our goal is to have a replicated system between nodes 1 and 2 (A bricks) and then add 2 more bricks (B bricks) from the 2 nodes, so we can have a total of 28tb in replicated mode. Node 1 [ (Brick A) (Brick B) ] Node 2 [ (Brick A) (Brick B) ] ----------------------------------...
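For context, a minimal sketch of the volume this thread describes, assuming the brick hosts and paths that appear later in the thread (gluster01ib, gluster02ib, /gdata/brickN/scratch):

    # Replicated volume across the two A bricks (replica pairs are
    # formed in the order the bricks are listed):
    gluster volume create scratch replica 2 \
        gluster01ib:/gdata/brick1/scratch gluster02ib:/gdata/brick1/scratch
    gluster volume start scratch

    # Later, add the B bricks to grow it into a 2 x 2 distributed-replicate volume:
    gluster volume add-brick scratch \
        gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch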
2018 Jan 10
0
Creating cluster replica on 2 nodes 2 bricks each.
...e output of the *gluster volume info* command. Thanks, Nithya On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hello > > We are trying to set up Gluster for our project/scratch storage HPC machine > using replicated mode with 2 nodes, 2 bricks each (14tb each). > > Our goal is to have a replicated system between nodes 1 and 2 (A > bricks) and then add 2 more bricks (B bricks) from the 2 nodes, so we > can have a total of 28tb in replicated mode. > > Node 1 [ (Brick A) (Brick B) ] > Node 2 [ (Brick A) (Brick B) ] >...
2018 Apr 07
0
Turn off replication
...gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes Regards, Karthik On Fri, Apr 6, 2018 at 11:39 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hi Karthik > > This is our configuration: it is 2x2 = 4, all replicated, and each > brick has 14tb. We have 2 nodes, A and B, each one with bricks 1 and 2. > > Node A (replicated A1 (14tb) and B1 (14tb)); same with node B (replicated > A2 (14tb) and B2 (14tb)). > > Do you think we need to degrade the node first before removing it? I > believe the same copy of data is on all 4 br...
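Before removing or degrading anything, the current layout can be confirmed from either node; a sketch, assuming the volume is named scratch:

    gluster volume info scratch     # expect "Type: Distributed-Replicate" and "Number of Bricks: 2 x 2 = 4"
    gluster volume status scratch   # check that all four bricks are online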
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
...anks, > Nithya > > On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu <mailto:josesanc at carc.unm.edu>> wrote: > Hello > > We are trying to set up Gluster for our project/scratch storage HPC machine using replicated mode with 2 nodes, 2 bricks each (14tb each). > > Our goal is to have a replicated system between nodes 1 and 2 (A bricks) and then add 2 more bricks (B bricks) from the 2 nodes, so we can have a total of 28tb in replicated mode. > > Node 1 [ (Brick A) (Brick B) ] > Node 2 [ (Brick A) (Brick B) ] > ----...
2018 Apr 12
2
Turn off replication
...e volume. The previous remove-brick command will make the volume plain distribute. Then simply adding the bricks without specifying any "#" will expand the volume as a plain distribute volume. > > I'm planning on moving ahead with these changes in a few days. At this point each > brick has 14tb, and adding brick 1 from nodes A and B, I have a total of > 28tb. After doing the whole process (removing and adding bricks), I should be > able to see a total of 56TB, right? > Yes, after all these steps you will have 56TB in total. After adding the bricks, do a volume rebalance, so that the data whi...
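A sketch of the remove-brick / add-brick / rebalance sequence being discussed, with brick names assumed from the layout described earlier in these threads; note the freed bricks must be wiped before being re-added:

    # Drop the replica count from 2 to 1, keeping one copy of the data:
    gluster volume remove-brick scratch replica 1 \
        gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force

    # Re-add the freed (and wiped) bricks with no "replica #", so the
    # volume grows as plain distribute:
    gluster volume add-brick scratch \
        gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch

    # Spread the existing data across all bricks:
    gluster volume rebalance scratch start
    gluster volume rebalance scratch status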
2014 May 28
3
The state of xfs on CentOS 6?
We're looking at getting an HBR (that's a technical term: honkin' big RAID). What I'm considering is, rather than chopping it up into 14TB or 16TB filesystems, using xfs for really big filesystems. The question that's come up is: what's the state of xfs on CentOS 6? I've seen a number of older threads describing problems with it - has that mostly been resolved? How does it work if we have some *huge* files, and lots and lots...
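For reference, creating and mounting a multi-TB XFS filesystem on CentOS 6 generally looks like the sketch below (device and mount point are placeholders); the inode64 mount option matters at this scale, since without it inodes are confined to the first 1TB of the device:

    mkfs.xfs /dev/sdb1                 # mkfs.xfs defaults scale well to large devices
    mount -o inode64 /dev/sdb1 /data   # allow inode allocation across the whole filesystem
    xfs_info /data                     # verify the resulting geometry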
2023 Mar 18
1
hardware issues and new server advice
...ster replication instead (compensating with three replicas per brick instead of two). Our options are: 6 of these: AMD Ryzen 5 Pro 3600 - 6c/12t - 3.6GHz/4.2GHz 32GB - 128GB RAM 4 or 6 x 6TB HDD SATA 6Gbit/s or three of these: AMD Ryzen 7 Pro 3700 - 8c/16t - 3.6GHz/4.4GHz 32GB - 128GB RAM 6 x 14TB HDD SAS 6Gbit/s I would configure 5 bricks on each server (leaving one disk as a hot spare). The engineers prefer the second option due to the architecture and SAS disks; it is also cheaper. I am concerned that 14TB disks will take too long to heal if one ever has to be replaced, and would favor th...
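Heal progress after a disk replacement can at least be watched, so the 6TB-vs-14TB tradeoff can be measured rather than guessed; a sketch, assuming a volume named vol0:

    gluster volume heal vol0 info                    # entries still pending heal, per brick
    gluster volume heal vol0 statistics heal-count   # summary count of pending entries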
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
...nd. > > Thanks, > Nithya > > On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote: > >> Hello >> >> We are trying to set up Gluster for our project/scratch storage HPC >> machine using replicated mode with 2 nodes, 2 bricks each (14tb each). >> >> Our goal is to have a replicated system between nodes 1 and 2 >> (A bricks) and then add 2 more bricks (B bricks) from the 2 nodes, so >> we can have a total of 28tb in replicated mode. >> >> Node 1 [ (Brick A) (Brick B) ] >> Node 2...
2018 Apr 25
0
Turn off replication
...to the volume. The previous remove-brick command will make the volume plain distribute. Then simply adding the bricks without specifying any "#" will expand the volume as a plain distribute volume. > > I'm planning on moving ahead with these changes in a few days. At this point each brick has 14tb, and adding brick 1 from nodes A and B, I have a total of 28tb. After doing the whole process (removing and adding bricks), I should be able to see a total of 56TB, right? > Yes, after all these steps you will have 56TB in total. > After adding the bricks, do a volume rebalance, so that the data which we...
2018 Apr 25
2
Turn off replication
...olume. The previous remove-brick command will make the volume plain distribute. Then simply adding the bricks without specifying any "#" will expand the volume as a plain distribute volume. >> >> I'm planning on moving ahead with these changes in a few days. At this point each brick has 14tb, and adding brick 1 from nodes A and B, I have a total of 28tb. After doing the whole process (removing and adding bricks), I should be able to see a total of 56TB, right? >> Yes, after all these steps you will have 56TB in total. >> After adding the bricks, do a volume rebalance, so that the data...
2018 Apr 27
0
Turn off replication
...brick command will make the volume > plain distribute. Then simply adding the bricks without specifying any "#" > will expand the volume as a plain distribute volume. >> >> >> I'm planning on moving ahead with these changes in a few days. At this point each >> brick has 14tb, and adding brick 1 from nodes A and B, I have a total of >> 28tb. After doing the whole process (removing and adding bricks), I should be >> able to see a total of 56TB, right? > > Yes, after all these steps you will have 56TB in total. > After adding the bricks, do a volume rebalance, so...
2018 Apr 30
2
Turn off replication
...e volume >> plain distribute. Then simply adding the bricks without specifying any "#" >> will expand the volume as a plain distribute volume. >>> >>> >>> I'm planning on moving ahead with these changes in a few days. At this point each >>> brick has 14tb, and adding brick 1 from nodes A and B, I have a total of >>> 28tb. After doing the whole process (removing and adding bricks), I should be >>> able to see a total of 56TB, right? >> >> Yes, after all these steps you will have 56TB in total. >> After adding the bricks, do...
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
...atch >> > > > Now when I try to mount it, I still get only 14 tb and not 28? Am I doing > something wrong? Also, when I start/stop services, the cluster goes back to > replicated mode from distributed-replicate. > > If the fuse mount sees only 2 bricks, that would explain the 14TB. > > gluster01ib:/scratch 14T 34M 14T 1% /mnt/gluster_test > > -- Gluster mount log file -- > > [2018-01-11 16:06:44.963043] I [MSGID: 114046] > [client-handshake.c:1216:client_setvolume_cbk] 0-scratch-client-1: > Connected to scratch-client-1, attached to remote vol...
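One way to narrow down this kind of mismatch is to compare the server-side brick list with what the client actually attached to; a sketch using the names from this thread (the client log path is an assumption; glusterfs names the log after the mount point):

    gluster volume info scratch   # should list all 4 bricks, Type: Distributed-Replicate
    df -h /mnt/gluster_test       # should show the distributed total, not a single replica pair
    grep client_setvolume /var/log/glusterfs/mnt-gluster_test.log   # bricks the client connected to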
2018 May 02
0
Turn off replication
...ous remove-brick command will make the volume > plain distribute. Then simply adding the bricks without specifying any "#" > will expand the volume as a plain distribute volume. > > > > I'm planning on moving ahead with these changes in a few days. At this point each > brick has 14tb, and adding brick 1 from nodes A and B, I have a total of > 28tb. After doing the whole process (removing and adding bricks), I should be > able to see a total of 56TB, right? > > > Yes, after all these steps you will have 56TB in total. After adding the bricks, do a volume rebalance, so th...
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
...>> >> On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu <mailto:josesanc at carc.unm.edu>> wrote: >> Hello >> >> We are trying to set up Gluster for our project/scratch storage HPC machine using replicated mode with 2 nodes, 2 bricks each (14tb each). >> >> Our goal is to have a replicated system between nodes 1 and 2 (A bricks) and then add 2 more bricks (B bricks) from the 2 nodes, so we can have a total of 28tb in replicated mode. >> >> Node 1 [ (Brick A) (Brick B) ] >> Node 2 [ (Brick A) (...
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
...rick4: gluster02ib:/gdata/brick2/scratch > Now when I try to mount it, I still get only 14 tb and not 28? Am I doing something wrong? Also, when I start/stop services, the cluster goes back to replicated mode from distributed-replicate. If the fuse mount sees only 2 bricks, that would explain the 14TB. gluster01ib:/scratch 14T 34M 14T 1% /mnt/gluster_test -- Gluster mount log file -- [2018-01-11 16:06:44.963043] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-scratch-client-1: Connected to scratch-client-1, attached to remote volume '/gdata/brick1/scratch'....
2013 Nov 04
5
[OT] Building a new backup server
Guys, I was handed a cheap OEM server with a 120 GB SSD and 10 x 4 TB SATA disks to build a backup server for our data backups. It's built around an Asus Z87-A that unfortunately seems to have problems with anything Linux. Anyway, BackupPC is my preferred backup solution, so I went ahead and installed another favourite, CentOS 6.4 - and failed. The raid controller is a Highpoint RocketRAID
2012 May 25
1
Dedup FS on 5.8
Hey folks, I have a 14TB disk array that I want to use for rsnapshot backups, and am considering putting a dedup FS onto it. I know I've got about a TB of duplication, at least, and it is not easy to remove manually. Google lands me LessFS and SDFS as the prime candidates. thanks, -Alan -- "Don't eat anything...
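Worth noting that rsnapshot itself already hard-links files that are unchanged between snapshots, which removes one large class of duplication before any dedup filesystem is involved. A minimal rsnapshot.conf sketch with placeholder paths (the file requires tabs, not spaces, between fields):

    snapshot_root   /backup/array/
    retain  daily   7
    retain  weekly  4
    backup  root@server:/home/  server/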
2013 Feb 06
0
I/O hanging while hosting Postgres database
I'm seeing a condition on FreeBSD 9.1 (built October 24th) where I/O seems to hang on any local zpools after several hours of hosting a large-ish Postgres database. The database occupies about 14TB of a 38TB zpool with a single SSD ZIL. The OS is on a ZFS boot disk. The system also has 24GB of physical memory. smartmontools reports no errors on any disks attached to the system, and IPMI reports that all temperatures, CPU voltages, and fan speeds are normal. The database has been gradually increa...
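Not from the thread, but a common first pass for Postgres on ZFS is to check pool health and align the dataset record size with Postgres' 8kB page size; pool and dataset names below are placeholders:

    zpool status -v tank                 # look for degraded vdevs or stalled resilvers
    zfs get recordsize tank/pgdata
    zfs set recordsize=8k tank/pgdata    # match the Postgres page size; affects new writes only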
2013 Nov 14
4
First Time Setting up RAID
Arch = x86_64 CentOS-6.4 We have a cold server with 32GB RAM and 8 x 3TB SATA drives mounted in hotswap cells. The intended purpose of this system is as an ERP application and DBMS host. The ERP application will likely eventually have web access, but at the moment only dedicated client applications can connect to it. I am researching how best to set this system up for use as a production host
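One plausible starting point for eight hot-swap drives hosting a DBMS is software RAID10, sketched below with assumed device names:

    mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]   # RAID10 over 8 drives
    mkfs.xfs /dev/md0
    mdadm --detail --scan >> /etc/mdadm.conf    # persist the array across reboots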