search for: 50tb

Displaying 20 results from an estimated 34 matches for "50tb".

2018 May 26
2
glusterfs as vmware datastore in production
> Hi, > > Does anyone have glusterfs as a vmware datastore working in production in a > real-world case? How do you serve the glusterfs cluster? As iscsi, NFS? > > Hi, I am using glusterfs 3.10.x as a VMware ESXi 5.5 NFS DataStore. Our environment is - 4 node supermicro servers (each 50TB, NL SAS 4TB drives used, LSI 9260-8i) - 100TB service volume in total - 10G storage network and service network (for NFS) - VMware / Linux / IBM AIX clients. Currently, VM images on GlusterFS are used for data (backup), not OS images. There have been no glusterfs issues in 10 months. Best Regards ...
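
As background for the setup above, a minimal sketch of how a gluster volume can be exposed over NFS and added as an ESXi datastore; the volume name vmstore and the host name are illustrative, and this assumes the era-appropriate built-in gNFS server rather than NFS-Ganesha:

    # allow the volume to be served by gluster's built-in NFS server
    gluster volume set vmstore nfs.disable off

    # on the ESXi host, mount it as an NFS datastore
    esxcli storage nfs add -H gluster-node1 -s /vmstore -v vmstore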
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3-node gluster setup where each node has a 100TB brick (total 300TB, usable 100TB due to replica factor 3). We would like to expand the existing volume by adding another 3 nodes, but each will only have a 50TB brick. I think this is possible, but will it affect gluster performance, and if so, by how much? Assuming we run a rebalance with the force option, will this distribute the existing data proportionally? I.e., if the current 100TB volume holds 60TB, will it distribute 20TB to the new set of servers? Than...
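
A hedged sketch of the expansion being asked about, assuming a replica-3 volume named vol0 and three new nodes n4, n5, n6 (all names illustrative); bricks must be added in multiples of the replica count, so the three new bricks go in as one additional replica set, and a rebalance then redistributes existing files:

    # add one 50TB brick from each new node as a second replica-3 set
    gluster volume add-brick vol0 n4:/bricks/b1 n5:/bricks/b1 n6:/bricks/b1

    # spread existing data across both replica sets
    gluster volume rebalance vol0 start force
    gluster volume rebalance vol0 status

With size-weighted rebalancing (the default in the 3.x releases of that era, worth verifying for the version in use), a 100TB set and a 50TB set would be expected to take data roughly 2:1, which is consistent with the poster's ~40TB/~20TB estimate.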
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
...date to CentOS 7.4 and gluster 3.12.4, "df" correctly showed the size for the volume as 233TB. After the update, we added 2 bricks with 1 on each server, but the output of "df" still only listed 233TB for the volume. We added 2 more bricks, again with 1 on each server. The output of "df" now shows 350TB, but the aggregate of 8 x 59TB bricks should be ~466TB. Configuration 2: A distributed, replicated volume with 9 bricks on each server for a total of ~350TB of storage. After the server update to RHEL 6.9 and gluster 3.12.4, the volume now shows as having 50TB with "df". No changes were made to th...
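
When df and the brick aggregate disagree like this, one way to narrow it down is to compare what gluster itself reports per brick against the client mount; a minimal sketch with an illustrative volume name:

    # per-brick capacity as gluster sees it
    gluster volume status myvol detail

    # what the client mount reports
    df -h /mnt/myvol

Reports from this era were reportedly tied to shared-brick-count miscalculations in the generated brick volfiles when multiple bricks share a filesystem, though that attribution should be verified against the thread's actual resolution.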
2012 Nov 14
3
Using local writes with gluster for temporary storage
Hi, We have a cluster of 130 compute nodes with NAS-type central storage under gluster (3 bricks, ~50TB). When we run a large number of ocean models we can run into bottlenecks, with many jobs trying to write to our central storage. It was suggested to us that we could also use gluster to unite the disks on the compute nodes into a single "disk" to which files would be written locally. Then...
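
A minimal sketch of the suggested aggregate, assuming each compute node contributes one local directory to a plain distributed (non-replicated) volume; hostnames and paths are illustrative:

    # one brick per compute node, pure distribute
    gluster volume create scratch transport tcp \
        node001:/local/brick node002:/local/brick node003:/local/brick
    gluster volume start scratch

    # each node mounts the aggregate as a single scratch "disk"
    mount -t glusterfs node001:/scratch /scratch

For the write-locally behaviour itself, the NUFA translator (the cluster.nufa volume option in gluster releases of that era) was the usual knob for preferring the local brick, though it should be verified against the version in use.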
2013 Oct 21
1
DFS share: free space?
Hi, is it possible to use DFS and show the correct values for free space? I set up a DFS share located on filesystem1 (size 50GB) and linked shares of another server to this share (msdfs:<fs>\share): share1: 110TB; share2: 50TB; share3: 20TB. But when connecting to the DFS share, the disk size of this network drive is 50GB. Unfortunately, files larger than 50GB cannot be copied to the network drive. Any ideas? Thanks and best, Alex
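
For reference, a hedged sketch of the smb.conf side of such a setup (paths and share names illustrative); the DFS links are symlinks with an msdfs: target, and as far as I know the drive size shown for the mapped root comes from the filesystem hosting the DFS root itself, which would explain the 50GB figure:

    [global]
        host msdfs = yes

    [dfsroot]
        path = /export/dfsroot
        msdfs root = yes

    # inside the root, each link is a symlink such as:
    #   ln -s 'msdfs:fileserver1\share1' /export/dfsroot/share1

Clients that follow a referral query free space on the target server, so per-share sizes are correct once a link is traversed; only the mapping of the root itself reports the small local filesystem.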
2018 May 28
0
glusterfs as vmware datastore in production
...usterfs as vmware datastore working in > production in a real-world case? How do you serve the glusterfs > cluster? As iscsi, NFS? > > > Hi, > > I am using glusterfs 3.10.x as a VMware ESXi 5.5 NFS DataStore. > > Our environment is > - 4 node supermicro servers (each 50TB, NL SAS 4TB drives used, LSI 9260-8i) > - 100TB service volume in total > - 10G storage network and service network (for NFS) > - VMware / Linux / IBM AIX clients > > Currently, VM images on GlusterFS are used for data (backup), not OS images. > > There have been no glusterfs issues in 10 months....
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
...4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote: > We currently have a 3-node gluster setup where each node has a 100TB brick (total > 300TB, usable 100TB due to replica factor 3). > We would like to expand the existing volume by adding another 3 nodes, but > each will only have a 50TB brick. I think this is possible, but will it > affect gluster performance, and if so, by how much? Assuming we run a > rebalance with the force option, will this distribute the existing data > proportionally? I.e., if the current 100TB volume holds 60TB, will it > distribute 20TB to the new s...
2012 Nov 19
4
"upstream" Storage Server fully OSS?
...goes down all your clients (listeners) know it * they know it NOW * they know how long it takes to get it back up * High Availability as the primary concern * ability to administrate via web interface or similar by non-Linux-savvy IT staff. * ability to grow file system from 2-3TB to 20-50TB by simply adding disks and/or adding 'bricks' * clients will all be Windows computers, so files accessible by CIFS * critical application is read-only * prefer a system that would continue serving files even if the network goes down (but have not found such a system yet for Windows client...
2012 Aug 26
1
cluster.min-free-disk not working
Further to my last email, I've been trying to find out why GlusterFS is favouring one brick over another. In pretty much all of my tests gluster favours writing to the MOST full brick. This is not a good thing when the most full brick has less than 200GB free and I need to write a huge file to it. I've set cluster.min-free-disk on the volume, and it doesn't seem to have an
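
For anyone reproducing this, a minimal sketch of the option in question (volume name illustrative); note that cluster.min-free-disk only steers where *new* files are created, it never migrates existing data:

    # stop scheduling new files onto bricks with less than 10% free
    gluster volume set gv0 cluster.min-free-disk 10%

    # confirm the reconfigured option took effect
    gluster volume info gv0 | grep min-free-disk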
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
...nd gluster 3.12.4, "df" correctly showed > the size for the volume as 233TB. After the update, we added 2 bricks with > 1 on each server, but the output of "df" still only listed 233TB for the > volume. We added 2 more bricks, again with 1 on each server. The output of > "df" now shows 350TB, but the aggregate of 8 x 59TB bricks should be ~466TB. > > > > Configuration 2: A distributed, replicated volume with 9 bricks on each > server for a total of ~350TB of storage. After the server update to RHEL > 6.9 and gluster 3.12.4, the volume now shows as having 50TB with "df...
2010 Oct 19
8
Balancing LVOL fill?
Hi all, I have this server with some 50TB of disk space. It originally had 30TB on WD Greens and was filled quite full, so another storage chassis was added. Now, space problem gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2...
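
To see the per-vdev imbalance being described, a quick sketch assuming a pool named tank (illustrative):

    # capacity and allocation broken out per vdev, not just per pool
    zpool list -v tank

    # live per-vdev I/O, refreshed every 5 seconds
    zpool iostat -v tank 5

ZFS biases new writes toward vdevs with more free space but never rebalances existing data by itself; rewriting the data (for example with zfs send/receive into the same pool) is the usual workaround.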
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
...4, "df" correctly showed >> the size for the volume as 233TB. After the update, we added 2 bricks with >> 1 on each server, but the output of "df" still only listed 233TB for the >> volume. We added 2 more bricks, again with 1 on each server. The output of >> "df" now shows 350TB, but the aggregate of 8 x 59TB bricks should be ~466TB. >> >> >> >> Configuration 2: A distributed, replicated volume with 9 bricks on each >> server for a total of ~350TB of storage. After the server update to RHEL >> 6.9 and gluster 3.12.4, the volume now shows...
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
...date to CentOS 7.4 and gluster 3.12.4, "df" correctly showed the size for the volume as 233TB. After the update, we added 2 bricks with 1 on each server, but the output of "df" still only listed 233TB for the volume. We added 2 more bricks, again with 1 on each server. The output of "df" now shows 350TB, but the aggregate of 8 x 59TB bricks should be ~466TB. > > Configuration 2: A distributed, replicated volume with 9 bricks on each server for a total of ~350TB of storage. After the server update to RHEL 6.9 and gluster 3.12.4, the volume now shows as having 50TB with "df". No changes were...
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
...date to CentOS 7.4 and gluster 3.12.4, "df" correctly showed the size for the volume as 233TB. After the update, we added 2 bricks with 1 on each server, but the output of "df" still only listed 233TB for the volume. We added 2 more bricks, again with 1 on each server. The output of "df" now shows 350TB, but the aggregate of 8 x 59TB bricks should be ~466TB. Configuration 2: A distributed, replicated volume with 9 bricks on each server for a total of ~350TB of storage. After the server update to RHEL 6.9 and gluster 3.12.4, the volume now shows as having 50TB with "df". No changes were made to th...
2008 Mar 26
3
HW experience
Hi, we would like to set up a small Lustre instance. For the OSTs we are planning to use standard Dell PE1950 servers (2x QuadCore + 16GB RAM), and for the disks a JBOD (MD1000) driven by the PE1950's internal RAID controller (RAID-6). Any experience (good or bad) with such a config? Thanks, Martin
2020 Jul 02
5
[OT] Bacula offsite replication
On 01/07/20 17:13, Leroy Tennison wrote: > I realize this shouldn't happen; the file is a tgz and isn't being modified while being transmitted. This has happened maybe three times this year, and unfortunately I've just had to deal with it rather than invest the time to do the research.
2020 Jul 02
0
[OT] Bacula offsite replication
I set up drbd to replicate a ~50TB backuppc hive to the DR copy, an identical box in a different DC on the same campus, at approximately gigE speeds, and ran this for a year or two. It worked well enough but required babysitting from time to time. Both nodes were mdraid LVM logical volumes formatted as a single huge xfs on cen...
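
A hedged sketch of a DRBD resource of the kind described, two nodes each backing the device with an LVM volume (resource name, hostnames, devices, and addresses all illustrative); protocol A is DRBD's asynchronous mode, which suits a DR copy where local writes should not block on the peer:

    # /etc/drbd.d/backuppc.res
    resource backuppc {
        protocol A;                  # async: don't wait for the peer on writes
        device    /dev/drbd0;
        meta-disk internal;
        on nodea {
            disk    /dev/vg0/backuppc;
            address 10.0.0.1:7789;
        }
        on nodeb {
            disk    /dev/vg0/backuppc;
            address 10.0.0.2:7789;
        }
    }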
2020 Jul 02
1
[OT] Bacula offsite replication
...John, thank you for your answer. I had already taken DRBD into consideration, but I need to do some testing before starting. From your message it seems that this solution is no longer available. What do you use for this? Thank you in advance. On 02/07/20 10:43, John Pierce wrote: > I set up drbd to replicate a ~50TB backuppc hive to the DR copy, an > identical box in a different DC on the same campus, at approximately gigE > speeds, and ran this for a year or two. It worked well enough but > required babysitting from time to time. Both nodes were mdraid LVM > logical volumes formatted as a si...
2015 Jan 29
1
sizing samba environment
Hello, I would like to migrate and consolidate a number of Windows and Samba file servers into a new Samba installation. Current situation: ---------------------- At the moment about 3000 users are working across two sites with around 30TB of data. Over the next few years this may grow to a 50TB data volume. Planned hardware: ---------------------- The new disk storage is connected via FC (8GBit/s) to a site-redundant SAN. The storage itself uses an online mirror. Planned at the moment is to use 4 servers (two servers at each site), each with 64GB RAM and 2x Intel Xeon E5-2630 V2 (6 core...
2017 Sep 13
0
Basic questions
...-s? The distributed setup will go as large as 300TB overall. File sizes are 10MB-1.5GB. Are there any recommendations for performance tuning on volumes? I will not use tiering with SSDs. The network is going to be 10Gbps. Any advice on this topic is highly appreciated. I will start with 1 server and 50TB of disks on HW RAID10 and add servers as the data grows. -- Best regards, Roman.
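
As a starting point for the tuning question, a hedged sketch of a few commonly adjusted volume options (volume name illustrative; option names and defaults vary by release, so check 'gluster volume set help' for the version in use):

    gluster volume set bigvol performance.cache-size 1GB
    gluster volume set bigvol performance.io-thread-count 32
    gluster volume set bigvol performance.readdir-ahead on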