search for: 100tb

Displaying 20 results from an estimated 37 matches for "100tb".

2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3-node gluster setup, each node with a 100TB brick (300TB total, 100TB usable due to replica factor 3). We would like to expand the existing volume by adding another 3 nodes, but each will only have a 50TB brick. I think this is possible, but will it affect gluster performance and if so, by how much? Assuming we run a rebalance with force opti...
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
...ce enabled for the volume and run rebalance with the start force option. Which version of gluster are you running (we fixed a bug around this a while ago)? Regards, Nithya On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote: > We currently have a 3 node gluster setup each has a 100TB brick (total > 300TB, usable 100TB due to replica factor 3) > We would like to expand the existing volume by adding another 3 nodes, but > each will only have a 50TB brick. I think this is possible, but will it > affect gluster performance and if so, by how much. Assuming we run a >...
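For readers landing here from the search results, a minimal sketch of the commands this thread is discussing, assuming a replica-3 volume; the volume name "gv0", hosts node4-node6 and brick paths are placeholders, not taken from the thread:

  gluster peer probe node4 && gluster peer probe node5 && gluster peer probe node6
  # add one new replica set of three (smaller) bricks to the existing volume
  gluster volume add-brick gv0 node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
  # spread existing data onto the new bricks; "force" also migrates files to bricks
  # with less free space than the source, which is the case with smaller bricks
  gluster volume rebalance gv0 start force
  gluster volume rebalance gv0 status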
2008 Jun 21
5
recommendations for copying large filesystems
I need to copy over 100TB of data from one server to another via network. What is the best option to do this? I am planning to use rsync but is there a better tool or better way of doing this? For example, I plan on doing rsync -azv /largefs /targetfs /targetfs is a NFS mounted filesystem. Any thoughts? TIA ------------...
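A hedged sketch of one way to run this bulk copy, pushing straight over SSH rather than through the NFS mount; the hostname and paths are placeholders, and --info=progress2 assumes rsync 3.1 or newer:

  # -a preserves permissions/times, -H keeps hard links, -A/-X carry ACLs and xattrs;
  # -z is usually not worth it on a fast LAN, especially for already-compressed data
  rsync -aHAX --numeric-ids --info=progress2 /largefs/ targethost:/targetfs/
  # re-run the same command afterwards to pick up anything that changed during the copy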
2008 Jun 02
2
RE: Largish filesystems [was Re: XFS install issue]
...2min down). After RH9 I switched to Centos. The system that I am currently configuring with 7+ TB of storage is one of the smaller storage servers for our systems. Using the same configuration with more drives we are planning several 20TB+ systems. For the work we do, a single file system over 100TB is not unreasonable. We will be replacing an 80TB SAN system based on StorNext with an Isilon system with 10G network connections. If there was a way to create a Linux (Centos) 100TB - 500TB or larger clustered file system with the nodes connected via infiniband that was easily manageable with...
2010 Dec 11
8
What NAS device(s) do you use? And why?
...s around but they do what it says on the box. My only real gripe with them is the lack of decent scalability. TheCus devices seem to be rather powerful as well, and you can stack up to 5 units together. But that's where the line stops. I'm now looking for something that could scale beyond 100TB on one device (not necessarily one unit though) and find it frustrating that most NAS's come in 1U or 2U at most. Maybe I'm just not shopping around enough, or maybe I just prefer well-known brands, I don't know. So, what do you use? How well does it work for you? And, how reliable /...
2018 May 26
2
glustefs as vmware datastore in production
...fs as vmware datastore working in production in a > real world case? How to serve the glusterfs cluster? As iscsi, NFS? > > Hi, I am using glusterfs 3.10.x for VMware ESXi 5.5 NFS DataStore. Our Environment is - 4 node supermicro server (each 50TB, NL SAS 4TB used, LSI 9260-8i) - Totally 100TB service volume - 10G Storage Network and Service network (for NFS) - VMware / Linux / IBM AIX Clients Currently, VM images on GlusterFS is used for Data(Backup) not OS Image. There is no glusterfs issue in 10 months. Best Regards
2011 Jun 07
2
Disk free space, quotas and GPFS
...ry share is in a fileset of its own, including all the users' home directories. All the filesets have a quota attached to them. What I would like is for the disk size and usage reported by Windows to be the quota limit and usage for the fileset, rather than for the entire file system, as at over 100TB it is somewhat misleading. I thought I would be able to use the dfree command option of smb.conf to report the correct information gathered through a script of some description. Unfortunately even a simple shell script that echoes a couple of numbers, is owned by root and has permissions 700 do...
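A minimal sketch of the hook being discussed: point smb.conf's "dfree command" option at a script like the one below. The path and quota numbers are placeholders, and the actual GPFS quota lookup (e.g. via mmlsquota for the fileset containing the share) is left out as an assumption. Samba runs the script with the queried directory as its argument and reads "total available" in 1 KiB blocks from stdout, so it must be executable by the account the smbd process runs it as.

  #!/bin/sh
  # /usr/local/bin/dfree.sh - referenced from smb.conf as: dfree command = /usr/local/bin/dfree.sh
  # $1 is the directory Samba is asking about; replace the placeholders with a
  # real quota query for the fileset that contains "$1".
  TOTAL_KB=10737418240                  # placeholder: 10 TB fileset quota, in 1 KiB blocks
  USED_KB=5368709120                    # placeholder: current fileset usage, in 1 KiB blocks
  echo "$TOTAL_KB $((TOTAL_KB - USED_KB))"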
2018 May 28
0
glustefs as vmware datastore in production
...duction in a real world case? How to serve the glusterfs > cluster? As iscsi, NFS? > > > Hi, > > I am using glusterfs 3.10.x for VMware ESXi 5.5 NFS DataStore. > > Our Environment is > - 4 node supermicro server (each 50TB, NL SAS 4TB used, LSI 9260-8i) > - Totally 100TB service volume > - 10G Storage Network and Service network (for NFS) > - VMware / Linux / IBM AIX Clients > > Currently, VM images on GlusterFS is used for Data(Backup) not OS Image. > > There is no glusterfs issue in 10 months. > > Best Regards > > > > > ---...
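Not the poster's exact configuration, but a rough sketch of serving a gluster 3.x volume to ESXi over the built-in gNFS server; the volume name, VIP and datastore name are placeholders:

  # on the gluster cluster
  gluster volume set vmstore nfs.disable off     # enable the built-in NFS server (gluster 3.x gNFS)
  gluster volume status vmstore                  # confirm an "NFS Server" process is listed per node

  # on each ESXi host (NFSv3)
  esxcli storage nfs add --host=gluster-vip --share=/vmstore --volume-name=vmstore-ds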
2010 Dec 11
9
What NAS device(s) do you use? And why?
...ut they do what it says on the box. My only real gripe with them is the lack of decent scalability. TheCus devices seem to be rather powerful as well, and you can stack up to 5 units together. But that's where the line stops. I'm now looking for something that could scale beyond 100TB on one device (not necessarily one unit though) and find it frustrating that most NAS's come in 1U or 2U at most. Maybe I'm just not shopping around enough, or maybe I just prefer well-known brands, I don't know. So, what do you use? How well does it work for you? And,...
2004 Dec 06
1
Maximum ext3 file system size ??
...dated or it is still 4TB for 2.6.* kernel? The site at http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html says that it is 4TB yet, but I would like to know if it is possible to create and use stable & easy-to-fix (or at least as stable & easy-to-fix as ext3) file systems as big as 100TB for 32 bit Linux architecture? Any experience and suggestions are greatly appreciated. Thanks. Q: What is the largest possible size of an ext3 filesystem and of files on ext3? inspired by Andreas Dilger, suggested by Christian Kujau: Ext3 can support files up to 1TB. With a 2.4 kernel the...
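A back-of-envelope check of why 100TB is out of reach here: with 4 KiB blocks and the 32-bit block numbers ext2/ext3 use, the ceiling is 16 TiB, and practical limits in 2.4 and early 2.6 kernels and e2fsprogs were lower still:

  # 2^32 addressable blocks * 4096 bytes per block, expressed in TiB
  echo $(( (2 ** 32) * 4096 / (2 ** 40) ))      # prints 16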
2010 Dec 21
5
relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship between ZFS, the ARC, and the page cache. We have an application that's a quasi-database. It reads files by mmap()ing them. (writes are done via write()). We're talking 100TB of data in files that are 100k->50G in size (the files have headers to tell the app what segment to map, so mapped chunks are in the 100k->50M range, though sometimes it's sequential.) I found it confusing that we ended up having to allocate a ton of swap to back anon pages behind...
2009 Nov 20
13
Data balance across vdevs
I'm migrating to ZFS and Solaris for cluster computing storage, and have some completely static data sets that need to be as fast as possible. One of the scenarios I'm testing is the addition of vdevs to a pool. Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more vdevs and would like to balance this data across the pool for performance. The data may be
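A hedged sketch of the usual answer to this scenario: ZFS will not redistribute existing data when vdevs are added, so the data has to be rewritten, for example with send/receive into a fresh dataset. Pool, dataset and disk names below are placeholders:

  zpool add tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0          # add the new vdev(s) to the pool
  zfs snapshot tank/data@rebalance
  zfs send tank/data@rebalance | zfs receive tank/data_new   # rewriting spreads blocks across all vdevs
  # after verifying the copy, destroy tank/data and rename tank/data_new into place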
2017 Jun 23
2
seeding my georeplication
I have a ~600TB distributed gluster volume that I want to start using geo-replication on. The current volume is on six 100TB bricks on 2 servers. My plan is: 1) copy each of the bricks to new arrays on the servers locally 2) move the new arrays to the new servers 3) create the volume on the new servers using the arrays 4) fix the layout on the new volume 5) start georeplication (which should be relatively small as mos...
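A rough sketch of the commands behind steps 4 and 5 of that plan, assuming the seeded volume on the new servers is the geo-replication slave; volume and host names are placeholders:

  # step 4: fix the layout on the newly created (seeded) volume
  gluster volume rebalance newvol fix-layout start

  # step 5: set up and start geo-replication from the existing volume to it
  gluster volume geo-replication mastervol newserver::newvol create push-pem
  gluster volume geo-replication mastervol newserver::newvol start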
2014 Feb 28
6
suggestions for large filesystem server setup (n * 100 TB)
Hi, over time the requirements and possibilities regarding filesystems have changed for our users. Currently I'm faced with the question: what might be a good way to provide one big filesystem for a few users which could also be enlarged? Backing up the data is not the question. Big in that context is up to a couple of 100 TB, maybe. O.K. I could install one hardware raid with e.g. N big drives
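One common way to meet the "big and enlargeable" requirement (a sketch of my own, not something proposed in the thread) is LVM underneath XFS, since XFS can be grown online; device, volume group and mount point names are placeholders:

  pvcreate /dev/sdc                        # a new RAID array presented as one block device
  vgextend bigvg /dev/sdc                  # add it to the existing volume group
  lvextend -l +100%FREE /dev/bigvg/data    # grow the logical volume into the new space
  xfs_growfs /srv/data                     # grow the mounted XFS filesystem (it cannot be shrunk)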
2018 Jun 19
2
[virtio-dev] Re: [PATCH v33 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
...uge guests become common in the future, we can easily tweak this API to fill hints into scattered buffers (e.g. several 4MB arrays passed to this API) instead of one as in this version. > > This limitation doesn't cause any issue from a functionality perspective. For an extreme case like a 100TB guest live migration, which is theoretically possible today, this optimization helps skip 2TB of its free memory. The result is that it may reduce live migration time by only 2%, but that is still better than not skipping the 2TB (if not using the feature). Not clearly better, no, since you are slowing the g...
2009 Nov 17
13
ZFS storage server hardware
Hi, I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use zfs as a storage server in the 10-100TB range. I'm in the same boat, but I've found that hardware choice is the biggest issue. I'm struggling to find something which will work nicely under solaris and which meets my expectations in terms of hardware. Because of the compatibility issues, I thought I should a...
2018 Jun 18
2
[PATCH v33 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Sat, Jun 16, 2018 at 01:09:44AM +0000, Wang, Wei W wrote: > On Friday, June 15, 2018 10:29 PM, Michael S. Tsirkin wrote: > > On Fri, Jun 15, 2018 at 02:11:23PM +0000, Wang, Wei W wrote: > > > On Friday, June 15, 2018 7:42 PM, Michael S. Tsirkin wrote: > > > > On Fri, Jun 15, 2018 at 12:43:11PM +0800, Wei Wang wrote: > > > > > Negotiation of the
2019 Feb 26
0
estimated number of years to TBW math
My nvme drive has a warranty of 100TBW. If I divide "Data Units Written:" by "Power On Hours:" I get the average written per hour. If I divide 100,000 (100TB) by that, then by 24 and by 365, I get the number of years to reach 100TBW. Right? This is strictly the time to TBW; what things fail because of age? > $ sudo smartct...
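A hedged sketch of that arithmetic in script form, assuming smartmontools NVMe output where one "Data Unit" is 512,000 bytes and an endurance rating of 100 TBW; the device path is a placeholder:

  sudo smartctl -A /dev/nvme0 | awk '
    /Data Units Written/ { gsub(",", "", $4); units = $4 }
    /Power On Hours/     { gsub(",", "", $4); hours = $4 }
    END {
      tb_written  = units * 512000 / 1e12            # TB written so far
      tb_per_hour = tb_written / hours               # average write rate per powered-on hour
      printf "years until 100 TBW at this rate: %.1f\n", 100 / tb_per_hour / 24 / 365
    }'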
2015 May 07
0
Backup PC or other solution
...een more or less constant size for a year or two now as it deletes the oldest backups. I don't think there's an option to delete based on volume free space; it's age based, so you adjust the retention age to suit. The compression and dedup work so well it amazes me that I have about 100TB worth of incremental backups stored on 6TB of actual disk. My backup servers actually have 32TB after raid 6+0, but only 20TB is currently allocated to the backuppc data volume, so I can grow the /data volume if needed. -- john r pierce, recycling bits in santa cruz