search for: 20tb

Displaying 20 results from an estimated 42 matches for "20tb".

2008 Feb 06
2
strategy/technology to backup 20TB or more user's data
Hi Friends, I am currently using Samba on CentOS 4.4 as a domain member of AD 2003, with each user having a quota of 2GB (the number of users is around 2,000). Now management wants to increase the quota to 10GB; with this there will be more than 20TB of data to back up weekly, which will take many hours. Currently Veritas backup software is used to back up data to tape. There is a concept of Samba snapshots with LVM, where snapshots are taken at a given interval, but so far I haven't found any good article or how-to on that a...
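A minimal sketch of the LVM-snapshot approach being described, assuming the Samba shares live on a logical volume /dev/vg0/samba with free extents left in the volume group (all names here are hypothetical):

# freeze a point-in-time view of the Samba data volume
lvcreate --snapshot --size 50G --name samba-snap /dev/vg0/samba

# mount it read-only and back it up while users keep writing to the live share
mkdir -p /mnt/samba-snap
mount -o ro /dev/vg0/samba-snap /mnt/samba-snap
tar -czf /backup/samba-$(date +%F).tar.gz -C /mnt/samba-snap .

# drop the snapshot once the backup has landed on tape/disk
umount /mnt/samba-snap
lvremove -f /dev/vg0/samba-snap

The snapshot only has to hold blocks that change during the backup window, so it can be far smaller than the 20TB being copied.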
2013 Jun 27
1
15 min pause during boot - Setting up logical volume management
Hi all, I rebooted a server having a 20TB XFS volume under LVM and waited about 15 min for it to boot. It stays at "Setting up logical volume management" for 15 min, then proceeds to boot fine. During this time, I see the 14 disks of the 20TB volume flashing quickly as though being read. Nothing in my logs indicates bad behavior. I am running...
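If the delay really is LVM probing every block device at boot, one common mitigation is narrowing the scan filter so vgscan only touches the disks that actually carry physical volumes; a sketch, with hypothetical device names, for dracut-based systems:

# /etc/lvm/lvm.conf -- accept the system disk and the 14 array members, reject the rest
filter = [ "a|^/dev/sda|", "a|^/dev/sd[b-o]$|", "r|.*|" ]

# regenerate the initramfs afterwards so early boot honors the filter
dracut -f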
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
...nodes, but each will only have a 50TB brick. I think this is possible, but will it affect gluster performance, and if so, by how much? Assuming we run a rebalance with the force option, will this distribute the existing data proportionally? I.e., if the current 100TB volume has 60TB, will it distribute 20TB to the new set of servers? Thanks
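For reference, the expand-and-rebalance sequence being asked about generally looks like this (volume and brick names hypothetical; replica 2 assumed):

# add the new pair of bricks to the distribute-replicate volume
gluster volume add-brick myvol replica 2 server5:/bricks/b1 server6:/bricks/b1

# spread existing data across old and new bricks; 'force' also migrates files
# onto bricks with less free space than the source
gluster volume rebalance myvol start force

# check how much has been moved so far
gluster volume rebalance myvol status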
2013 Apr 29
1
Replicated and Non Replicated Bricks on Same Partition
Gluster-Users, We currently have a 30 node Gluster Distributed-Replicate 15 x 2 filesystem. Each node has a ~20TB xfs filesystem mounted to /data and the bricks live on /data/brick. We have been very happy with this setup, but are now collecting more data that doesn't need to be replicated because it can be easily regenerated. Most of the data lives on our replicated volume and is starting to waste s...
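One way to add a non-replicated scratch space out of the same ~20TB filesystems would be a second brick directory per node feeding a pure distribute volume (a sketch; all names hypothetical):

# on each node: a sibling brick directory next to the replicated one
mkdir -p /data/brick-scratch

# a distribute-only volume across all 30 nodes, no replica pairs
# (bash brace expansion produces the 30 brick arguments)
gluster volume create scratch node{01..30}:/data/brick-scratch
gluster volume start scratch

Both volumes then draw from the same pool of free space on /data, which is the trade-off to watch: a full scratch volume also starves the replicated bricks.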
2008 Feb 06
2
Can Samba shadow copying be used in a production environment with more than 20 TB of data
Hi Friends, I am currently using Samba on CentOS 4.4 as a domain member of AD 2003, with each user having a quota of 2GB (the number of users is around 2,000). Now management wants to increase the quota to 10GB; with this there will be more than 20TB of data to back up weekly, which will take many hours. Currently Veritas backup software is used to back up data to tape. There is a concept of Samba snapshots with LVM, where snapshots are taken at a given interval, but so far I haven't found any good article or how-to on that an...
2012 May 23
5
biggest disk partition on 5.8?
Hey folks, I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it, connected to a Sunfire x2250 running 5.8 (64-bit). I used 'arcconf' to create a big RAID60 out of them (see below). But then I mounted it and it is way too small. This should be about 20TB:
[root at solexa1 StorMan]# df -h /dev/sdb1
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 186G 60M 176G 1% /mnt/J4400-1
Here is how I created it: ./arcconf create 1 logicaldrive name J4400-1-RAID60 max 60 0 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11...
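For what it's worth, 186G is the classic wraparound signature: roughly 18.2TiB of RAID60 modulo the 2TiB limit of an msdos partition table leaves about 0.2TiB. If that is what happened here, a GPT label exposes the whole device (a sketch, assuming the array is /dev/sdb; the filesystem choice is illustrative):

# replace the msdos label with GPT so the partition can exceed 2TiB
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%

# re-create the filesystem and confirm the size
mkfs.xfs /dev/sdb1
mount /dev/sdb1 /mnt/J4400-1 && df -h /mnt/J4400-1   # should now report ~20TB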
2017 Aug 24
2
AArch64 buildbots and PR33972
I'd like to mention that the test does not allocate 30TB; it allocates 1TB. The rest, ~20TB, is reserved (but not actually used) for the ASan shadow memory, which should not be a problem by itself. The test on your bot failed because it tried to reserve 27TB of memory, which is more than the limit set by ulimit earlier in the test. I do not immediately see why it wants to reserve that much shadow for AA...
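A minimal illustration of the interplay described above: an address-space cap set with ulimit counts reserved-but-unused shadow mappings too, so a test can fail on the reservation alone (the binary name is hypothetical):

# run in a subshell so the limit does not stick to the login shell;
# ulimit -v takes KiB, so this caps the address space at ~25TiB
( ulimit -v $((25 * 1024 * 1024 * 1024))
  ./asan_test )   # 1TB of real allocations + ~20TB of shadow fits; 27TB of shadow would not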
2003 Jun 02
2
Any LARGE production Sambas?
Hi list, A customer was asking - is anyone doing LARGE Samba servers in production - in the range of 10-20TB or more? If so, what type of issues come up as the data sizes grow large? What architectures work well? Is there a recommended maximum TB/server? Thanks. -Mike MacIsaac, IBM mikemac at us.ibm.com (845) 433-7061
2012 May 02
1
File size diff between NFS mount and local disk
Hi all, I never really paid attention to this, but a file on an NFS mount is showing 64M in size, yet when copying the file to a local drive, it shows 2.5MB in size. My NFS server is hardware RAIDed with a volume stripe size of 128K, where the volume size is 20TB. My NFS clients run the same distro as the server, CentOS. Is this due to my stripe size? Nuggets are appreciated. - aurf
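The 64M-vs-2.5MB split looks like apparent size versus blocks actually allocated (a sparse file, for example, which cp can compact as it copies). Comparing both views on client and server would confirm it (the path is hypothetical):

ls -lh /mnt/nfs/bigfile                  # apparent size, what the NFS attributes report
du -h /mnt/nfs/bigfile                   # space actually allocated on disk
du -h --apparent-size /mnt/nfs/bigfile   # should match the ls -lh figure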
2013 Oct 21
1
DFS share: free space?
Hi, is it possible to use DFS and show the correct values of free space? I set up a DFS share located on filesystem1 (size 50GB) and linked shares of another server to this share (msdfs:<fs>\share): share1: size 110TB share2: size 50TB share3: size 20TB. But connecting to the DFS share, the disk size of this network drive is 50GB. Unfortunately, files larger than 50GB cannot be copied to the network drive. Any ideas? Thanks and best, Alex
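For reference, an msdfs root is typically wired up like this on the Samba side (server and share names hypothetical). The client reports the size of the filesystem hosting the DFS root itself, which is consistent with the 50GB being observed:

# smb.conf on the DFS host
[dfs]
   path = /export/dfsroot
   msdfs root = yes

# symlinks inside the root point at the real shares
ln -s 'msdfs:fileserver1\share1' /export/dfsroot/share1
ln -s 'msdfs:fileserver2\share2' /export/dfsroot/share2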
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
...only have a 50TB brick. I think this is possible, but will it > affect gluster performance and if so, by how much. Assuming we run a > rebalance with force option, will this distribute the existing data > proportionally? I.e., if the current 100TB volume has 60 TB, will it > distribute 20TB to the new set of servers? > > Thanks
2012 May 03
1
File size diff on local disk vs NFS share
...his but a file on an NFS mount is >>>> showing 64M in size, but when copying the file to a local drive, it >>>> shows 2.5MB in size. >>>> >>>> My NFS server is hardware RAIDed with a volume stripe size of 128K >>>> where the volume size is 20TB; my local disk is about 500GB. >>>> >>>> Is this due to my stripe size? >>>> >>>> Nuggets are appreciated. > >> By the way, this is only across NFS, as when ssh'd into the server, the file size shows 2.5M, same as the clients when it's l...
2006 Sep 19
4
Disk Layout for New Storage Server
We are implementing a ZFS storage server (NAS) to replace a NetApp box. I have a Sun server with two dual Ultra320 PCI-X cards connected to 4 shelves of 12 500GB disks each, yielding a total of 24TB of raw storage. I'm kicking around the different ways to carve this space up, balancing storage space with data integrity. The layout that I have come to think is the best for me is to
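One common carving for 48 x 500GB disks is several raidz2 vdevs striped into one pool, trading raw space for double-parity protection; a sketch with hypothetical Solaris device names, showing the first two of six 8-disk vdevs:

# each 8-disk raidz2 vdev yields 6 disks of usable space (~3TB);
# six of them would fill all 48 bays for ~18TB usable of the 24TB raw
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
  raidz2 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0
# dropping to five vdevs would free bays for hot spares, e.g.:
# zpool add tank spare c3t10d0 c3t11d0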
2017 Aug 30
2
Shares not accessible when using FQDN
Hi Rowland, The reason is long to explain, but in short it was about a huge amount of data, ~20TB, stored on that server with unix user IDs (coming from a S3/LDAP setup). In DC mode it seems unix IDs are in use instead of idmap IDs. The CNAME is indeed added. Regarding the migration, as said, we came from S3/LDAP and went to 4.6. The entire future structure is not fixed yet, but at this time we have...
2017 Oct 18
2
rsync ingest to new storage environment
All, I am seeding a new storage environment (GlusterFS on XFS) and would like to gather advice on best practices. This data is primarily media data, so it does not compress well. I have currently made one pass at a 20TB directory tree into the environment as:
- nfs mount from old storage to new storage
- rsync -av /old/storage/* /new/storage/directory
Once the directories and files were on the new storage, I did:
- chown -R root:root
- chmod -R 774
I'll need to do a couple more syncs prior to full cut...
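For the remaining passes, something along these lines is commonly used so deletions and hardlinks survive the cutover (paths hypothetical). Note that 'rsync /old/storage/*' skips top-level dotfiles; syncing the directory itself, with a trailing slash, avoids that:

# preserve hardlinks and sparse regions, keep numeric ownership, mirror deletions
rsync -aH --sparse --numeric-ids --delete \
    /old/storage/ /new/storage/directory/

# repeat once more during the cutover window, after writes to the old storage stop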
2015 May 07
2
Backup PC or other solution
...figure it, it is nice. But to make a configuration work for >> the >> first time is really challenging (says one who still managed to configure >> it > > I've been using BackupPC to back up about 25-30 servers and VMs for a > couple of years now. My backup server has a 20TB RAID dedicated to > BackupPC, using XFS on LVM, on CentOS 6.latest... That backup RAID is > mirrored to an identical server in a separate building via drbd for > disaster recovery. I keep 12+ months of monthly full backups, and 30+ > days of daily incrementals. The deduplicated and...
2013 Dec 09
3
Gluster infrastructure question
...addition I'll get double the HDD capacity. 2) I've heard a talk about GlusterFS and scaling out. The main point was that if more bricks are in use, the scale-out process will take a long time; the problem was/is the hash algorithm. So I'm asking: if I have one very big brick (RAID10, 20TB on each server) or many smaller bricks, which is faster, and are there any issues? Are there any experiences? 3) Failover of an HDD is not a big deal for a RAID controller with a hot-spare HDD. GlusterFS will rebuild automatically if a brick fails and no data is present; this action will...
2009 Nov 17
14
X45xx storage vs 7xxx Unified storage
We are looking at adding to our storage. We would like ~20-30TB. We have ~200 nodes (1100 cores) to feed data to using NFS, and we are looking for high reliability, good performance (up to at least 350 MBytes/second over a 10GigE connection) and large capacity. For the X45xx (aka Thumper), capacity and performance seem to be there (we have 3 now). Ho...
2015 May 06
2
Backup PC or other solution
On Wed, May 6, 2015 2:46 pm, m.roth at 5-cent.us wrote: > Alessandro Baggi wrote: >> Hi list, >> I'm new to backup ops and I'm searching for a good system to accomplish >> this >> work. I know that on CentOS there are Bacula and Amanda, but they are >> too >> tape-oriented. Another issue is that they are very powerful but more complex.
2007 Oct 09
7
ZFS 60 second pause times to read 1K
Every day we see pause times of sometimes 60 seconds to read 1K of a file, for local reads as well as NFS, in a test setup. We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool, and have the filesystems mounted over v5 krb5 NFS and accessed directly. The pool is a 20TB pool. There are three filesystems: backup, test and home. Test has about 20 million files and uses 4TB; these files range from 100B to 200MB. Test has a cron job to take snapshots every 15 minutes, at 1 min past each quarter hour. Every 15 min, at 2 min past each quarter hour, a cron batch job runs to zfs send/r...
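A sketch of that snapshot/send cadence as crontab entries (dataset, host, and wrapper names hypothetical):

# snapshot tank/test at 1 minute past each quarter hour (% must be escaped in crontab)
1,16,31,46 * * * * /sbin/zfs snapshot tank/test@auto-$(date +\%Y\%m\%d-\%H\%M)

# one minute later, ship the newest snapshot incrementally to the backup box;
# the wrapper would run: zfs send -i <previous> <newest> | ssh backup zfs recv -F tank/test
2,17,32,47 * * * * /root/bin/zfs-send-latest.sh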