search for: 5tb

Displaying 20 results from an estimated 55 matches for "5tb".

2006 Jul 19
3
create very large file system
SUSE Linux Enterprise Server 9 SP3. I've tried to create a large 5TB file system using both reiserfs and ext3, and both have failed: I end up with only a 1.5TB file system. Does anyone know why this doesn't work, or what to do to fix it? Others have suggested that only XFS or JFS will work. Is this so? Thanks, -Mark
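For what it's worth, a frequent cause of multi-TB creation failures in that era was the 2TB ceiling of MSDOS partition tables; that is an assumption here, not something the thread confirms. A minimal sketch using a GPT label instead (device name hypothetical):

  parted /dev/sdb mklabel gpt
  parted /dev/sdb mkpart primary 0% 100%
  mkfs.xfs /dev/sdb1    # XFS copes well with multi-TB filesystems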
2003 Oct 21
1
Is anyone replicating .5TB or higher?
Greetings! I've heard about using rsync to replicate data across the WAN, but need to know if anyone is using it on a large scale. I have a client who is contemplating consolidating Windows file/print servers into a Linux partition on an iSeries. The show stopper is whether rsync (or any replication product) can and will replicate a) at the file level, and b) a database approaching .6TB
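For the file-level half of the question, large-tree replication is typically a plain rsync over SSH; a minimal sketch with hypothetical paths (a live database file would still need to be dumped or quiesced first to get a consistent copy):

  rsync -avH --delete /srv/data/ backuphost:/srv/data/
  # -a archive mode (permissions, times, symlinks), -H preserve hard links,
  # --delete remove files on the target that vanished from the source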
2015 Mar 27
2
Weird issue - STATUS_DISK_FULL
Hi All, I have a share on a btrfs volume that is reporting there isn't enough disk space for a backup, which is odd, as the file is about 300GB in size and df -h reports 5TB free. Any ideas?

  testparm -s
  Load smb config files from /etc/samba/smb.conf
  Processing section "[BACKUP00]"
  Loaded services file OK.
  Server role: ROLE_STANDALONE
  [global]
      map to guest = Bad User
      log file = /var/log/samba/log.%m
      max xmit = 65535
      deadtime =...
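A classic btrfs explanation (an assumption here, the thread does not confirm it) is chunk exhaustion: df still shows free space while all of it has been allocated to chunks. The filesystem's own tools show and fix this; mount point hypothetical:

  btrfs filesystem df /srv/backup              # data vs metadata allocation
  btrfs balance start -dusage=50 /srv/backup   # rewrite data chunks under 50% full to free space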
2012 Aug 19
2
LVM overhead? Does it cripple I/O?
For a high-performance system (64 cores, 512GB RAM, 5TB local disk, 110TB NFS-mounted storage) is there any advantage to dropping LVM and mounting partitions directly? We're not planning on changing partition sizes, but if we did we'd probably do a full rebuild. Has anyone done performance testing to show that LVM isn't crippling I/O? Thanx...
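One way to settle this empirically is to run the same benchmark against a logical volume and a raw partition on the same spindles; a minimal fio sketch (device names hypothetical, reads only so it is non-destructive):

  fio --name=lv  --filename=/dev/vg0/scratch --direct=1 --rw=read --bs=1M --size=4G
  fio --name=raw --filename=/dev/sdb2        --direct=1 --rw=read --bs=1M --size=4G

For plain linear mappings the device-mapper overhead generally measures in the noise, though that claim is worth verifying on the actual hardware.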
2012 Sep 29
1
quota severe performance issue help
...fsd/test_replica2_dis4
  Brick6: bjzw.miaoyan.cluster1.node6.qiyi.domain:/mnt/xfsd/test_replica2_dis4
  Brick7: bjzw.miaoyan.cluster1.node7.qiyi.domain:/mnt/xfsd/test_replica2_dis4
  Brick8: bjzw.miaoyan.cluster1.node8.qiyi.domain:/mnt/xfsd/test_replica2_dis4
  Options Reconfigured:
  features.limit-usage: /:5TB
  features.quota: enable

  -sh-4.1$ sudo dd if=/dev/zero of=iotest/testfff2 bs=1M count=1000
  1000+0 records in
  1000+0 records out
  1048576000 bytes (1.0 GB) copied, 40.3801 s, 26.0 MB/s

  __________________quota disabled_____________...
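The snippet already shows the A/B method: toggle the quota feature and repeat the dd. For reference, the gluster quota commands involved (the volume name is truncated in the snippet, so the one used here is hypothetical):

  gluster volume quota test_replica2_dis4 disable
  sudo dd if=/dev/zero of=iotest/testfff2 bs=1M count=1000
  gluster volume quota test_replica2_dis4 enable
  gluster volume quota test_replica2_dis4 limit-usage / 5TB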
2006 Aug 17
3
OSX file creation problems
I am running a RHEL 4.3 server with samba RPMs: 3.0.10-1.4e.9. When an OS X client connects to our samba server it can copy files from the server and overwrite other files that already exist. However, when it comes time to create a file, it fails, whereas this process works correctly from a Windows system. In addition, I can go to where the volume is mounted on the OS X (10.3.7) machine and copy
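A generic first step (a debugging sketch, not the thread's resolution) is to raise the per-client log level and rule out create-time permissions in smb.conf; the share name and path below are hypothetical:

  [global]
      log level = 3
      log file = /var/log/samba/log.%m
  [data]
      path = /srv/data
      create mask = 0664
      directory mask = 0775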
2009 Apr 03
10
btrfs for enterprise raid arrays
...rtial area and maybe can optimize the locality of frequently used blocks to optimize performance. Another thing is that some arrays have the capability to "thin-provision" volumes. In the back-end, on the physical layer, the array configures, let's say, a 1 TB volume and virtually provisions 5TB to the host. On writes it dynamically allocates more pages in the pool, up to the 5TB point. Now if for some reason large holes occur on the volume, maybe a couple of ISO images that have been deleted, what normally happens is just some pointers in the inodes get deleted, so from an array perspective...
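What the poster describes is what discard/TRIM support later standardized: the filesystem notifies the block layer which ranges are free, so a thin-provisioned array can reclaim them. The two usual mechanisms, with a hypothetical device and mount point:

  mount -o discard /dev/mapper/lun1 /mnt/thin   # issue discards online as files are deleted
  fstrim /mnt/thin                              # or trim all free space in one batch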
2017 Dec 11
2
How large the Arbiter node?
Hi, I see gluster now recommends the use of an arbiter brick in "replica 2" situations. How large should this brick be? I understand only metadata is to be stored. Let's say total storage usage will be 5TB of mixed size files. How large should such a brick be? -- Sent from the Delta quadrant using Borg technology! Nux! www.nux.ro
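The usual back-of-the-envelope sizing is per file, not per byte, since the arbiter keeps only metadata. The ~4KB-per-file figure is a commonly quoted rule of thumb, so treat these numbers as an assumption:

  5TB total / 1MB average file size  ≈ 5 million files
  5 million files x 4KB metadata each ≈ 20GB arbiter brick

The dominant variable is average file size: many small files push the arbiter requirement up even when total bytes stay the same.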
2006 Sep 19
4
Disk Layout for New Storage Server
...to 4 shelves of 12 500GB disks each, yielding a total of 24TB of raw storage. I'm kicking around the different ways to carve this space up, balancing storage space with data integrity. The layout that I have come to think is the best for me is to create a raidz2 pool for each shelf (i.e. 5TB per shelf of usable storage) and stripe across the shelves. This would let me lose up to two drives per shelf and still be operational. My only concern with this is if a shelf fails (SCSI card failure, etc) the whole system is down, however the MTBF for a SCSI card is WAAAAY higher than the MTBF...
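In ZFS terms that layout is one raidz2 vdev per shelf in a single pool; zpool stripes across vdevs automatically, and 12 x 500GB minus two parity disks per shelf gives the 5TB per shelf mentioned. A sketch with hypothetical Solaris device names, two shelves shown:

  # one 12-disk raidz2 vdev per shelf; add one raidz2 clause per shelf
  zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0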
2005 Sep 14
3
CentOS + GFS + EtherDrive
...shelf. Has anyone else tried such a setup? Is GFS stable enough to use in a production environment? There is a build of GFS 6.1 at http://rpm.karan.org/el4/csgfs/. Has anyone used this? Is it stable? Will I run into any problems if I use CentOS 3.5 with its GFS package to access a shared 5TB filesystem? I've been googling on this stuff off and on for a month now. I've found a bunch of conflicting and confusing information, but no clear answers. I've been kind of hoping that the new GFS 6.1 packages would be released while I was waiting... :) I appreciate any suggestion...
2014 Jul 15
1
fts solr database size
Hi, Could anyone share any numbers about real-life Solr database size/CPU/memory usage for certain amounts of messages? We now have over 5TB of maildirs (about 5,000-6,000 concurrent IMAP clients) and I'm trying to guess how much hardware might be needed. -- Michal
2013 Mar 05
1
Maildir or Mdbox and expunge messages.
...th expunge messages on Mdbox over strace (see the tail of this message). As far as I can see, the dovecot process opens the old storage m.* file, reads its content, opens a new temporary file, writes the content into it, and renames the new file to m.(*+1). How fast does this algorithm work on a system with about 10000 users and 5TB of data? I will use mdbox_rotate_interval for delayed expunge, but I think that simply deleting a file in Maildir must be faster than expunging from Mdbox. Please tell me about real experience working with Mdbox on big, loaded systems. ++++++++++++++++ 0.000017 open("/var/vmail/example.org/user/sto...
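The rewrite cost scales with the size of each m.* storage file, which is bounded by dovecot's rotation settings; the values below are illustrative, not from the thread:

  mdbox_rotate_size = 2M        # smaller storage files => cheaper rewrites on purge
  mdbox_rotate_interval = 1d    # force rotation at least daily

Note that expunge itself only flags the message in the index; the copy-and-rename seen in the strace is the deferred cleanup that doveadm purge -u <user> performs.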
2008 Jan 06
1
DRBD NFS load issues
My NFS setup is a heartbeat setup on two servers running Active/Passive DRBD. The NFS servers themselves are single 2-core Opterons with 8G RAM and 5TB of space across 16 drives on a 3ware controller. They're connected to an HP ProCurve switch with bonded Ethernet. The sync rates between the two DRBD nodes seem to safely reach 200Mbps or better. The processors on the active NFS server run with a load of 0.2, so it seems mighty healthy. Until I do...
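For reference, the resync rate quoted is a drbd.conf tunable in DRBD 8.x syntax; a minimal sketch with a hypothetical resource name:

  resource r0 {
    syncer {
      rate 25M;    # cap background resync at ~25MB/s (~200Mbps)
    }
  }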
2012 Feb 01
2
Doubts about dsync, mdbox, SIS
...145M 113M 33M 78% /usr/local/atmail/users

very little of this is compressed (zlib plugin enabled during Christmas). I'm surprised that the destination server is so large; I was expecting zlib and mdbox and SIS would compress it down to much less than what we're seeing (12TB -> 5TB):

  $ df -h /srv/mailbackup
  Filesystem                            Size  Used Avail Use% Mounted on
  /dev/mapper/mailbackupvg-mailbackuplv 5.7T  4.8T  882G  85% /srv/mailbackup

Lots and lots of the attachment storage is duplicated into identical files, instead of hard linked. When running...
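For what it's worth, the features mentioned map to these dovecot 2.x settings (a sketch, not the poster's actual config); SIS deduplicates through the attachment directory as mail is written, so data copied by other means stays duplicated:

  mail_plugins = $mail_plugins zlib
  plugin {
    zlib_save = gz          # compress newly saved mails
    zlib_save_level = 6
  }
  mail_attachment_dir = /srv/mail/attachments   # enables single-instance attachment storage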
2011 Jun 03
1
Suggestions welcome for expanded single array and redundancy
...forward. What we want to achieve is: add more servers (for the sake of the exercise, specs will be identical), keep the same level of redundancy (N+2), and increase storage.
If 2 more servers were installed: (5 x 1TB) - (2 redundant) = 3TB available
If 4 more servers were installed: (7 x 1TB) - (2 redundant) = 5TB available
and so on. Is this possible with glusterfs, and are there any suggestions on how to achieve it? Please keep in mind we are expecting data growth and would like to keep adding servers to increase storage. Regards, Stewart
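The (N x 1TB) - 2 arithmetic is exactly what gluster's later dispersed (erasure-coded) volume type provides; this syntax postdates the 2011 thread, so treat it as a forward-looking sketch with hypothetical host and brick names:

  gluster volume create vol01 disperse 5 redundancy 2 \
    server1:/brick server2:/brick server3:/brick server4:/brick server5:/brick
  # 5 bricks with redundancy 2 => 3 bricks of usable capacity (3TB from 5 x 1TB)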
2011 May 20
13
XCP bandwidth management
...ning nicely and would like to use it in production. However I'm struggling with the concept of bandwidth management. It seems like such a common problem that everyone must have, but I can't find any clear direction in which to go. The dedicated host I am using (Hetzner) gives me a 5TB monthly bandwidth quota which needs to be shared between all the VMs on the XCP. Ideally I would like something to automatically manage the bandwidth such that each VM is capable of using the full 100mbps speed of the connection, but will be throttled back if the throughput is sustained, so we hav...
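XCP's built-in control is a static per-VIF rate limit via xe; it caps throughput flat rather than throttling only sustained use, so it answers the question only partially. A sketch (UUID placeholder; check the xe docs for the kbps unit, which XenServer documents as kilobytes per second):

  xe vif-param-set uuid=<vif-uuid> qos_algorithm_type=ratelimit
  xe vif-param-set uuid=<vif-uuid> qos_algorithm_params:kbps=10240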
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
...orld? 2. What is the 'logical corruption boundary' for a zfs system - the filesystem or the zpool? 3. Are there scenarios apart from latency-sensitive applications (e.g. Oracle logs) that warrant separate zpools? One of our first uses for our shiny new server is to hold about 5TB of data which logically belongs on one share/mount, but has historically been partitioned up into 500GB pieces. The owner of the data is keen to see it available in one place, and we (as the infrastructure team) are debating whether it's a sensible thing to allow. Thanks for any advic...
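A possible middle ground (a sketch, not advice from the thread) is one pool with a single filesystem for the share, bounded by quota and reservation so it can neither starve nor be starved by other datasets:

  zfs create tank/bigshare
  zfs set quota=6T tank/bigshare        # hard ceiling
  zfs set reservation=5T tank/bigshare  # guaranteed floor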
2017 Dec 11
0
How large the Arbiter node?
...17, at 17:43, Nux! <nux at li.nux.ro> wrote:
> Hi,
> I see gluster now recommends the use of an arbiter brick in "replica 2" situations.
> How large should this brick be? I understand only metadata is to be stored.
> Let's say total storage usage will be 5TB of mixed size files. How large should such a brick be?
> --
> Sent from the Delta quadrant using Borg technology!
> Nux!
> www.nux.ro
2017 Feb 28
0
Index queue
Hi, can I somehow list the mailboxes which are still to be indexed by the indexer-worker (i.e. the index queue)? How can I know what part of all mailboxes has been indexed so far? Are there any statistics about Solr data dir size based on the amount of email? For example, we have about 5TB of emails; what should I expect for the index size in Solr? Thank you. azur
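I'm not aware of a command that lists the pending queue itself; the related knob that does exist is queueing index requests through the indexer with doveadm (username hypothetical):

  doveadm index -u azur -q INBOX   # -q hands the request to the indexer process queue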
2015 Jan 29
1
sizing samba environment
...TO-Collection/cfgsmarts.html So each server gets (almost) the same configuration. To decrease the recovery time (fsck, data restore from backup), the idea is to split the directory configured in "share path" into different mount points (partitions), in "smaller" volumes of around 3-5TB. An additional advantage is that if one file system crashes, not all data is impacted... My Questions: ---------------------- Are there any sizing guides out there, or can anyone help me? I need help to make decisions like: * What file system is suitable? I think about ext4 or xfs. (sta...
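The split-into-smaller-volumes idea boils down to separate block devices mounted below the share path; a hypothetical fstab sketch:

  /dev/vg00/smb01  /srv/samba/share/vol01  xfs  defaults  0 2
  /dev/vg00/smb02  /srv/samba/share/vol02  xfs  defaults  0 2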