Displaying 19 results from an estimated 19 matches for "30tb".
2011 Jan 11
4
ext4 or XFS
Hi all,
I have a 30TB hardware-based RAID array and am wondering what you all
think of using ext4 over XFS.
I've been a big XFS fan for years, as I'm an Irix transplant, but I would
like your opinions.
This 30TB volume will be an NFS-exported asset for my users, housing
home dirs and other frequently accessed files...
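For context, a rough sketch of how such a volume might be formatted and exported over NFS; the device, mount point, and client subnet are hypothetical, not from the original post:
    mkfs.xfs -L homes /dev/sdb1          # or: mkfs.ext4 -L homes /dev/sdb1
    mount LABEL=homes /export/homes
    echo '/export/homes 192.168.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra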
2017 Aug 24
2
AArch64 buildbots and PR33972
I'd like to mention that the test does not allocate 30TB; it allocates 1TB. The
rest, ~20TB, is reserved (but not actually used) for ASan shadow memory, which
should not be a problem by itself.
The test on your bot failed because it tried to reserve 27TB of memory,
which is more than the limit set by ulimit earlier in this test. I do not immediately
see why it wants...
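A minimal sketch of the interaction described above, assuming a hypothetical ASan-instrumented binary; ulimit -v takes kilobytes:
    ulimit -v 10485760        # cap virtual address space at 10 GiB (hypothetical value)
    ./asan_test               # hypothetical binary; ASan's shadow reservation can fail under this cap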
2017 Aug 24
2
AArch64 buildbots and PR33972
Hi all,
It turns out we lost coverage of the release configuration on the
AArch64 buildbots for a while. This is because the machine that we
were running the clang-cmake-aarch64-full bot on became unstable
(random shutdowns after a day or two of building).
Embarrassingly enough, we didn't notice this until now because of a
bug in our internal bot monitoring page which listed the
2009 Jul 21
5
File Size Limit - Why/How?
Hello Samba Lists:
I am trying to read a 22TB file from a system running OpenSuSE 10.3/x64
(using whatever version of Samba came out with 10.3/x64). The file is on a
30TB XFS volume. I'm connecting over 10Gbit Ethernet from a Windows Server
2003/x64 client. If I try to read the 22TB file, I get the message "Access
Denied", but if I try to read 100GB files from the same volume, they read
with no problems. Please help... Note that I'm not trying to w...
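One hedged way to narrow down whether the limit is on the Samba side would be to fetch the file with smbclient and discard the data; the share, user, and file names here are hypothetical:
    smbclient //server/bigshare -U user -c 'get hugefile.img /dev/null'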
2009 Jun 01
7
Does zpool clear delete corrupted files
...Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 09 June 2006
Here's an (almost) disaster scenario that came to life over the past
week. We have a very large zpool containing over 30TB, composed
(foolishly) of three concatenated iSCSI SAN devices. There's no
redundancy in this pool at the zfs level. We are actually in the
process of migrating this to an x4540 + j4500 setup, but since the
x4540 is part of the existing pool, we need to mirror it, then detach
it...
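For reference, a short sketch of the commands usually involved here; the pool name is hypothetical. zpool clear only resets the error counters, it does not repair or delete the affected files:
    zpool status -v tank      # lists files with permanent errors
    zpool clear tank          # clears error counters; the corrupted files remain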
2019 Nov 09
0
Sudden, dramatic performance drops with Glusterfs
...wing two bricks, one on each server, started, and apparently healthy.
>>
>> How full is your zpool? Usually when it gets too full, ZFS performance drops seriously.
>
> The zpool is only at about 30% usage. It's a new server setup.
> We have about 10TB of data on a 30TB volume (made up of two 30TB ZFS raidz2 bricks, each residing on a different server, connected via a dedicated 10Gb Ethernet link).
>>
>> Try to rsync a file directly to one of the bricks, then to the other brick (don't forget to remove the files after that, as gluster will not know abou...
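A sketch of the test being suggested, with hypothetical paths and hostnames; the test file must be removed afterwards because Gluster does not know about it:
    zpool list                                        # check fill level on each server
    rsync -av testfile server1:/bricks/brick1/tmp/    # write straight to brick 1
    rsync -av testfile server2:/bricks/brick1/tmp/    # write straight to brick 2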
2010 Oct 19
8
Balancing LVOL fill?
Hi all
I have a server with some 50TB of disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now the space problem is gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives s...
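A hedged sketch of how to see the per-vdev imbalance and I/O distribution; the pool name is hypothetical:
    zpool iostat -v tank 5    # per-vdev allocation and ongoing I/O, sampled every 5 seconds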
2011 Oct 17
1
Need help with optimizing GlusterFS for Apache
Our webserver is configured as follows: the actual website files (PHP,
HTML, CSS, and so on) are on a dedicated non-GlusterFS ext4 partition.
However, the website accesses videos and especially image files on a
Gluster-mounted directory.
The write performance of our backend Gluster storage is not that
important, since it only comes into play when someone uploads a video or
image.
However, the files
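A hedged sketch of the kind of read-side volume options often tuned for such a workload on Gluster 3.x; the volume name and values are hypothetical, and option availability depends on the Gluster version:
    gluster volume set webmedia performance.cache-size 512MB
    gluster volume set webmedia performance.io-thread-count 16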
2010 Jul 31
6
Need suggestion for domain controller
Hi,
I wish to set up a domain controller based on CentOS 5.x. I am
considering the setups below.
1) Samba PDC
2) OpenLDAP
3) Combination of Samba PDC + LDAP
I am not sure which of the above to select. Can anyone please advise?
2011 Sep 04
1
Rsync with direct I/O
...sn't support this. How hard would it be
to implement this? Is it trivial enough to just change the calls in the
code with sed? I think this can significantly reduce CPU usage and
increase I/O speed when dealing with fast storage solutions. It can make
a huge difference when, say, transferring 30TB of data.
Here are some tests I did. So far the only thing I know of that can use
direct I/O is dd, which showed a huge improvement. CPU usage went from
very high down to under 15% and the speed almost doubled.
Is there anything else I can do to tune rsync to give me better speed
and/or less CPU...
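For reference, the kind of dd comparison described above, with hypothetical paths and sizes; dd exposes direct I/O via oflag/iflag:
    dd if=/dev/zero of=/mnt/fast/test bs=1M count=10240 oflag=direct   # direct I/O write
    dd if=/dev/zero of=/mnt/fast/test bs=1M count=10240                # buffered write, for comparison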
2015 Jan 29
1
sizing samba environment
Hello,
I would like to migrate and consolidate a lot of Windows and Samba file
servers into a new Samba installation.
Following situation:
----------------------
At the moment, about 3000 users are working across two sites with around
30TB of data.
Over the next few years, this may grow to a data volume of 50TB.
Planned Hardware:
----------------------
The new disk storage is connected via FC (8Gbit/s) to a site-redundant
SAN. The storage itself uses an online mirror. The current plan is
to use 4 servers (two servers at each site) with...
2012 Jul 09
2
Storage Resource RAID & Disk type
Are there any best practice recommendations on RAID & disk type for the shared storage resource for a pool? I'm adding shelves to a HP MSA2000 G2, 24 drives total, minus hot spare(s). I can use 600GB SAS drives (15k rpm), 1TB or 2TB Midline SAS drives (7200rpm). I need about 3TB usable for roughly 30 VM guests, 100GB each; these are web servers, so I/O needs are nominal.
I'm also
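Rough usable-capacity arithmetic under hypothetical layouts (assuming 2 hot spares out of the 24 bays); a sketch, not a recommendation:
    # 22 x 600GB 15k SAS in RAID10 : 11 x 600GB ~= 6.6TB usable
    # 22 x 600GB 15k SAS in RAID6  : 20 x 600GB ~= 12TB usable
    # requirement: 30 guests x 100GB = 3TB, so either layout covers it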
2012 Jun 12
1
Nagios hostgroup collation
Hi everyone,
I am reconsidering how I am using the Puppet Nagios functionality. At
the moment I am creating one service for each check on each host. A
lot of them are identical and would be better tied to hostgroups to
simplify my config. Namely, I have about 5,000 checks in there now,
which will go up to about 20K over the next month, and it's taking
about 5-10 minutes for a Puppet
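A hedged sketch of what a hostgroup-scoped service could look like with Puppet's built-in nagios_service type; the hostgroup, command, and template names are hypothetical:
    nagios_service { 'load-webservers':
      hostgroup_name      => 'webservers',
      service_description => 'load',
      check_command       => 'check_nrpe!check_load',
      use                 => 'generic-service',
    }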
2008 Feb 06
2
strategy/technology to backup 20TB or more user's data
Hi Friends,
I am currently using Samba on CentOS 4.4 as a domain member of AD 2003,
with each user having a quota of 2GB (the number of users is around 2,000).
Now management wants to increase the quota to 10GB; with this, there
will be more than 20TB of data to back up weekly, which will take
many hours. Currently, Veritas backup software is used to back up
data to tapes.
There is a concept of
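One commonly discussed approach, given as a hedged sketch with hypothetical paths, is hard-link-based weekly snapshots with rsync, so only changed data is written each week:
    rsync -a --link-dest=/backup/last /srv/shares/ /backup/$(date +%F)/
    ln -sfn /backup/$(date +%F) /backup/last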
2012 Feb 07
1
Recommendations for busy static web server replacement
Hi all
After being a silent reader for some time and not being very successful in getting
good performance out of our test set-up, I'm finally coming to the list with
questions.
Right now, we are operating a web server serving out 4MB files for a
distributed computing project. Data is requested from all over the world at a
rate of about 650k to 800k downloads a day. Each data file is usually
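Back-of-the-envelope load from the figures above, taking the upper end of the stated range:
    # 800,000 downloads/day x 4MB  ~= 3.2TB/day
    # 3.2TB / 86,400 s             ~= 37MB/s average, i.e. roughly 300Mbit/s sustained
    # peaks will be considerably higher than the daily average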
2012 Sep 27
6
11TB ext4 filesystem - filesystem alternatives?
Hi All.
I have a CentOS server:
CentOS 5.6 x86_64
2.6.18-238.12.1.el5.centos.plus
e4fsprogs-1.41.12-2.el5.x86_64
which has an 11TB ext4 filesystem. I have problems running fsck on it
and would like to change the filesystem, because I do not like the
possibility of a long fsck run on it; it's a production machine. Also,
I have some problems with running fsck (not enough RAM, problem with
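A hedged sketch of one migration path that comes up in threads like this, copying onto a freshly made XFS filesystem; the device and mount points are hypothetical:
    mkfs.xfs -L data /dev/sdc1
    mount /dev/sdc1 /mnt/new
    rsync -aH --numeric-ids /data/ /mnt/new/    # preserve hard links and ownership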
2008 Nov 06
45
'zfs recv' is very slow
Hi,
I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm
using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on
B is running extremely slowly. If I run the zfs send on A and redirect output
to a file, it sends at 2MB/sec. But when I use 'zfs send
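A sketch of the workaround most often mentioned for this symptom: buffering the stream so zfs recv is never starved. Snapshot names, host, and buffer sizes are hypothetical, and mbuffer must be installed on both ends:
    zfs send -i tank/fs@snap1 tank/fs@snap2 \
      | mbuffer -s 128k -m 512M \
      | ssh hostB 'mbuffer -s 128k -m 512M | zfs recv -F tank/fs'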
2007 Sep 19
53
enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS
...implementation. I have done a fair amount of reading and
perusal of the mailing list archives, but I apologize in advance if I ask
anything I should have already found in a FAQ or other repository.
Basically, we are looking to provide 5TB of usable storage initially,
potentially scaling up to 25-30TB of usable storage after a successful
initial deployment. We would have approximately 50,000 user home
directories and perhaps 1000 shared group storage directories. Access to
this storage would be via NFSv4 for our UNIX infrastructure, and CIFS for
those annoying Windows systems you just can'...
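At this scale, a common ZFS layout is one filesystem per home directory; a minimal sketch with hypothetical pool and user names:
    zfs create -o sharenfs=on tank/home
    zfs create tank/home/alice            # inherits sharenfs from tank/home
    zfs set quota=10G tank/home/alice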
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
RAID5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. I also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
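For what it's worth, the commands being asked about are roughly the following; the device names are hypothetical. Adding a second raidz1 vdev makes the pool stripe new writes across both vdevs:
    zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    zpool upgrade tank     # after moving to the newer build, to update the on-disk format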