Displaying 20 results from an estimated 65 matches for "6tb".
2008 Jul 24
6
6TB SCSI RAID vs. Centos
I have an Infortrend RAID box I'd like to see as one big 6TB partition,
but I can only get 2.2TB partitions to work. I was trying to do this
with an Adaptec controller, but apparently they (any of them) are only 48
bits wide. Does anybody have a working system for SCSI/CentOS over
2.2TB?
Milt Mallory
Topix.com
650-461-8316
Always consider the issues...
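If the controller can actually present the full 6TB LUN to the host, the
remaining 2.2TB ceiling is usually the msdos (MBR) partition label rather
than anything SCSI-specific; a GPT label removes it. A minimal sketch,
assuming the array shows up as /dev/sdb (device name and filesystem choice
are hypothetical):

  parted -s /dev/sdb mklabel gpt                 # GPT instead of msdos; lifts the 2.2TB partition limit
  parted -s /dev/sdb mkpart primary ext3 0% 100% # one partition spanning the whole array
  mkfs.ext3 /dev/sdb1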
2012 Dec 01
3
6Tb Database with ZFS
Hello,
I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I
want to set the arc_max parameter so ZFS can't use all of my system's memory,
but I don't know how much I should set. Do you think 24GB will be enough for a
6TB database? Obviously the more the better, but I can't set too much memory.
Has someone implem...
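Since Veritas Volume Manager is mentioned, this is presumably a Solaris
host, where the ARC cap goes in /etc/system. A minimal sketch; the 24 GiB
figure is just the value asked about, not a recommendation:

  * /etc/system: cap the ZFS ARC at 24 GiB (24 * 1024^3 bytes); takes effect after a reboot
  set zfs:zfs_arc_max=25769803776

On Linux/OpenZFS the equivalent would be the zfs_arc_max module parameter
(e.g. "options zfs zfs_arc_max=..." in /etc/modprobe.d/zfs.conf).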
2012 Apr 19
1
centos 6.2; mount 6TB OSX formatted FW
Hi all,
Trying to mount an FW800 6TB volume.
The logs say:
cannot find hfs+ superblock
and
volumes larger than 2TB are not supported yet
Is my problem really caused by the >2TB volume?
Thanks in advance,
- aurf
2006 Dec 01
1
maintain 6TB filesystem + fsck
I posted on the RHEL list about properly creating and tuning a 6TB ext3
filesystem here: http://www.redhat.com/archives/nahant-list/2006-November/msg00239.html
I am reading lots of ext3 links like:
http://www.redhat.com/support/wpapers/redhat/ext3/
http://lists.centos.org/pipermail/centos/2005-September/052533.html
http://batleth.sapienti-sat....
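For what it's worth, a rough sketch of common mke2fs/tune2fs choices for an
ext3 filesystem of this size; the device name, the largefile inode ratio,
and relaxing the periodic fsck are all assumptions, not anything taken from
the linked threads:

  mke2fs -j -m 0 -T largefile4 -L archive /dev/sdb1  # ext3 journal, no root reserve, fewer inodes for large files
  tune2fs -c 0 -i 0 /dev/sdb1                        # disable fsck by mount count and by interval; rely on manual fsck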
2011 Dec 28
3
Btrfs: blocked for more than 120 seconds, made worse by 3.2 rc7
.... When
I used 3.2rc6 it happened randomly on both machines after 50-500GB of
writes. With rc7 it happens after far fewer writes, probably 10GB or so,
but only on machine 1 for the time being. Machine 2 has not crashed yet
after 200GB of writes and I am still testing that.
Machine 1: btrfs on a 6TB sparse file, mounted as loop, on an XFS
filesystem that lies on a 10TB md RAID5. Mount options:
compress=zlib,compress-force
Machine 2: btrfs over md RAID5 (4x2TB) = 5.5TB filesystem. Mount options:
compress=zlib,compress-force
pastebins:
machine1:
3.2rc7 http://pastebin.com/u583G7jK
3.2rc6 http:...
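For anyone trying to reproduce the machine 1 layout, a minimal sketch of a
sparse backing file mounted via loop; the paths are hypothetical and the
steps are only a guess at the setup described above:

  truncate -s 6T /mnt/xfs/btrfs.img     # 6TB sparse file on the underlying XFS volume
  mkfs.btrfs /mnt/xfs/btrfs.img
  mount -o loop,compress=zlib,compress-force /mnt/xfs/btrfs.img /mnt/btrfs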
2015 Mar 31
3
Hdd maximum size
The server is a Lenovo RV-340 E2420 build 70AB001VUX with 8GB.
It supports SATA-3 6Gbps and RAID-5.
Can someone tell me whether it can handle 6TB hard drives?
--
Michel Donais
2004 Oct 26
4
Release of centos-3.3 ISP bill
It turns out that the release of CentOS 3.3 was so popular that it threw us
way over our ISP's threshold, and now we are stuck with a _very_ large
bill (as in an estimated 6TB of transfers). While on the one hand I am
ecstatic that we are so successful, on the other hand that is coming out of
the developers' pockets. The developers should be the last ones footing
these bills (and this one was very large).
You can help. Please consider a donation for each of the systems...
2017 Jan 20
6
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi,
Does anyone have experience with the ARC-1883I SAS controller under CentOS 7?
I am planning a RAID1 setup and I am wondering if I should use
the controller's RAID functionality, which has a 2GB cache, or should I go
with JBOD + Linux software RAID?
The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034,
7200rpm SAS/12Gbit, 128MB cache.
If hardware RAID is preferred, the controller's cache could be upgraded
to 4GB, and I wonder how much performance gain this would give me?
Thanks!
Peter
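If the JBOD + md route is chosen, a minimal sketch of the software-RAID
side; the device names and the choice of XFS are assumptions:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb  # mirror the two 6TB disks
  mkfs.xfs /dev/md0
  cat /proc/mdstat                                                      # watch the initial resync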
2023 Mar 18
1
hardware issues and new server advice
hi,
Our current servers are suffering from a weird hardware issue that
forces us to start over.
In short, we have two servers with 15 disks of 6TB each, divided into
three RAID5 arrays for three bricks per server, at 22TB per brick.
Each brick on one server is replicated to a brick on the second server.
The hardware issue is that somewhere in the backplane, random I/O errors
happen when the system is under load. These cause the RAID to fail
dis...
2003 Oct 27
2
Can Samba export 2TB+ filesystems?
Does Samba have any maximum filesystem size limitations?
In particular, can both 2.2.8 and 3.0 support 2TB+ filesystems?
For now, I am thinking of 6TB max, so I don't need to know about
petabytes or exabytes.
The other side of the question is: can Win9x, Win2K, etc. work with
filesystems over 2TB?
If the above is in a FAQ somewhere, a URL would be great.
Greg
--
Greg Freemyer
2008 Jun 18
1
mkfs.ocfs2: double free or corruption
...21000 rw-p b7e00000 00:00 0
b7e21000-b7f00000 ---p b7e21000 00:00 0
b7f13000-b7f15000 rw-p b7f13000 00:00 0
bff36000-bff4b000 rw-p bff36000 00:00 0 [stack]
Aborted
=================================================================================
I'm trying to format a 6TB iSCSI partition/device with OCFS2; this is
the system information:
- ocfs2-tools-1.2.7-1.el5, official RPM package from Oracle.
- Self compiled OCFS2 v1.3.9-0.1 under a CentOS 5 box:
./configure --with-kernel=/lib/modules/`uname -r`/build
--enable-vfs-rename-patch-override=yes
make
make install
- Running...
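For context, a typical mkfs.ocfs2 invocation for a volume of this size
might look like the sketch below; the block/cluster sizes, slot count,
label and device are assumptions, not the poster's actual command:

  mkfs.ocfs2 -b 4K -C 1M -N 4 -L ocfs2vol /dev/sdc   # 4K blocks, 1M clusters, 4 node slots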
2006 Apr 11
3
ext3 filesystem corruption
...In general...
We are running RedHat WS 3 Update 6, 2.4.21-40.2.ELsmp or
2.4.21-37.ELsmp.
We have a small SAN system that looks like this:
3 NFS servers, each containing 2 QLogic HBAs, connected to 2
QLogic switches,
connected to an nStor (now Xyratex) 6TB RAID system containing
2 (active-active) controllers.
On the first 2 occasions one of the controllers was failed over.
On a 3rd occasion both SAN switches lost power, and the hosts and RAID lost communication.
On all occasions the QLogic failover driver tried to start up on the alterna...
2018 Jan 02
0
Dbox and NVMe drives
Hi all,
We're looking at replacing our existing NFS setup and using local storage
on multiple nodes - using Director to send clients to the correct node.
We have about 10,000 mailboxes using about 6TB at the moment, a mix of IMAP
and POP, probably 65% IMAP.
I'm thinking of moving from Maildir to Dbox; using 2TB NVMe drives in RAID1
as primary storage and 4x 6TB SATA disks in RAID10 for altstorage and
probably smaller SATA SSDs in RAID1 for the indexes.
I guess my question is, have other pe...
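For reference, a minimal sketch of how that primary/alt split is usually
expressed in Dovecot, assuming the multi-dbox (mdbox) variant and
hypothetical paths:

  mail_location = mdbox:~/mdbox:ALT=/altstorage/%u   # NVMe-backed primary, SATA-backed ALT path

Older mail can then be pushed to the ALT path with doveadm altmove.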
2007 Jun 28
1
TermGenerator and SimpleStopper
Hi,
I'm using SimpleStopper with TermGenerator in a Python indexing
script, in an attempt to keep my index size down (currently 30K per
doc, and I have 200 million docs to index, which I think implies
6TB.) However, unprefixed (positional?) terms are not affected by
the stopper, though Z-prefixed terms are.
I assume this is intentional for phrase queries, but I need to reduce
my index size drastically. Is it possible to generate positional
terms, filtered with a stoplist, and not generate th...
2003 Oct 21
1
Is anyone replicating .5TB or higher?
...if anyone is using it on a large scale.
I have a client who is contemplating consolidating Windows file/print servers
into a Linux partition on an iSeries. The show stopper is whether rsync (or
any replication product) can and will replicate a) at the file level, and b)a
database approaching .6TB in size. Automation and accuracy are key.
Thanks in advance for any information!
Laura
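rsync does work at the file level; a sketch of the usual starting point for
a single large database file (host and paths are hypothetical, and the
database would need to be quiesced or snapshotted during the copy):

  rsync -avH --partial --inplace /data/db/ standby:/data/db/   # update in place so interrupted runs can reuse existing data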
2014 Feb 07
1
Raid on centos
OK, I've got an HP MicroServer that I'm building up.
It's got 4 bays to be used for data, which I'm considering setting up with
software RAID (mdadm).
I've got 2 x 2TB, 2 x 2.5TB and 2 x 1TB; I'm leaning towards using the four
2.x TB disks in a RAID5 array to get 6TB.
The data is currently on the 2.5TB disks.
So the plan so far:
Build the array as a degraded RAID5 with the two 2TB disks that are
empty.
Copy the data from one of the 2.5TB disks onto the array.
Add the now-empty 2.5TB disk to the array and wait for it to rebuild.
Copy the contents of the remaining 2....
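One way this plan maps onto mdadm, sketched with hypothetical device names;
note that a RAID5 array can only be created with a single member missing,
so the fourth disk is added later with a reshape:

  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc missing  # degraded 3-disk RAID5 (~4TB usable)
  mkfs.ext4 /dev/md0
  # ...copy the first 2.5TB disk's data onto /dev/md0, then let it rebuild:
  mdadm --add /dev/md0 /dev/sdd
  # ...later, grow to four members (~6TB usable) with the last freed disk:
  mdadm --add /dev/md0 /dev/sde
  mdadm --grow /dev/md0 --raid-devices=4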
2014 Jul 30
1
Shrinking a RAID array
My google-fu appears to be weak today...
I currently have 8 x 4TB in a RAID6.
So far I'm only using 6TB:
PV        VG    Fmt  Attr PSize  PFree  Used
/dev/md6  Large lvm2 a--  21.83t 15.37t 6.46t
Let's say I wanted to remove 2 of these disks from the array and
shrink it down to 6 x 4TB.
How would I do this?
--
rgds
Stephen
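A rough outline of one way to do this, assuming the layout shown above (LVM
directly on /dev/md6); every size below is a hypothetical placeholder, and
a full backup comes first:

  pvresize --setphysicalvolumesize <new-pv-size> /dev/md6   # shrink the PV so no extents sit in the space being removed
  mdadm --grow /dev/md6 --array-size=<new-size>             # reduce the usable size first; mdadm refuses to drop raid-devices otherwise
  mdadm --grow /dev/md6 --raid-devices=6 --backup-file=/root/md6-reshape.bak
  mdadm /dev/md6 --fail /dev/sdX --remove /dev/sdX          # once the reshape finishes, for each disk being retired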
2017 Dec 28
1
Adding larger bricks to an existing volume
I have a 10x2 distributed-replicate volume running Gluster 3.8.
Each of my bricks is about 60TB in size (6TB drives, RAID6, 10+2).
I am running out of storage, so I intend to add servers with larger 8TB
drives.
My new bricks will be 80TB in size. I will make sure the replica of the
larger brick matches it in size.
Will Gluster place more files on the larger bricks? Or will I have wasted
space?
In other wor...
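For reference, adding the new brick pair and kicking off a rebalance would
look roughly like this; the volume name and brick paths are hypothetical:

  gluster volume add-brick bigvol newserver1:/bricks/b11 newserver2:/bricks/b11  # add one replica pair at a time
  gluster volume rebalance bigvol start
  gluster volume rebalance bigvol status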
2007 Feb 23
2
iSCSI, windows, & local linux access
Hello all,
I am looking to build a larger array (6TB) using CentOS 4.4 to archive
data to. We want the Windows server to mount this array as a local
drive, so we were looking at iSCSI to do it. I have played with it in
the past and gotten it to work in this combo, but I have a question
about access to the data on the local (CentOS) machine....
2013 Nov 14
4
First Time Setting up RAID
...e a bunch of questions to ask, and I probably have a bunch more
that I should ask but do not know enough yet to ask.
From what I have read, it appears that the system disk must use RAID1 if it
uses RAID at all. Is this the case? If so, is there any benefit to be had by
taking two of the 8 drives (6TB) solely to hold the OS and boot partition?
Should these two drives be pulled and replaced with two smaller ones, or should
we bother with RAID for the boot disk at all?
Given that one or two drive bays will be given over to the OS, what should be
the configuration of the remaining six? It appears...