Displaying 20 results from an estimated 20 matches for "9tb".
2005 Sep 11
3
mkfs.ext3 on a 9TB volume
Hello,
I have:
CentOS4.1 x86_64
directly-attached Infortrend 9TB array QLogic HBA seen as sdb
GPT label created in parted
I want one single 9TB ext3 partition.
I am experiencing crazy behavior from mke2fs / mkfs.ext3 (tried both).
If I create partitions in parted up to approx. 4,100,000 MB,
mkfs.ext3 works great. It lists the right number of blocks a...
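For reference, a dry-run sketch of the layout the poster describes (the device name /dev/sdb is taken from the post; the percent syntax for parted's mkpart is an assumption about the parted version, so verify before running anything for real):

```shell
# Dry-run: print each command instead of executing it.
# /dev/sdb is the array as reported in the post; double-check before going live.
DEV=/dev/sdb
run() { echo "+ $*"; }     # change 'echo "+ $*"' to "$@" to really execute

run parted -s "$DEV" mklabel gpt               # GPT: msdos labels stop at 2TB
run parted -s "$DEV" mkpart primary 0% 100%    # one partition over the whole disk
run mkfs.ext3 -b 4096 "${DEV}1"                # 4 KiB blocks needed this far above 4TB
```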
2008 Dec 28
4
ZFS on Linux
...5.2 systems. The data I am
storing is very large text files where each file can range from 10M to
20G. I am very interested on the compression feature of ZFS, and it
seems no other native Linux FS supports it.
My questions are: Is ZFS stable? How does it scale for very large
filesystems, i.e., 2TB to 9TB? How is the performance of FUSE? I plan to
use it on my archive server first, so data reliability is very
important.
Any thoughts or ideas?
TIA
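As a sketch of what the compression setup being asked about would look like (pool name and devices are hypothetical; printed as a dry run rather than executed):

```shell
# Dry-run: echo the ZFS commands instead of running them.
run() { echo "+ $*"; }

run zpool create archive raidz /dev/sdb /dev/sdc /dev/sdd  # hypothetical devices
run zfs set compression=on archive   # lzjb by default; gzip is also available
run zfs get compression archive      # confirm the property took effect
```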
2011 Jul 18
1
Cannot install through (U)EFI
Hi all,
I've a little problem with CentOS 6 and EFI on Dell Poweredge R510...
My logical disk (hardware RAID) is a little bit greater than 9TB, so I must use EFI in order to see the whole disk space and boot on it, but my box doesn't want to boot the CentOS 6 x86_64 install DVD when I'm in EFI boot mode.
Has someone successfully installed CentOS 6 x86_64 in EFI mode?
Regards.
2011 Jun 17
1
gluster fuse disk state problem (reproducible )
.../2011-June/007980.html
and
http://gluster.org/pipermail/gluster-users/2011-May/007697.html
Recently I installed Glusterfs 3.2.0 on four workstations which share a
total of 28TB of disk-space between them for batch processing of FMRI
and DTI data. I had the same setup between three workstations and 9TB with
Glusterfs 3.0.4 for over a year, but the hard-disk pool was too small.
The three "old" workstations from the old cluster and two more will also
access the 28TB pool.
OS: Scientific Linux 6
Glusterfs 3.2.0 and 3.2.1 compiled from source rpm
I've been testing the new setup for ove...
2006 Nov 26
1
ext3 4TB fs limit on amd64 (FAQ?)
Hi,
I've a question about the max. ext3 FS size. The ext3 FAQ explains that
the limit is 4TB.
http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html
| Ext3 can support files up to 1TB. With a 2.4 kernel the filesystem size
| is limited by the maximal block device size, which is 2TB. In 2.6 the
| maximum (32-bit CPU) limit of block devices is 16TB, but ext3
| supports only up to 4TB.
2011 Apr 01
15
btrfs balancing start - and stop?
Hi,
My company is testing btrfs (kernel 2.6.38) on a slave MySQL database
server with a 195Gb filesystem (of which about 123Gb is used). So far,
we're quite impressed with the performance. Our database loads are high,
and if filesystem performance wasn't good, MySQL replication wouldn't
be able to keep up and the slave latency would begin to climb. This
though, is
2018 Jul 10
0
Geo replication manual rsync
Hi all,
I have setup a gluster system with geo replication (Centos 7, gluster 3.12).
I have moved about 30 TB to the cluster.
It seems to go really slowly for the data to be synchronized to geo-replication.
It has been active for weeks and still just 9TB has ended up on the slave side.
I pause the replication once a day and make a snapshot with a script.
Does this slow things down?
Is it possible to pause replication and do a manual rsync, or does this disturb the geo-sync when it is resumed?
Thanks!
Best regards
Marcus
################
Marcus Pe...
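The pause/resume cycle the poster describes would look roughly like this (volume, host, and snapshot names are hypothetical, gluster 3.12 syntax assumed; whether a manual rsync interferes with the changelog-based sync is exactly the open question, so this is a dry run only):

```shell
# Dry-run: echo the gluster commands instead of running them.
run() { echo "+ $*"; }

run gluster volume geo-replication mastervol slavehost::slavevol pause
run gluster snapshot create nightly mastervol   # the scripted daily snapshot
run gluster volume geo-replication mastervol slavehost::slavevol resume
```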
2008 Jul 18
0
nfs high load issues
...r rates averaging around 55MB/sec
using scp, so I don't think I have a network/wiring issue. The nics are
intel 82541GI nics on a Supermicro serverboard X6DH8G. I am not
currently bonding, although I want to if the bandwidth will actually
increase.
Both servers' filesystems are ext3, 9TB, running raid 50 on a 3Ware
9550SX raid card. I have performed the tuning procedures listed on
3Ware's website. The "third" server has an XFS filesystem, and the same
problem exists.
I am using the e1000 driver (default on install). The servers are 64
bit, and have 2 gigs of m...
2010 Mar 25
1
Kickstart 8TB partition limit?
I found a kickstart installation with
part pv.100000 --size=1 --grow
volgroup vol0 pv.100000
creates a partition with a size of 8TB even though more than 9TB is available.
I need to go in manually with gdisk to destroy the partition and recreate it
with all available space.
No filesystem is specified because I want to use xfs, which kickstart does not
support out of the box. This is under 5.2, but the 5.3/5.4 relnotes do not
indicate that this prob...
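One workaround often used here is to carve the partition in a %pre script so anaconda's own --grow sizing is never applied; this is a sketch only, assuming the target disk is /dev/sda and that mkfs.xfs happens later in %post:

```
%pre
# Assumption: target disk is /dev/sda; create the full-size partition
# ourselves instead of letting "part ... --grow" decide.
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 100%
%end
```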
2011 Jun 06
2
Re: New btrfsck status
Chris Mason on 10 Feb 13:17:
> Excerpts from Ben Gamari's message of 2011-02-09 21:52:20 -0500:
> > Over the last several months there have been many claims regarding
> > the release of the rewritten btrfsck. Unfortunately, despite
> > numerous claims that it will be released Real Soon Now(c), I have
> > yet to see even a repository with preliminary code. Did I
2008 May 02
4
ext3 filesystems larger than 8TB
Greetings.
I am trying to create a 10TB (approx) ext3 filesystem. I am able to
successfully create the partition using parted, but when I try to use
mkfs.ext3, I get an error stating there is an 8TB limit for ext3
filesystems.
I looked at the specs for 5 on the "upstream" vendor's website, and they
indicate that there is a 16TB limit on ext3.
Has anyone been able to create
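Both numbers are consistent with ext3's 32-bit block addressing at the default 4 KiB block size; a quick back-of-envelope check (a sketch, not vendor documentation):

```shell
# 2^31 blocks * 4 KiB = 8 TiB  (the cap older e2fsprogs releases enforced)
# 2^32 blocks * 4 KiB = 16 TiB (the upstream ext3 maximum)
echo $(( (1 << 31) * 4096 / (1 << 40) ))   # 8
echo $(( (1 << 32) * 4096 / (1 << 40) ))   # 16
```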
2013 Nov 04
5
[OT] Building a new backup server
Guys,
I was thrown a cheap OEM server with a 120 GB SSD and 10 x 4 TB SATA disks
to build a backup server from. It's built around an Asus Z87-A that
seems to have problems with anything Linux unfortunately.
Anyway, BackupPC is my preferred backup-solution, so I went ahead to install
another favourite, CentOS 6.4 - and failed.
The raid controller is a Highpoint RocketRAID
2012 Jun 17
26
Recommendation for home NAS external JBOD
...is not the most performant solution but this is a home nas storing tons of pictures and videos only. And I could use the internal disks for backup purposes.
Any suggestion for components are greatly appreciated.
And before you ask: Currently I have 3TB net. 6 TB net would be the minimum target. 9TB sounds nicer. So if you have 512b HD recommendations with 2/3TB each or a good JBOD suggestion, please let me know!
Kind regards,
JP
2009 Aug 28
4
Setting up large (12.5 TB) filesystem howto?
Hi,
I'm trying to set up an iSCSI 12.5 TB storage for some data backup.
Doing so, I had some difficulty finding the right tool; maybe it's
also a question of the system settings...
The server is a 32-bit CentOS 5.3 with the recent updates. The iSCSI
connection can be established.
fdisk and parted fail to create any information on the device or fail
completely.
using the lvm tools
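Since fdisk's DOS label cannot describe anything past 2TB, LVM over the raw device is the usual route; a dry-run sketch with hypothetical device and volume names:

```shell
# Dry-run: echo the LVM commands instead of running them.
run() { echo "+ $*"; }

run pvcreate /dev/sdb                          # whole device, no partition table
run vgcreate backupvg /dev/sdb
run lvcreate -l 100%FREE -n backuplv backupvg  # one LV spanning the 12.5 TB
run mkfs.xfs /dev/backupvg/backuplv            # xfs sidesteps ext3's size caps
```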
2013 Nov 14
4
First Time Setting up RAID
Arch = x86_64
CentOS-6.4
We have a cold server with 32Gb RAM and 8 x 3TB SATA drives mounted in hotswap
cells. The intended purpose of this system is as an ERP application and DBMS
host. The ERP application will likely eventually have web access but at the
moment only dedicated client applications can connect to it.
I am researching how to best set this system up for use as a production host
2013 May 01
9
Best Practice - Partition, or not?
Hello
If I want to manage a complete disk with btrfs, what's the "Best Practice"?
Would it be best to create the btrfs filesystem on "/dev/sdb", or would it be
better to create just one partition from start to end and then do "mkfs.btrfs
/dev/sdb1"?
Would the same recommendation hold true if we're talking about huge disks,
like 4TB or so?
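The two layouts being compared, as a dry run (the device /dev/sdb is hypothetical):

```shell
# Dry-run: echo the commands for both options instead of running them.
run() { echo "+ $*"; }

# Option A: btrfs straight on the raw device
run mkfs.btrfs /dev/sdb

# Option B: one full-size GPT partition, then btrfs on top of it
run parted -s /dev/sdb mklabel gpt
run parted -s /dev/sdb mkpart primary 0% 100%
run mkfs.btrfs /dev/sdb1
```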
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
...the 5 drives are "ONLINE".
Can anyone please point me to a next step?
I can also make the solaris machine available via SSH if some wonderful
person wants to poke around. If I lose the data that's ok, but it'd be nice
to know all avenues were tried before I delete the 9TB of images (I need the
space...)
Many thanks,
Scott
zfs-list at thismonkey dot com
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5tb raidz1.
I want to add "phase 2" which is another 7x1.5tb raidz1
Can I add the second phase to the first phase and basically have two
raid5's striped (in raid terms?)
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
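The command being asked for would be roughly the following (the pool name "tank" and the cXtYdZ disk names are hypothetical; shown as a dry run):

```shell
# Dry-run: echo the zpool commands instead of running them.
run() { echo "+ $*"; }

# Stripe a second 7-disk raidz1 vdev into the existing pool
run zpool add tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
run zpool upgrade tank    # optionally bump the on-disk format afterwards
run zpool status tank
```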
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find trying to find the answer to my questions. But I believe the answer I am looking for is not going to be documented and is probably best learned from experience.
This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
2009 Jul 23
1
[PATCH server] changes required for fedora rawhide inclusion.