search for: 8tb

Displaying 20 results from an estimated 102 matches for "8tb".

2008 May 02
4
ext3 filesystems larger than 8TB
Greetings. I am trying to create a 10TB (approx) ext3 filesystem. I am able to successfully create the partition using parted, but when I try to use mkfs.ext3, I get an error stating there is an 8TB limit for ext3 filesystems. I looked at the specs for 5 on the "upstream" vendor's website, and they indicate that there is a 16TB limit on ext3. Has anyone been able to create an ext3 filesystem larger than 8TB? If ext3 isn't an option, has anyone used the kmod-xfs-smp.i686...
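The 8TB figure in posts like this one falls out of ext3's block-count limit directly; a quick shell sketch (assuming the common 4KiB block size) shows the arithmetic:

```shell
# Why older mkfs.ext3 refuses filesystems much over 8TB: the tools of that
# era cap ext3 at 2^31 - 1 blocks. With the usual 4KiB block size that is
# just under 8 TiB.
max_blocks=$(( 2**31 - 1 ))
block_size=4096
max_bytes=$(( max_blocks * block_size ))
echo "max ext3 size: ${max_bytes} bytes"        # 8796093018112 bytes
echo "whole TiB: $(( max_bytes / 2**40 ))"      # 7, i.e. just under 8 TiB
```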
2004 Aug 10
3
rsync to a destination > 8TB problem...
...ing /etc for now): rsync: writefd_unbuffered failed to write 4 bytes: phase "unknown": Broken pipe rsync error: error in rsync protocol data stream (code 12) at io.c(836) When I encountered the error, I suspected a problem with the filesystem size, so I started testing with 2TB, 4TB, 8TB filesystems. (I started with 2TB and grew the LV and resized the reiserfs each time.) Just under 8TB (7.45TB) everything worked fine. After going just over 8TB (8.45TB) I got the error again. Just for kicks, I tried syncing a single new file and that worked. I then tried syncing a single n...
2006 Oct 03
1
16TB ext3 mainstream - when?
Are we likely to see patches to allow 16TB ext3 in the mainstream kernel any time soon? I am working with a storage box that has 16x750GB drives RAID5-ed together to create a potential 10.5TB of storage. But because ext3 is limited to 8TB I am forced to split into 2 smaller ext3 filesystems, which is really cumbersome for my app. Any ideas, anybody?
2008 May 05
0
OT- Re: ext3 filesystems larger than 8TB
.... Is there a court tv mailing list out there? -Ross ----- Original Message ----- From: centos-bounces at centos.org <centos-bounces at centos.org> To: CentOS mailing list <centos at centos.org> Sent: Mon May 05 18:24:40 2008 Subject: Re: [CentOS] OT- Re: ext3 filesystems larger than 8TB Ross S. W. Walker wrote: > No doubt! > > The worse part is I don't believe it was premeditated. I think she came > over to drop off the kids and told him oh by the way I'm taking the > children to live with me in Russia, at that point he went into a fit of > anger and thr...
2010 Mar 25
1
Kickstart 8TB partition limit?
I found a kickstart installation with part pv.100000 --size=1 --grow volgroup vol0 pv.100000 creates a partition with a size of 8TB even though more than 9TB is available. I need to go in manually with gdisk to destroy the partition and recreate it with all available space. No filesystem is specified because I want to use xfs, which kickstart does not support out of the box. This is under 5.2, but the 5.3/5.4 relnotes do no...
2008 May 05
0
Way OT Re: OT- Re: ext3 filesystems larger than 8TB
...se, but still less than first degree. -Ross ----- Original Message ----- From: centos-bounces at centos.org <centos-bounces at centos.org> To: centos at centos.org <centos at centos.org> Sent: Mon May 05 18:41:10 2008 Subject: [CentOS] Way OT Re: OT- Re: ext3 filesystems larger than 8TB on 5-5-2008 3:24 PM John R Pierce spake the following: > Ross S. W. Walker wrote: >> No doubt! >> >> The worse part is I don't believe it was premeditated. I think she came >> over to drop off the kids and told him oh by the way I'm taking the >> children t...
2007 May 17
2
RFC: Tuning ext3
...reserve no blocks for the super-user. [NoSysFiles][MaxStor] D. Create using -E stride=N where N matches the underlying RAID. [GenNFSPerf] E. Use a kernel >= 2.6.19 (patches for extents and 48-bit support, requires Ubuntu 7.04 feisty or Fedora Core 7 or custom kernel) to allow filesystems > 8TB on Intel/AMD chips. [BigFS] F. Use an external journal on a separate high-RPM drive. [GenNFSPerf] G. Use a large journal. mkfs -J size=8192 [GenNFSPerf] H. Mount using -o orlov to use the Orlov block allocator (default, requires 2.6 kernel). Minimizes seeks by clustering files together. No risk....
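For item D above, stride is the RAID chunk (stripe unit) size divided by the filesystem block size; here is a small sketch with assumed example numbers (a 64KiB chunk and 4KiB blocks — substitute your array's real chunk size; /dev/sdX is a placeholder):

```shell
# stride = RAID chunk (stripe unit) size / ext3 block size.
# 64KiB chunk and 4KiB blocks are assumed example values.
chunk_bytes=$(( 64 * 1024 ))
block_bytes=4096
stride=$(( chunk_bytes / block_bytes ))
echo "stride=${stride}"                          # stride=16
# The matching mkfs call (shown, not executed) would look like:
# mkfs.ext3 -E stride=16 /dev/sdX
```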
2020 Sep 11
2
Copying TBs -> error -> work around
...em/storage, not with rsync. > > rsync (or the workload) is simply triggering the problem. Thanks for the response . . Hmm . . but the drive that goes read-only is being read FROM not TO . . it is hard to see how that should be an issue? The backstory is that a relatively recent internal 8TB Seagate Barracuda had its 7.2TB sda5 (home) partition corrupted - which itself was suspicious but not impossible of course - so I had to switch temporarily to an external USB 4TB drive (which was a backup drive and was already up-to-date) for /home. So now this exercise is rsyncing back to a N...
2012 Sep 13
5
Partition large disk
Hi, I have a 24TB RAID6 disk with a GPT partition table on it. I need to partition it into 2 partitions, one of 16TB and one of 8TB, to put ext4 filesystems on both. But I really need to do this remotely. ( if I can get to the site I could use gparted ) Now fdisk doesn't understand GPT partition tables and pat
2017 Jul 31
1
RECOMMENDED CONFIGURATIONS - DISPERSED VOLUME
...ratio. Yet RH recommends 8:3 or 8:4 in this case: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Recommended-Configuration_Dispersed.html My goal is to create a 2PB volume, and going with 10:2 vs 8:3/4 saves a few bricks. With 10:2 I'll use 312 8TB bricks and with 8:3 it's 396 8TB bricks (36 8:3 slices to evenly distribute between all servers/bricks) As I see it, 8:3/4 vs 10:2 gives more data redundancy (3 servers vs 2 servers can be offline), but is it critical with 12 nodes? Nodes are new and under warranty, it's unlikely I will lose 3...
2012 Aug 26
1
cluster.min-free-disk not working
Further to my last email, I've been trying to find out why GlusterFS is favouring one brick over another. In pretty much all of my tests gluster is favouring the MOST full brick to write to. This is not a good thing when the most full brick has less than 200GB free and I need to write a huge file to it. I've set cluster.min-free-disk on the volume, and it doesn't seem to have an
2006 Mar 18
1
ext3 - max filesystem size
Hi all, I am working with a pc cluster, running redhat el 4, on opteron cpus. we have several bigger RAID systems locally attached to the fileservers; now I would like to create a big striped filesystem with around 15TB. ext3 unfortunately only supports filesystem sizes up to 8TB; do you have an idea if / when this limit will be increased? I already found some discussions on LKML about it. Which FS would be a good alternative? AFAIK xfs is not supported by redhat el 4 ... thanks for any hint, cheers alex
2020 May 12
4
CentOS7 and NFS
Hi, I need some help with NFSv4 setup/tuning. I have a dedicated nfs server (2 x E5-2620, 8 cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x 8TB HDD) used by two servers and a small cluster (400 cores). All the servers are running CentOS 7, the cluster is running CentOS 6. From time to time on the server I get: kernel: NFSD: client xxx.xxx.xxx.xxx testing state ID with incorrect client ID And the client xxx.xxx.xxx.xxx freezes with:...
2020 Nov 06
2
ssacli start rebuild?
...errors would have to affect just the disk which is mirroring the disk that failed, this being a RAID 1+0. But if the RAID is striped across all the disks, that could be any or all of them. The array is still in production and still works, so it should just rebuild. Now the plan is to use another 8TB disk once it arrives, make a new RAID 1 with the two new disks and copy the data over. The remaining 4TB disks can then be used to make a new array. Learn from this that it can be a bad idea to use a RAID 0 for backups and that at least one generation of backups must be on redundant storage ...
2008 Feb 07
2
Centos 5.1 ext3 filesystem limit
Centos 5.1 documentation states that the supported ext3 filesystem limit is 16TB, yet I have a 9.5TB partition that is claimed to be too large: mke2fs 1.39 (29-May-2006) mke2fs: Filesystem too large. No more than 2**31-1 blocks (8TB using a blocksize of 4k) are currently supported. Am I missing something? > uname -a Linux fileserver.sharcnet.ca 2.6.18-53.1.6.el5 #1 SMP Wed Jan 23 11:28:47 EST 2008 x86_64 x86_64 x86_64 GNU/Linux -- Gary Molenkamp SHARCNET Systems Administrator University of Western Ontario gary at s...
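The mke2fs error quoted above is self-consistent: a 9.5TB (taken here as 9.5 TiB) filesystem needs more 4KiB blocks than the 2**31-1 this mke2fs supports, as a quick sketch confirms:

```shell
# Blocks needed for a 9.5 TiB filesystem at 4KiB per block, versus the
# limit quoted by mke2fs 1.39.
needed=$(( 95 * 2**40 / 10 / 4096 ))
limit=$(( 2**31 - 1 ))
echo "needed=${needed} limit=${limit}"   # needed=2550136832 limit=2147483647
test "$needed" -gt "$limit" && echo "too large for this mke2fs"
```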
2017 Nov 13
0
Prevent total volume size reduction
...and the GLusterFS volume will immediately appear bigger, up to the size of the smallest brick. Now, I had a problem on my setup, long story short, an LVM bug has forcibly unmounted the volumes on which my bricks are running, while gluster was being used. The problem is that instead of having an 8TB file system mounted on /mnt/bricks/vmstore the server suddenly found an empty /mnt/bricks/vmstore pointing at / of this server (20GB) After 3 hours during which Gluster complained about missing files on node1 (but continuing to serve files from node2 transparently), it decided to start healin...
2008 Jun 17
4
maximum MDT inode count
For future filesystem compatibility, we are wondering if there are any Lustre MDT filesystems in existence that have 2B or more total inodes? This is fairly unlikely, because it would require an MDT filesystem that is > 8TB in size (which isn't even supported yet) and/or has been formatted with specific options to increase the total number of inodes. This can be checked with "dumpe2fs -h /dev/{mdtdev} | grep 'Inode count'" on the MDT. If there are issues with privacy, please e...
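The 2B-inodes-implies->8TB reasoning above can be sketched with assumed numbers: if an MDT is formatted at roughly one inode per 4KiB (a common dense-inode ratio for MDTs; the real figure depends on the mkfs options used), 8 TiB yields about 2^31 inodes:

```shell
# Assumed: ~one inode per 4KiB of MDT space. An 8 TiB MDT then holds
# about 2^31 (~2.1B) inodes, so >= 2B total inodes implies an MDT near
# or beyond the 8TB limit mentioned above.
mdt_bytes=$(( 8 * 2**40 ))
bytes_per_inode=4096
echo "inodes: $(( mdt_bytes / bytes_per_inode ))"   # inodes: 2147483648
```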
2009 Aug 28
4
Setting up large (12.5 TB) filesystem howto?
...tors auto - currently set to 256 Block device 253:4 But, I can't create a filesystem on it: mkfs.ext3 -m 2 -j -O dir_index -v -b 4096 -L iscsi2lvol0 /dev/mapper/VolGroup02-lvol0 mke2fs 1.39 (29-May-2006) mkfs.ext3: Filesystem too large. No more than 2**31-1 blocks (8TB using a blocksize of 4k) are currently supported. The limits information provided by Red Hat says that RH EL 5.1 supports 16 TB filesystems: http://www.redhat.com/rhel/compare/ -> Maximum filesystem size (Ext3): 16TB in 5.1 Using a block size of 8192 gives a warning that this size is too l...
2008 Feb 13
1
Re: Disk partitions and LVM limits - SUMMARY
...people reported file systems of 80TB. Things to watch out for: - Make sure the driver you are using or the storage itself doesn't restrict you from making big partitions or file systems. - fdisk creates partitions up to 2.1TB in size. Use "parted" instead. - RHEL5 supports up to an 8TB ext3 file system. To create bigger than 8TB, use option "-F" and 4K blocks. Your options are: - If the storage is connected to a RAID controller you can use the controller to create smaller logical partitions. Then combine them with LVM. - If you really want to partition the drive u...
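The ~2.1TB fdisk ceiling quoted above comes from the MSDOS (MBR) partition table, which stores partition start and length as 32-bit sector counts; with traditional 512-byte sectors the arithmetic is:

```shell
# MBR stores partition start/length as 32-bit sector counts; with 512-byte
# sectors the largest addressable partition is 2^32 * 512 bytes.
max_bytes=$(( 2**32 * 512 ))
echo "MBR max partition: ${max_bytes} bytes"   # 2199023255552, ~2.2 decimal TB
```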