Displaying 12 results from an estimated 12 matches for "32tb".
2020 May 13
2
CentOS 7 - xfs shrink & expand
...t but ran into a problem. It came up fine in
single-user/maintenance mode. The mount command shows all of the
mounted file systems, but after I 'chroot /sysroot', the mount failed
(with some problem with mtab, sorry don't have the exact error
message). So I couldn't mount my 32TB RAID (where the xfsdump file was).
On 5/13/2020 12:48 AM, Simon Matter via CentOS wrote:
> Hi,
>
>> I'm having some difficulty finding a method to shrink my /home to expand
>> my /. They both correspond to LVMs. It is my understanding that one
>> cannot shrink a xfs f...
2020 May 13
4
CentOS 7 - xfs shrink & expand
..., I'm running into a problem where /home needs to be "unused". I
tried going into "maintenance mode", but I ran into a problem with the
mount command (after issuing a 'chroot /sysroot'). I then tried using
SystemRescueCD to boot from, but it wouldn't mount my 32TB RAID USB drive
(something about it being too big).
Any thoughts or suggestions?
-Frank
2020 May 13
2
CentOS 7 - xfs shrink & expand
...me up fine in
>> single-user/maintenance mode. The mount command shows all of the
>> mounted file systems, but after I 'chroot /sysroot', the mount failed
>> (with some problem with mtab, sorry don't have the exact error
>> message). So I couldn't mount my 32TB RAID (where the xfsdump file was).
> I think you misunderstood what I meant. You appear to have booted into
> rescue mode, but that's not what I meant. What I meant is good old single
> user mode. The state you'll get with "telinit 1" or with "s" or "1"...
2020 May 13
0
CentOS 7 - xfs shrink & expand
...problem. It came up fine in
> single-user/maintenance mode. The mount command shows all of the
> mounted file systems, but after I 'chroot /sysroot', the mount failed
> (with some problem with mtab, sorry don't have the exact error
> message). So I couldn't mount my 32TB RAID (where the xfsdump file was).
I think you misunderstood what I meant. You appear to have booted into
rescue mode, but that's not what I meant. What I meant is good old single
user mode. The state you'll get with "telinit 1" or with "s" or "1" as a
kernel...
2006 Apr 19
1
Max filesystem size for ext3 using Adaptec RAID 5 on 64 bit CentOS
...e of each RAID 5 array should be to match
the OS we are using. We are currently running CentOS 4.3 64 bit. We
have planned a 2 TB RAID 5 array for testing but we will need to set
up several larger ones for production. I have poked around and I see
people mention limits like 2TB max file size and 32TB max filesystem
size. Many of these mentioned 2.4/2.5 kernels, and none specified
32-bit vs. 64-bit.
Can someone please provide the following:
max file size for 2.6.9 64 bit CentOS kernel
max partition/filesystem size for the same.
Thanks
Doug
--
What profits a man if he gains the whole wor...
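The 32TB figure quoted in the thread follows from a 32-bit block counter; a back-of-the-envelope check in shell (the 4KB block size is an assumption, pick whichever your mkfs would use):

```shell
# Rough limit check: with a 32-bit block counter,
# max filesystem size = block_size * 2^32.
block_size=4096
max_bytes=$(( block_size * 4294967296 ))
echo "$(( max_bytes / (1024 * 1024 * 1024 * 1024) ))TB"   # -> 16TB with 4KB blocks
```

With 8KB blocks the same arithmetic gives 32TB, which is where the larger number in the thread comes from; 8KB blocks are only usable on architectures with 8KB pages.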
2006 Oct 03
1
16TB ext3 mainstream - when?
Are we likely to see patches to allow 16TB ext3 in the mainstream
kernel any time soon?
I am working with a storage box that has 16x750GB drives RAID5-ed together
to create a potential 10.5TB of storage. But because ext3 is limited to
8TB, I am forced to split into 2 smaller ext3 filesystems, which is really
cumbersome for my app.
Any ideas anybody?
2015 May 07
0
Backup PC or other solution
...'t think there's an option to delete based on volume
free space; it's age-based, so you adjust the retention age to suit.
The compression and dedup work so well it amazes me that I have about
100TB worth of incremental backups stored on 6TB of actual disk. My
backup servers actually have 32TB after RAID 6+0, but only 20TB is
currently allocated to the backuppc data volume, so I can grow the /data
volume if needed.
--
john r pierce, recycling bits in santa cruz
2020 May 13
0
CentOS 7 - xfs shrink & expand
...nning into a problem where /home needs to be "unused". I
> tried going into "maintenance mode", but I ran into a problem with the
> mount command (after issuing a 'chroot /sysroot'). I then tried using
> SystemRescueCD to boot from, but it wouldn't mount my 32TB RAID USB drive
> (something about it being too big).
>
> Any thoughts or suggestions?
What is the problem if you boot directly into maintenance mode? Then it
should be possible to back up home to a remote destination, unmount /home,
remove the home LV, expand /, recreate home and mount it, restore...
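Since XFS cannot be shrunk, the sequence described here amounts to destroying and recreating the home LV. A rough, destructive sketch, assuming a volume group named `centos` with LVs `root` and `home` and a backup destination mounted at /mnt/backup (all names are assumptions; verify with `lvs` and test the backup before removing anything):

```shell
# DESTRUCTIVE sketch -- check LV/VG names with `lvs` first.
tar -C /home -czf /mnt/backup/home.tar.gz .   # back up /home off-volume
umount /home
lvremove /dev/centos/home                     # destroys the home LV
lvextend -L +50G /dev/centos/root             # grow root into the freed space
xfs_growfs /                                  # XFS can grow while mounted
lvcreate -L 100G -n home centos               # recreate a smaller home
mkfs.xfs /dev/centos/home
mount /home                                   # assumes the fstab entry still fits
tar -C /home -xzf /mnt/backup/home.tar.gz     # restore the data
```

The growth step works online precisely because `xfs_growfs` only extends a mounted filesystem; it is the shrink direction that XFS lacks, which is why /home has to be rebuilt rather than resized.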
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them.
Would the standard/recommended approach be to make each drive its own
filesystem, and export 24 separate bricks, server1:/data1 ..
server1:/data24 ? Making a distributed replicated volume between this and
another server would then have to list all 48 drives individually.
At the other extreme, I could put all 24 drives into some
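Under the per-drive-brick approach above, the create command really does enumerate every brick. A hypothetical sketch showing just the first two replica pairs (the volume name `data` and the brick paths are assumptions carried over from the question):

```shell
# One brick per drive, replica pairs spanning the two servers;
# the real command would continue the pattern up to /data24.
gluster volume create data replica 2 \
    server1:/data1 server2:/data1 \
    server1:/data2 server2:/data2
gluster volume start data
```

Brick order matters here: with `replica 2`, Gluster treats each consecutive pair in the list as one replica set, so interleaving server1/server2 as above keeps each drive mirrored across machines rather than within one.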
2001 Aug 30
3
Question about file system capacity
Dear list,
pardon my ignorance on ext2/3fs. Are the ext2fs 4TB filesystem
and 2GB maximum file size limits still true in ext3fs?
Thanks for your reply,
Gotze
2015 May 07
2
Backup PC or other solution
On 07/05/2015 00:47, John R Pierce wrote:
> On 5/6/2015 1:34 PM, Valeri Galtsev wrote:
>> My assistant liked backuppc. It is OK and will do a decent job for a
>> really small number of machines (thinking 3-4 IMHO). I run bacula,
>> which has close to a hundred clients; all is stored in files on RAID
>> units, no tapes. Once you configure it, it is nice. But to
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
...data
Brick2: gls2:/gluster/md3/workdata
Brick3: gls3:/gluster/md3/workdata
Brick4: gls1:/gluster/md4/workdata
Brick5: gls2:/gluster/md4/workdata
Brick6: gls3:/gluster/md4/workdata
etc.
- workload: the (in)famous "lots of small files" setting
- currently 70% of the volume is used: ~32TB
- file size: few KB up to 1MB
- so there are hundreds of millions of files (and millions of directories)
- each image has an ID
- under the base dir the IDs are split into 3 digits
- dir structure: /basedir/(000-999)/(000-999)/ID/[lotsoffileshere]
- example for ID 123456789: /basedir/123/456/123456...
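The ID-to-directory mapping described above can be sketched in shell; the `/basedir` prefix and the 3-digit splits are taken from the listing, so treat this purely as an illustration:

```shell
# Map an image ID to its storage path: first 3 digits, next 3 digits,
# then a directory named after the full ID.
id=123456789
path="/basedir/${id:0:3}/${id:3:3}/${id}"
echo "$path"   # -> /basedir/123/456/123456789
```

Splitting on the leading digits like this caps each directory at 1000 entries per level, which is exactly the kind of fan-out that keeps "lots of small files" workloads from piling millions of entries into one directory.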