Displaying 5 results from an estimated 5 matches for "45t".
2001 Oct 24
0
45T [subject mis-encoded, apparently Korean]
45T [body mis-encoded; the message embeds this popup script:]
function openWin() {
  winObj = window.open("http://www.koagagu.com/segero/form.html", "????",
    "width=650,height=270,status=no,toolbar=no,directories=no,menubar=no,location=no,resizable=no,left=20,top=20");
}
2014 Jan 21
2
XFS : Taking the plunge
...they would be
sensible, but I now believe I should have specified the "inode64" mount
option to avoid all the inodes being stuck in the first TB.
The filesystem, however, is at 87% and does not seem to have had any
issues or problems.
> df -h | grep raid
/dev/sda 51T 45T 6.7T 87% /raidstor
Another question: could I now safely remount with the "inode64" option,
or will this cause problems in the future? I read the passage below in the
XFS FAQ but wondered if it has been fixed (backported?) in el6.4?
"Starting from kernel 2.6.35, you can try a...
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
...o believe everything is there. The total for used space in the df -h of the mountpoint is 83T, roughly half what is used.
Relevant lines from df -h on server-A:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 59T 42T 17T 72% /bricks/data_A1
/dev/sdb1 59T 45T 14T 77% /bricks/data_A2
/dev/sdd1 59T 39M 59T 1% /bricks/data_A4
/dev/sdc1 59T 1.9T 57T 4% /bricks/data_A3
server-A:/dataeng 350T 83T 268T 24% /dataeng
And on server-B:
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 59T 34T 25T...
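One way to see the discrepancy the poster describes is to sum the brick sizes and compare against the volume line. A sketch, with the df lines copied from the post above (in practice you would pipe `df -P /bricks/*` on each server):

```shell
# Brick lines from server-A, as posted.
df_out='/dev/sda1 59T 42T 17T 72% /bricks/data_A1
/dev/sdb1 59T 45T 14T 77% /bricks/data_A2
/dev/sdd1 59T 39M 59T 1% /bricks/data_A4
/dev/sdc1 59T 1.9T 57T 4% /bricks/data_A3'

# Strip the unit suffix and sum the Size column. With two such servers, a
# distributed volume should report roughly twice this figure; the volume
# line above shows only 350T.
echo "$df_out" | awk '{gsub(/T$/, "", $2); total += $2}
                      END {print total " TB of bricks on this server"}'
```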
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
We noticed something similar.
Out of interest, does du -sh . show the same size?
--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod
Words are my own opinions and do not necessarily represent those of my employer or partners.
> On 31 Jan 2018, at 12:47 pm, Freer, Eva B. <freereb at ornl.gov> wrote:
>
> After OS update to CentOS 7.4 or RedHat
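The du-vs-df comparison suggested above can be sketched on a throwaway directory, so the numbers are small and reproducible:

```shell
# Create a scratch directory with a known amount of data.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/blob" bs=1M count=8 status=none

# du walks the tree and sums per-file usage:
du -sh "$tmp"

# df asks the filesystem itself for the accounting of the whole mount; on a
# healthy local filesystem the two broadly agree, whereas on the affected
# Gluster mounts df under-reports capacity.
df -h "$tmp" | tail -1

rm -rf "$tmp"
```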
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the "df" command shows only part of the available space on the mount point for multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and clients.
We have 2 different server configurations.
Configuration 1: A distributed volume of 8 bricks with 4 on each server. The initial configuration had 4 bricks of