Displaying 8 results from an estimated 8 matches for "162g".
2007 Sep 04
2
shrink LV with ext3 filesystem
...backup? I do not have physical access to the server.
Specs:
Dell PE SC1430 with a 5/i RAID Controller
one RAID 1 array from the Dell RAID controller
2 partitions (boot and LVM)
1 VG
3 LVs (swap, /var (/dev/VolGroup00/LogVol02), and /, formatted with ext3)
/dev/mapper/VolGroup00-LogVol02 211G 39G 162G 20% /var
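For reference, a minimal sketch of the usual offline shrink sequence for an ext3 LV, assuming /var can be unmounted (e.g. from single-user mode); the 95G/100G targets below are illustrative, not from the thread:

    umount /var
    e2fsck -f /dev/VolGroup00/LogVol02          # forced check; resize2fs requires it
    resize2fs /dev/VolGroup00/LogVol02 95G      # shrink the fs below the target LV size
    lvreduce -L 100G /dev/VolGroup00/LogVol02   # then shrink the LV itself
    resize2fs /dev/VolGroup00/LogVol02          # grow the fs back to fill the LV exactly
    mount /var

Shrinking the filesystem before the LV, and leaving a margin between the two steps, avoids truncating live data if the size arithmetic is off.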
Thank you in advance.
Thomas Antony
2009 Apr 27
5
Wine and /home partition woes on Gentoo/amd64
I apologize if the question has been posed before, but I did not find any similar posts by browsing or searching through the forums. Here it goes:
I have been using Gentoo Linux for a year now, and never have I had a problem I couldn't solve. However, not long ago I bought a 1 TB hard disk which I divided into 7 partitions: /boot, (swap), /, /var, /tmp, /usr and /home (yes, that's FreeBSD
2023 Jul 04
1
remove_me files building up
...                   2.0G  456K  1.9G   1% /var/lib/glusterd
/dev/sdd1              15G   12G  3.5G  78% /data/glusterfs/gv1/brick3
/dev/sdc1              15G  2.6G   13G  18% /data/glusterfs/gv1/brick1
/dev/sde1              15G   14G  1.8G  89% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1  300G  139G  162G  47% /mnt/gfs
tmpfs                 796M     0  796M   0% /run/user/1004
Something I forgot to mention in my initial message is that the opversion was upgraded from 70200 to 100000, which could also have been a trigger for the issue.
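For reference, the op-version in effect can be checked and raised with the stock gluster CLI; the 100000 value below is simply the one mentioned in this thread:

    gluster volume get all cluster.max-op-version   # highest op-version this install supports
    gluster volume get all cluster.op-version       # op-version currently in effect cluster-wide
    gluster volume set all cluster.op-version 100000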
Thanks,
Liam Smith
Linux Systems Sup...
2023 Jul 04
1
remove_me files building up
...sdb1               2.0G  456K  1.9G   1% /var/lib/glusterd
/dev/sdd1              15G   12G  3.5G  78% /data/glusterfs/gv1/brick3
/dev/sdc1              15G  2.6G   13G  18% /data/glusterfs/gv1/brick1
/dev/sde1              15G   14G  1.8G  89% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1  300G  139G  162G  47% /mnt/gfs
tmpfs                 796M     0  796M   0% /run/user/1004
Something I forgot to mention in my initial message is that the opversion was upgraded from 70200 to 100000, which could also have been a trigger for the issue.
Thanks,
Liam Smith...
2023 Jul 04
1
remove_me files building up
...                   2.0G  456K  1.9G   1% /var/lib/glusterd
/dev/sdd1              15G   12G  3.5G  78% /data/glusterfs/gv1/brick3
/dev/sdc1              15G  2.6G   13G  18% /data/glusterfs/gv1/brick1
/dev/sde1              15G   14G  1.8G  89% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1  300G  139G  162G  47% /mnt/gfs
tmpfs                 796M     0  796M   0% /run/user/1004
Something I forgot to mention in my initial message is that the opversion was upgraded from 70200 to 100000, which could also have been a trigger for the issue.
Thanks,
Liam Smith
Linux Systems Sup...
2023 Jul 04
1
remove_me files building up
...sdb1               2.0G  456K  1.9G   1% /var/lib/glusterd
/dev/sdd1              15G   12G  3.5G  78% /data/glusterfs/gv1/brick3
/dev/sdc1              15G  2.6G   13G  18% /data/glusterfs/gv1/brick1
/dev/sde1              15G   14G  1.8G  89% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1  300G  139G  162G  47% /mnt/gfs
tmpfs                 796M     0  796M   0% /run/user/1004
Something I forgot to mention in my initial message is that the opversion was upgraded from 70200 to 100000, which could also have been a trigger for the issue.
Thanks,
Liam Smith...
2023 Jul 05
1
remove_me files building up
...                   2.0G  456K  1.9G   1% /var/lib/glusterd
/dev/sdd1              15G   12G  3.5G  78% /data/glusterfs/gv1/brick3
/dev/sdc1              15G  2.6G   13G  18% /data/glusterfs/gv1/brick1
/dev/sde1              15G   14G  1.8G  89% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1  300G  139G  162G  47% /mnt/gfs
tmpfs                 796M     0  796M   0% /run/user/1004
Something I forgot to mention in my initial message is that the opversion was upgraded from 70200 to 100000, which could also have been a trigger for the issue.
Thanks,
Liam Smith
Linux Systems Sup...
2023 Jul 03
1
remove_me files building up
Hi,
you mentioned that the arbiter bricks ran out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
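A quick way to gather what is being asked for here, assuming the brick mount points from the df output earlier in the thread (the arbiter's brick paths may differ):

    df -i /data/glusterfs/gv1/brick1     # inode usage/exhaustion per brick
    xfs_info /data/glusterfs/gv1/brick1  # XFS geometry, including isize and imaxpct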
Best Regards,
Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote:
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
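For context, a replica-3 arbiter volume with sharding is typically created and checked along these lines (hostnames and brick paths below are hypothetical, not from the thread):

    gluster volume create gv1 replica 3 arbiter 1 node1:/bricks/gv1 node2:/bricks/gv1 arb1:/bricks/gv1
    gluster volume set gv1 features.shard on          # enable sharding on the volume
    gluster volume get gv1 features.shard-block-size  # shard size currently in effect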
We had an issue a while back where one of the server's