Duh, my /var filesystem filled up on ONE of the nodes.
Since the shared-storage bricks sit on /var (/var/lib/glusterd/ss_brick),
/run/gluster/shared_storage reports the maximum /var usage across the
nodes, which also explains the du/df mismatch below; du only counts the
files on the mount, while df reflects the bricks' backing filesystem.
Makes sense, actually. A quick check for next time:
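For the archives, two ways to spot the offending node (assuming
passwordless ssh between the nodes; the hostnames are the ones from the
volume info below). Gluster reports per-brick free space itself:

# gluster volume status gluster_shared_storage detail

and a plain df of /var on each node shows the same picture:

# for h in ace-storage-4n1 ace-storage-4n3 ace-storage-4n4; do ssh "$h" df -h /var; done

On whichever node is full, something like du -xh --max-depth=1 /var | sort -h
should point at what actually ate the space.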
On 2017-11-13 10:41, Pam Patterson wrote:
> Hello list,
>
> I recently enabled shared storage on a working cluster with nfs-ganesha
> and am just storing my ganesha.conf file there so that all 4 nodes can
> access it (baby steps). It was all working great for a couple of weeks
> until I was alerted that /run/gluster/shared_storage was full; see
> below. There was no warning; it went from fine to critical overnight.
>
> Filesystem                          Size  Used Avail Use% Mounted on
> /dev/md125                           50G  102M   47G   1% /
> devtmpfs                             32G     0   32G   0% /dev
> tmpfs                                32G     0   32G   0% /dev/shm
> tmpfs                                32G   17M   32G   1% /run
> tmpfs                                32G     0   32G   0% /sys/fs/cgroup
> /dev/md124                           59G  1.6G   55G   3% /usr
> /dev/md153p2                         13T   34M   13T   1% /glusterfs/a4/b2
> /dev/md151p1                         13T   34M   13T   1% /glusterfs/a2/b1
> /dev/md151p2                         13T   34M   13T   1% /glusterfs/a2/b2
> /dev/md152p1                         26T  4.4T   22T  17% /glusterfs/a3/b1
> /dev/md122                           20G  6.1G   13G  33% /var
> /dev/md126                          976M  233M  677M  26% /boot
> /dev/md150p1                         13T  1.1T   12T   9% /glusterfs/a1/b1
> /dev/md150p2                         13T  6.7T  6.2T  52% /glusterfs/a1/b2
> /dev/md123                          1.7T   77M  1.6T   1% /home
> /dev/md153p1                         13T  1.8T   11T  14% /glusterfs/a4/b1
> localhost:/gluster_shared_storage    20G   20G     0 100% /run/gluster/shared_storage
> tmpfs                               6.3G     0  6.3G   0% /run/user/1000
>
> There is only one file there
>
> # ls -la /run/gluster/shared_storage/nfs-ganesha/
> total 20
> drwxr-xr-x 2 root root  4096 Nov  7 13:57 .
> drwxr-xr-x 4 root root  4096 Nov 12 13:44 ..
> -rw-r--r-- 1 root root 11562 Nov  7 13:57 ganesha.conf
>
> # du -sh /run/gluster/shared_storage
> 20K    /run/gluster/shared_storage
>
> When I go look on the bricks themselves, I see some other files in
> .glusterfs, but certainly not 20GB worth (412K, 20K and 408K). Does
> gluster think it is really full, or is this just an artifact of the
> shared storage process?
>
> More info:
>
> CentOS 7, fully patched, glusterfs-3.12.1-2.el7.x86_64
>
> Volume Name: gluster_shared_storage
> Type: Replicate
> Volume ID: cc1fd307-a2bb-4901-a6f9-d92b0f52a65f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: ace-storage-4n3:/var/lib/glusterd/ss_brick
> Brick2: ace-storage-4n4:/var/lib/glusterd/ss_brick
> Brick3: ace-storage-4n1:/var/lib/glusterd/ss_brick
> Options Reconfigured:
> nfs.disable: ON
> cluster.enable-shared-storage: enable
>
> Thanks for any insights!
>
> Pam
>
> _______________________________________________
> Gluster-users mailing list
> gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
--
Pam Patterson
Linux Systems Administrator, MCIN
Room NW-130, 3801 Rue University
Montreal, QC, H3A 2B4
+1 514-398-6644 x2494