Praveen George
2017-May-03 20:33 UTC
[Gluster-users] postgresql is unable to create a table in gluster volume
Hi Team,

We've been intermittently seeing issues where postgresql is unable to create a table, or some info is missing.

Postgresql logs the following error:

ERROR: unexpected data beyond EOF in block 53 of relation base/16384/12009
HINT: This has been seen to occur with buggy kernels; consider updating your system.

We are using the k8s PV/PVC to bind the volumes to the containers, and using the gluster plugin to mount the volumes on the worker nodes and take them into the containers.

The issue occurs regardless of whether the k8s spec mounts the PV via the PV provider or mounts the gluster volume directly.

To check whether the issue is with the glusterfs client, we mounted the volume using NFS (NFS on the client talking to gluster on the master); the issue doesn't occur. However, the NFS client then talks directly to _one_ of the gluster masters, which means that if that master fails, it will not fail over to the other gluster master - we thus lose gluster HA if we go this route.

Has anyone faced this issue, and is there a fix already available for it? Gluster version is 3.7.20 and k8s is 1.5.2.

Thanks
Praveen
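For reference, the two client-side mount modes being compared look roughly like this (hostnames, volume name, and mount point are illustrative, not taken from the report):

    # glusterfs native client: fetches the volume layout and talks to all
    # bricks directly, so it keeps working if one gluster master fails
    mount -t glusterfs master1:/pgvol /var/lib/postgresql/data

    # NFSv3 client against gluster's built-in NFS server: every request
    # goes through master1, so a master1 failure takes the mount down
    mount -t nfs -o vers=3 master1:/pgvol /var/lib/postgresql/data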
Jiffin Tony Thottan
2017-May-04 07:19 UTC
[Gluster-users] postgresql is unable to create a table in gluster volume
On 04/05/17 02:03, Praveen George wrote:
> Hi Team,
>
> We've been intermittently seeing issues where postgresql is unable to
> create a table, or some info is missing.
>
> Postgresql logs the following error:
>
> ERROR: unexpected data beyond EOF in block 53 of relation
> base/16384/12009
> HINT: This has been seen to occur with buggy kernels; consider
> updating your system.
>
> We are using the k8s PV/PVC to bind the volumes to the containers, and
> using the gluster plugin to mount the volumes on the worker nodes and
> take them into the containers.
>
> The issue occurs regardless of whether the k8s spec mounts the PV via
> the PV provider or mounts the gluster volume directly.
>
> To check whether the issue is with the glusterfs client, we mounted the
> volume using NFS (NFS on the client talking to gluster on the master);
> the issue doesn't occur. However, the NFS client then talks directly
> to _one_ of the gluster masters, which means that if that master fails,
> it will not fail over to the other gluster master - we thus lose
> gluster HA if we go this route.

If you are interested, there are HA solutions available for NFS. It depends on which NFS solution you are using: if it is gluster NFS (the NFS server integrated with gluster), use CTDB; for NFS-Ganesha, we already have an integrated solution with pacemaker/corosync.

Please also update your gluster version, since it is EOL'd; you won't receive any more updates for that version.

--
Jiffin

> Has anyone faced this issue, and is there a fix already available for
> it? Gluster version is 3.7.20 and k8s is 1.5.2.
>
> Thanks
> Praveen
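A minimal sketch of the NFS-Ganesha/pacemaker route on the 3.7/3.8 series, assuming two nodes called server1 and server2 and a volume called pgvol (all placeholder names):

    # /etc/ganesha/ganesha-ha.conf on the ganesha nodes
    HA_NAME="ganesha-ha-demo"
    HA_VOL_SERVER="server1"
    HA_CLUSTER_NODES="server1,server2"
    # one floating VIP per node; clients mount via a VIP, and
    # pacemaker/corosync moves the VIP if its node fails
    VIP_server1="192.168.1.101"
    VIP_server2="192.168.1.102"

    # then, on one of the gluster nodes:
    gluster nfs-ganesha enable
    gluster volume set pgvol ganesha.enable on

Clients then mount a VIP rather than a fixed master, which restores failover on the NFS path.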