John Strunk
2017-Sep-08 13:53 UTC
[Gluster-users] Redis db permission issue while running GitLab in Kubernetes with Gluster
Getting this answer back on the list in case anyone else is trying to share
storage. Thanks for the docs pointer, Tanner.

-John

On Thu, Sep 7, 2017 at 6:50 PM, Tanner Bruce <tanner.bruce at farmersedge.ca> wrote:

> You can set a security context on your pod to set the gid as needed:
> https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
>
> This should do what you need.
>
> Tanner
>
> ------------------------------
> *From:* gluster-users-bounces at gluster.org <gluster-users-bounces at gluster.org>
> on behalf of John Strunk <jstrunk at redhat.com>
> *Sent:* September 7, 2017 2:28:50 PM
> *To:* Gaurav Chhabra
> *Cc:* gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] Redis db permission issue while running
> GitLab in Kubernetes with Gluster
>
> I don't think this is a gluster problem...
>
> Each container is going to have its own notion of user ids, hence the
> mystery uid 1000 in the redis container. I suspect that if you exec into
> the gitlab container, it may be the one running as 1000 (guessing based on
> the file names). If you want to share volumes across containers, you're
> going to have to do something explicitly to make sure each of them (with
> their own uid/gid) can read/write the volume, for example by sharing the
> same gid across all containers.
>
> I'm going to suggest not sharing the same volume across all 3 containers
> unless they need shared access to the data.
>
> -John
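As a rough illustration of the shared-gid idea (not taken from the actual
deployments in this thread; the gid value, names and image are assumptions),
a pod-level fsGroup gives every container in the pod a common supplemental
group and asks Kubernetes to make the mounted volume writable by that group.
In a Deployment this fragment would live under spec.template.spec:

    # Illustrative sketch only: shared gid via fsGroup (values are assumptions)
    spec:
      securityContext:
        fsGroup: 2000                  # supplemental gid added to every container;
                                       # volumes that support ownership management
                                       # are made group-writable for this gid
      containers:
      - name: gitlab
        image: gitlab/gitlab-ce        # hypothetical image reference
        volumeMounts:
        - name: gluster-vol1
          mountPath: /home/git/data
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster-dyn-pvc

Whether the group ownership is actually applied depends on the volume plugin
honouring fsGroup, so it is worth confirming with ls -l from inside each
container after the pods restart.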
On Thu, Sep 7, 2017 at 12:13 PM, Gaurav Chhabra <varuag.chhabra at gmail.com> wrote:

>> Hello,
>>
>> I am trying to set up GitLab, Redis and PostgreSQL containers in
>> Kubernetes, using Gluster for persistence. The GlusterFS nodes are set up
>> on machines (CentOS) external to the Kubernetes cluster (which runs on a
>> RancherOS host). The issue is that when GitLab starts up, the login page
>> doesn't load. This is a fresh setup, not something that used to work and
>> then stopped.
>>
>> root at gitlab-2797053212-ph4j8:/var/log/gitlab/gitlab# tail -50 sidekiq.log
>> ...
>> 2017-09-07T11:53:03.099Z 547 TID-1fdf1k ERROR: Error fetching job: ERR Error running script (call to f_7b91ed9f4cba40689cea7172d1fd3e08b2efd8c9): @user_script:7: @user_script: 7: -MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
>> 2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis/client.rb:121:in `call'
>> 2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/peek-redis-1.2.0/lib/peek/views/redis.rb:9:in `call'
>> 2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis.rb:2399:in `block in _eval'
>> 2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis.rb:58:in `block in synchronize'
>> 2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /usr/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
>> 2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis.rb:58:in `synchronize'
>> ...
>>
>> So I checked the Redis container logs:
>>
>> [root at node-a ~]# docker logs -f 67d44f585705
>> ...
>> [1] 07 Sep 14:43:48.140 # Background saving error
>> [1] 07 Sep 14:43:54.048 * 1 changes in 900 seconds. Saving...
>> [1] 07 Sep 14:43:54.048 * Background saving started by pid 2437
>> [2437] 07 Sep 14:43:54.053 # Failed opening .rdb for saving: Permission denied
>> ...
>>
>> I searched online for this error and then noticed the following
>> permissions and ownership *inside* the Redis pod:
>>
>> [root at node-a ~]# docker exec -it 67d44f585705 bash
>> groups: cannot find name for group ID 2000
>> root at redis-2138096053-0mlx4:/# ls -ld /var/lib/redis/
>> drwxr-sr-x 12 1000 1000 8192 Sep  7 11:51 /var/lib/redis/
>> root at redis-2138096053-0mlx4:/# ls -l /var/lib/redis/
>> total 22
>> drwxr-sr-x  2 1000  1000      6 Sep  6 10:37 backups
>> drwxr-sr-x  2 1000  1000      6 Sep  6 10:37 builds
>> drwxr-sr-x  2 redis redis     6 Sep  6 10:14 data
>> -rw-r--r--  1 redis redis 13050 Sep  7 11:51 dump.rdb
>> -rwxr-xr-x  1 redis redis    21 Sep  5 11:00 index.html
>> drwxrws---  2 1000  1000      6 Sep  6 10:37 repositories
>> drwxr-sr-x  5 1000  1000     55 Sep  6 10:37 shared
>> drwxr-sr-x  2 root  root   8192 Sep  6 10:37 ssh
>> drwxr-sr-x  3 redis redis    70 Sep  7 10:20 tmp
>> drwx--S---  2 1000  1000      6 Sep  6 10:37 uploads
>> root at redis-2138096053-0mlx4:/# grep 1000 /etc/passwd
>> root at redis-2138096053-0mlx4:/#
>>
>> I ran the following and everything looked fine again:
>>
>> root at redis-2138096053-0mlx4:/# chown redis:redis -R /var/lib/redis/
>>
>> However, when I deleted and re-applied the GitLab deployment YAML, the
>> permissions inside the Redis container got skewed again. I am not sure
>> whether Gluster is messing up the Redis file/folder permissions, but I
>> can't think of any other reason except for the mount.
>>
>> One thing I would like to highlight is that all three containers use the
>> *same* PVC:
>>
>>       - name: gluster-vol1
>>         persistentVolumeClaim:
>>           claimName: gluster-dyn-pvc
>>
>> The above is common to all three. What differs is shown below:
>>
>> a) postgresql-deployment.yaml
>>
>>         volumeMounts:
>>         - name: gluster-vol1
>>           mountPath: /var/lib/postgresql
>>
>> b) redisio-deployment.yaml
>>
>>         volumeMounts:
>>         - name: gluster-vol1
>>           mountPath: /var/lib/redis
>>
>> c) gitlab-deployment.yaml
>>
>>         volumeMounts:
>>         - name: gluster-vol1
>>           mountPath: /home/git/data
>>
>> Any suggestions? Also, I guess this is not the right way to use the same
>> PVC/StorageClass for all three containers, because I just noticed that
>> all the contents end up in the same directory on the Gluster nodes.
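A sketch of the one-claim-per-service alternative mentioned above; the claim
name, StorageClass name and size are assumptions, not values from this thread.
PostgreSQL is shown, and Redis and GitLab would each get an equivalent claim
of their own:

    # Illustrative: a claim dedicated to PostgreSQL, provisioned as its own
    # Gluster volume (names and size are assumptions)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: postgresql-data-pvc        # hypothetical name
    spec:
      storageClassName: gluster-dyn    # assumed name of the Gluster StorageClass
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi                # assumed size

Each claim then maps to its own Gluster volume, so the three services stop
writing into one shared directory tree.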
>> I know there are many things involved here besides Gluster, so this may
>> not be _the_ right forum, but my gut feeling is that Gluster might be the
>> reason for the permission issue.
Gaurav Chhabra
2017-Sep-08 13:59 UTC
[Gluster-users] Redis db permission issue while running GitLab in Kubernetes with Gluster
You were right, John. After you mentioned the file names, I checked the
listing again and, yes, uid 1000 does belong to the 'git' user present in
the GitLab container. The long listing in my first mail had the contents of
GitLab, Redis and PostgreSQL all mapped into one single GlusterFS volume.

I was able to fix this by creating three separate volumes and mapping
/var/lib/postgresql, /var/lib/redis and /home/git/data to their respective
PersistentVolumeClaims (PVCs), which I created in Kubernetes. After that,
the Redis db permission issue went away. It seems that having all three
paths mapped to a single GlusterFS volume (which wasn't the right thing to
do in the first place, but I wasn't aware of how things work in GlusterFS)
was messing things up somehow.

Thanks for your help! I really appreciate it. :)

Regards,
Gaurav

PS: While I was about to hit the 'Send' button, I saw your latest update. I
guess I will not be needing the security-context approach for now, although
I still have one last issue that came up an hour ago. Almost certain it's
not related to Gluster :) Thank you!
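A sketch of how the wiring Gaurav describes might look in one of the
deployments after the fix; the claim name below is assumed rather than
copied from his actual YAML:

    # redisio-deployment.yaml, pod template fragment (illustrative)
        volumeMounts:
        - name: redis-data
          mountPath: /var/lib/redis
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data-pvc    # claim dedicated to Redis, no longer shared

With one volume per service, the ownership under /var/lib/redis is only ever
set by the Redis container, so redeploying GitLab can no longer change the
permissions underneath Redis.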