Displaying 20 results from an estimated 33 matches for "k8s".
2017 Aug 10
2
Kubernetes v1.7.3 and GlusterFS Plugin
short copy from a kubectl describe pod...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
FailedMount (combined from similar events): MountVolume.SetUp failed for
volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
failed: mount failed: exit status 1
Mounting command: mount
Mounting arguments: 159.100.242.235:vol_7a312f6...
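A common way to narrow down this kind of FailedMount is to retry the same glusterfs mount by hand on the affected node with a debug log; the sketch below is only illustrative, and the mount point, log path and <VOLUME> name are placeholders rather than values from this report:
  mkdir -p /mnt/gluster-test
  mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/gluster-test.log \
      159.100.242.235:<VOLUME> /mnt/gluster-test
  tail -n 50 /tmp/gluster-test.log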
2020 Jan 07
4
[Bug 1396] New: When a rule with 3 concat elements is added, nft list shows only 2
...NEW
Severity: critical
Priority: P5
Component: nft
Assignee: pablo at netfilter.org
Reporter: sbezverk at cisco.com
table ip ipv4table {
map cluster-ip-services-set {
type inet_proto . ipv4_addr . inet_service : verdict
}
chain k8s-nat-mark-masq {
ip protocol . ip daddr vmap @cluster-ip-services-set
}
chain k8s-nat-do-mark-masq {
meta mark set 0x00004000 return
}
}
The command to add a rule to the k8s-nat-mark-masq chain is:
sudo nft add rule ipv4table k8s-nat-mark-masq ip protocol . ip daddr . th dpo...
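For comparison, a complete three-element concatenation lookup matching the map type above would normally look roughly like this; the th dport part is an assumption added to round out the example, not text from the report:
  sudo nft add rule ipv4table k8s-nat-mark-masq \
      ip protocol . ip daddr . th dport vmap @cluster-ip-services-set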
2017 Aug 10
2
Kubernetes v1.7.3 and GlusterFS Plugin
...ail.com>
> wrote:
>
>>
>> short copy from a kubectl describe pod...
>>
>> Events:
>> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>> --------- -------- ----- ---- ------------- -------- ------ -------
>> 5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>> FailedMount (combined from similar events): MountVolume.SetUp failed for
>> volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
>> failed: mount failed: exit status 1
>> Mounting command: mount
>> Mo...
2017 Aug 10
2
Kubernetes v1.7.3 and GlusterFS Plugin
Hi all,
I am testing K8s 1.7.3 together with GlusterFS and have some issues.
Is this correct?
- Kubernetes v1.7.3 ships with a GlusterFS plugin that depends on
GlusterFS 3.11
- GlusterFS 3.11 is not recommended for production, so 3.10 should be used
This actually means no K8s 1.7.x version can be used, right?
Or is there anything els...
2017 Aug 10
2
Kubernetes v1.7.3 and GlusterFS Plugin
...t;> short copy from a kubectl describe pod...
>>>>
>>>> Events:
>>>> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>>> --------- -------- ----- ---- ------------- -------- ------ -------
>>>> 5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>> FailedMount (combined from similar events): MountVolume.SetUp failed
>>>> for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
>>>> failed: mount failed: exit status 1
>>>> Mou...
2017 Aug 10
0
Kubernetes v1.7.3 and GlusterFS Plugin
...hristopher Schmidt <fakod666 at gmail.com>
wrote:
>
> short copy from a kubectl describe pod...
>
> Events:
> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
> --------- -------- ----- ---- ------------- -------- ------ -------
> 5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
> FailedMount (combined from similar events): MountVolume.SetUp failed for
> volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
> failed: mount failed: exit status 1
> Mounting command: mount
> Mounting arguments: 15...
2017 Aug 10
0
Kubernetes v1.7.3 and GlusterFS Plugin
...>>>
>>> short copy from a kubectl describe pod...
>>>
>>> Events:
>>> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>> --------- -------- ----- ---- ------------- -------- ------ -------
>>> 5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>> FailedMount (combined from similar events): MountVolume.SetUp failed
>>> for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs:
>>> mount failed: mount failed: exit status 1
>>> Mounting command: m...
2017 Aug 10
0
Kubernetes v1.7.3 and GlusterFS Plugin
Are you seeing an issue or an error message which says the auto_unmount option is not
valid?
Can you please let me know the issue you are seeing with 1.7.3?
--Humble
On Thu, Aug 10, 2017 at 3:33 PM, Christopher Schmidt <fakod666 at gmail.com>
wrote:
> Hi all,
>
> I am testing K8s 1.7.3 together with GlusterFS and have some issues.
>
> Is this correct?
> - Kubernetes v1.7.3 ships with a GlusterFS plugin that depends on
> GlusterFS 3.11
> - GlusterFS 3.11 is not recommended for production, so 3.10 should be used
>
> This actually means no K8s 1.7.x version...
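If the auto_unmount error mentioned above is the problem, a rough check on a node is to confirm the installed glusterfs client version and whether its mount helper recognises the option; this is only a sketch and assumes mount.glusterfs is a readable script under /sbin:
  glusterfs --version
  grep -q auto_unmount /sbin/mount.glusterfs \
      && echo "auto_unmount supported" || echo "auto_unmount NOT supported"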
2017 Aug 10
0
Kubernetes v1.7.3 and GlusterFS Plugin
...om a kubectl describe pod...
>>>>>
>>>>> Events:
>>>>> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>>>> --------- -------- ----- ---- ------------- -------- ------ -------
>>>>> 5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>>> FailedMount (combined from similar events): MountVolume.SetUp failed
>>>>> for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs:
>>>>> mount failed: mount failed: exit status 1
>>...
2017 Aug 10
1
Kubernetes v1.7.3 and GlusterFS Plugin
...e pod...
>>>>>>
>>>>>> Events:
>>>>>> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>>>>> --------- -------- ----- ---- ------------- -------- ------ -------
>>>>>> 5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>>>> FailedMount (combined from similar events): MountVolume.SetUp failed
>>>>>> for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
>>>>>> failed: mount failed: exit statu...
2019 Apr 24
9
Are linux distros redundant?
I just realised that I haven't touched a centos/redhat machine in more than
a couple of years. Everything I do now is Kubernetes based or using cloud
services (or k8s cloud services).
What about it listeroons? Is your fleet of CentOS boxes ever expanding or
are you just taking care of a single Java 6 JBoss application that takes
care of the company's widget stocks?
How are your jobs changing?
Cheers,
Andrew
2019 Apr 24
3
Are linux distros redundant?
> What OS are your k8s clusters running on? How about your cloud
> providers? Mine are on RHEL and CentOS.
>
I don't know. We use fully managed services from Google. I think it's CoreOS.
> --
> Jonathan Billings <billings at negate.org>
2017 Nov 08
2
glusterfs brick server use too high memory
Hi all,
I'm glad to join the glusterfs community.
I have a glusterfs cluster:
Nodes: 4
System: Centos7.1
Glusterfs: 3.8.9
Each Node:
CPU: 48 core
Mem: 128GB
Disk: 1*4T
There is one Distributed Replicated volume, with ~160 k8s pods as clients connecting to glusterfs. The memory usage of the glusterfsd process is too high, gradually increasing to 100G on every node.
I restarted the glusterfsd process, but the memory grows back within approximately a week.
How can I debug the problem?
Thanks.
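Before digging into heap internals, one simple first step is to track the resident size of the brick processes over time on each node; a minimal sketch:
  # sample glusterfsd memory (RSS/VSZ in KB) once an hour
  while true; do
      date
      ps -o pid,rss,vsz,cmd -C glusterfsd
      sleep 3600
  done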
2019 Apr 24
0
Are linux distros redundant?
On Wed, Apr 24, 2019 at 02:42:19PM +0200, Andrew Holway wrote:
> I just realised that I haven't touched a centos/redhat machine in more than
> a couple of years. Everything I do now is Kubernetes based or using cloud
> services (or k8s cloud services).
>
> What about it listeroons? Is your fleet of centos boxes ever expanding or
> are you just taking care of a single Java 6 JBoss application that takes
> care of the company's widget stocks?
What OS are your k8s clusters running on? How about your cloud
providers?...
2020 Feb 04
2
[Bug 1405] New: Possibly a bug in the libnftables deserializer. [invalid type]
...netfilter.org
Reporter: sbezverk at cisco.com
When I add an update rule for a map, the nft command does not fail but shows [invalid
type]
table ip kube-nfproxy-v4 {
map sticky-set-svc-M53CN2XYVUHRQ7UB {
type ipv4_addr : integer
size 65535
timeout 6m
}
chain k8s-nfproxy-sep-TMVEFT7EX55F4T62 {
update @sticky-set-svc-M53CN2XYVUHRQ7UB { ip saddr : 0x2 [invalid type]
}
}
}
Here is the command I use to add the update rule:
sudo nft add rule kube-nfproxy-v4 k8s-nfproxy-sep-TMVEFT7EX55F4T62 update
@sticky-set-svc-M53CN2XYVUHRQ7UB { ip saddr timeout 30s :...
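For reference, a fully written-out update statement of that shape would presumably look like the following; the 0x2 value is taken from the listing output above, not from the truncated command:
  sudo nft add rule kube-nfproxy-v4 k8s-nfproxy-sep-TMVEFT7EX55F4T62 \
      update @sticky-set-svc-M53CN2XYVUHRQ7UB { ip saddr timeout 30s : 0x2 }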
2019 Apr 24
5
Are linux distros redundant?
> Andrew Holway wrote:
>> I just realised that I haven't touched a centos/redhat machine in more
>> than a couple of years. Everything I do now is Kubernetes based or using
>> cloud services (or k8s cloud services).
>>
>> What about it listeroons? Is your fleet of centos boxes ever expanding
>> or
>> are you just taking care of a single Java 6 JBoss application that takes
>> care of the company's widget stocks?
>>
>> How are your jobs changi...
2017 Nov 09
0
glusterfs brick server use too high memory
...wrote:
> Hi all,
> I'm glad to join the glusterfs community.
>
> I have a glusterfs cluster:
> Nodes: 4
> System: Centos7.1
> Glusterfs: 3.8.9
> Each Node:
> CPU: 48 core
> Mem: 128GB
> Disk: 1*4T
>
> There is one Distributed Replicated volume, with ~160 k8s pods as
> clients connecting to glusterfs. The memory usage of the glusterfsd process
> is too high, gradually increasing to 100G on every node.
> I restarted the glusterfsd process, but the memory grows back within
> approximately a week.
> How can I debug the problem?
>
> Hi,
Please t...
2005 Jul 21
1
Install Problems Centos 4.1
Dear All,
I have an Asus K8S-MX Athlon 64 motherboard with a 754-pin 3000+ CPU, on
which I am trying to install 64-bit CentOS 4.1.
The problem seems to arise when installing onto mirrored disks: I have
noticed that from CentOS 4 onwards it tries to rebuild the arrays as it
installs, which slows the whole process right d...
2017 Nov 09
1
glusterfs brick server use too high memory
...mber 2017 at 17:16, Yao Guotao <yaoguo_tao at 163.com> wrote:
Hi all,
I'm glad to join the glusterfs community.
I have a glusterfs cluster:
Nodes: 4
System: Centos7.1
Glusterfs: 3.8.9
Each Node:
CPU: 48 core
Mem: 128GB
Disk: 1*4T
There is one Distributed Replicated volume, with ~160 k8s pods as clients connecting to glusterfs. The memory usage of the glusterfsd process is too high, gradually increasing to 100G on every node.
I restarted the glusterfsd process, but the memory grows back within approximately a week.
How can I debug the problem?
Hi,
Please take statedumps at intervals (a m...
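The statedump suggestion boils down to something like the following, repeated at intervals so allocation counts can be compared between dumps; the volume name is a placeholder and /var/run/gluster is the default dump directory:
  gluster volume statedump <VOLNAME>
  ls -lt /var/run/gluster/*.dump.*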
2017 Jul 27
2
gluster-heketi-kubernetes
Hi Talur,
I've successfully got Gluster deployed as a DaemonSet using k8s spec
file glusterfs-daemonset.json from
https://github.com/heketi/heketi/tree/master/extras/kubernetes
but then when I try deploying heketi using heketi-deployment.json spec
file, I end up with a CrashLoopBackOff pod.
# kubectl get pods
NAME READY STATUS...
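The usual next step for a CrashLoopBackOff pod is to pull its logs and events; a minimal sketch, with the pod name as a placeholder:
  kubectl describe pod <heketi-pod-name>
  kubectl logs <heketi-pod-name> --previous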