
cross-posted from: https://lemmy.ml/post/20234044

Do you know about Kubernetes debug containers? They're really useful for troubleshooting well-built, locked-down images running in your cluster. I was thinking it would be nice if k9s had this feature, and lo and behold, it has a plugin! I just had to add the plugin snippet (a sketch is below) to my ${HOME}/.config/k9s/plugins.yaml, run k9s, find the pod, press Enter to get to the pod's containers, select a container, and press Shift-D. The debug-container plugin uses the nicolaka/netshoot image, which ships with a bunch of useful networking and debugging tools. Easy debugging in k9s!
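
For reference, a plugin entry along these lines goes into ${HOME}/.config/k9s/plugins.yaml. This is a sketch based on the community debug plugin and the k9s plugin format; the exact snippet from the original post may differ:

```yaml
plugins:
  debug:
    shortCut: Shift-D
    description: Add debug container
    dangerous: true
    scopes:
      - containers
    command: bash
    background: false
    confirm: true
    args:
      - -c
      # $CONTEXT, $NAMESPACE, $POD, and $NAME are filled in by k9s from the
      # selected container; the netshoot image provides the debugging tools.
      - kubectl debug -it --context $CONTEXT -n $NAMESPACE $POD --target=$NAME --image=nicolaka/netshoot --share-processes -- bash
```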

submitted 8 months ago* (last edited 8 months ago) by Sheldan@programming.dev to c/kubernetes@programming.dev

I recently got recommended this project as a way to have a more Kubernetes-native CI/CD setup (I would mostly be interested in the CI part, as I already have Argo CD running). It seems very interesting, and development seems reasonably active. The only thing I am curious about (and why I made this post, besides making more people aware that the project exists) is how active the Tekton Hub (https://hub.tekton.dev/) is.

Maybe somebody here has some information on that. I am not using Tekton (yet), but I read in the documentation that the hub is supposed to be the place to get reusable components. Seeing the actual activity there turned me off from the project a little, because a lot of entries are at version 0.1 and were last updated one or two years ago. Maybe that impression only exists because I am not logged in, but it certainly looks odd.

So, do you have any experience with Tekton? How do you feel about it?
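
For anyone unfamiliar, the reusable components on the hub are ordinary Tekton resources. A minimal Task (a sketch against the tekton.dev/v1 API; the name, image, and step are illustrative) looks roughly like this:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello-world            # illustrative name
spec:
  steps:
    - name: echo
      image: alpine            # any small image with a shell
      script: |
        echo "Hello from Tekton"
```

Hub entries such as git-clone are meant to be pulled in rather than written by hand; the tkn CLI has a hub subcommand for that (e.g. `tkn hub install task git-clone`), which is exactly why the hub's activity level matters.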


One of the biggest problems of #Kubernetes is complexity. @thockin shared his insights on this in a #KubeCon keynote. I've seen it time and again with my users, as well as in our Logz.io DevOps Pulse yearly survey. Maintainers aren't the end users of @kubernetes, which doesn't help.


#KubeCon #ObservabilityDay? It’s time to talk about the unspoken challenges of #monitoring #Kubernetes: the bloat of metric data, the high churn rate of pod metrics, configuration complexity, and so much more. https://horovits.medium.com/f30c58722541
#observability #devops #SRE @kubernetes @linuxfoundation


It’s time to talk about the unspoken challenges of monitoring #Kubernetes: the bloat of metric data, the high churn rate of pod metrics, configuration complexity, and so much more.
https://horovits.medium.com/f30c58722541
#kubecon @kubernetes #k8s #monitoring #observability #devops #SRE @victoriametrics

submitted 1 year ago* (last edited 1 year ago) by z3r0_Geek@lemmy.zip to c/kubernetes@programming.dev

cross-posted from: https://lemmy.zip/post/3942293

We need to deploy a Kubernetes cluster at v1.27, because a particular feature gate we depend on was moved to beta and enabled by default in that version.

Is there any way to check which feature gates are enabled/disabled for a particular GKE or EKS cluster version without having to inspect the kubelet configuration on a deployed cluster node? I don't want to deploy a cluster just to check this.

I've checked both the GKE and EKS changelogs and docs, but I couldn't find a list of enabled/disabled feature gates.

Thanks in advance!
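
One partial workaround, assuming you can reach any cluster already running the target minor version (not necessarily the one you plan to deploy): recent Kubernetes releases expose a kubernetes_feature_enabled metric on control-plane components. Whether GKE/EKS pass it through unfiltered is an assumption here, so treat this as a sketch:

```sh
# Dump the API server's metrics and filter for per-feature-gate state.
# kubernetes_feature_enabled is a gauge (1 = enabled, 0 = disabled)
# exposed by control-plane components in recent Kubernetes versions.
kubectl get --raw /metrics | grep kubernetes_feature_enabled
```

Otherwise, the upstream per-version defaults are listed in the feature gates reference in the Kubernetes docs, which should reflect a managed cluster's defaults unless the provider overrides them.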


I installed K3s for some hobby projects over the weekend and, so far, I have been very impressed with it.

This got me thinking that it could be a nice, cheap alternative to setting up an EKS cluster on AWS -- something I found to be both expensive and painful for the availability we needed.

Is anybody using K3s in production? Is it OK under load? How have upgrades and compatibility been?
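
(For anyone who wants to try it, the quick start from the K3s docs is a one-liner; shown here as documented, with the usual caveat about reviewing scripts before piping them to sh:)

```sh
# Install K3s as a systemd service, per the K3s quick-start docs.
curl -sfL https://get.k3s.io | sh -

# Then verify the node came up:
sudo k3s kubectl get nodes
```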


Is anyone using the minio-operator? I'm hesitant because I can't find a lot of documentation on how to recover from cluster outages or partial disk failures.
