1
7
submitted 1 year ago by zephyrvs@lemmy.ml to c/kubernetes@lemmy.ml

I've been working with Kubernetes since 2015. I've wrangled handcrafted manifests, including near-duplicate manifests for staging/production environments, played around with tools like Cue, built lots of glue (mostly shell scripts) to automate manifest handling and generation, and I also enjoy parts of Kustomize. When Helm first appeared, it seemed like a terrible hack, especially since it came with the Tiller dependency to handle Helm-managed state changes inside the clusters. And while they dropped Tiller (thankfully), I still haven't made my peace with Helm.
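As a point of comparison, the staging/production duplication is exactly what Kustomize overlays address: a shared base plus a small per-environment patch. A minimal sketch, with illustrative paths and resource names:

```yaml
# base/kustomization.yaml -- shared manifests for all environments
resources:
  - deployment.yaml
  - service.yaml

# overlays/staging/kustomization.yaml -- staging-only differences
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: my-app        # illustrative name, not from the post
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 1
```

Rendered with `kubectl kustomize overlays/staging`, so the base is written once and each environment only declares its deltas.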

Go templating is awful to read, a lot of Helm charts don't really work out of the box, charts can be fed values that aren't shown via helm show values ./chart, debugging a HelmChart $namespace/$release-$chartname is not ready error involves going over multiple logs spread across different parts of the cluster, and I could go on and on. And yet, almost every project that goes beyond offering a Dockerfile + docker-compose.yaml just releases a Helm chart for their app.
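The readability complaint is easy to illustrate. A typical chart template interleaves YAML structure with Go template control flow and whitespace-trimming directives (this fragment is hypothetical, not from any particular chart):

```yaml
# templates/deployment.yaml -- hypothetical chart fragment
{{- if .Values.autoscaling.enabled }}
  replicas: {{ .Values.autoscaling.minReplicas | default 1 }}
{{- else }}
  replicas: {{ .Values.replicaCount }}
{{- end }}
{{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 4 }}
{{- end }}
```

Indentation here belongs to two languages at once: the YAML that will be emitted and the template that emits it, which is a large part of why chart templates are hard to scan.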

Am I the only one who is annoyed by Helm? Have I been using it wrong? Is there something I've been missing?

In case you're a Helm maintainer: Please don't take it personally, my issue is not with the people behind Helm!

2
5
submitted 1 year ago by androidul@lemmy.ml to c/kubernetes@lemmy.ml
3
5
submitted 1 year ago by lemmyng@beehaw.org to c/kubernetes@lemmy.ml

The KBOM project provides an initial specification in JSON and has been constructed for extensibility across various cloud service providers (CSPs) as well as DIY Kubernetes.

4
1
submitted 1 year ago* (last edited 1 year ago) by jlsalvador@lemmy.ml to c/kubernetes@lemmy.ml

Hello world!

I want to release my custom immutable, rolling-release, extremely simple Linux distribution for Kubernetes deployments to the internet.

I've been using this distribution for about the last 6 years in production environments (it's currently used by a few startups and two countries' public services). I really think it could be stable enough to be publicly released before 2024.

I'm asking for advice before the public release, on things like licensing, community building, etc.

A few specs about the distribution:

  • Rolling release. Just one file (currently less than ~40 MB) that can boot from BIOS or UEFI (+ Secure Boot) environments. You can replace this file with the next release, or use the included toolkit to upgrade the distribution (reboot/kexec into it). Distribution releases are mostly automated, triggered by each third-party release (Linux, Systemd, Containerd, KubeAdm, etc.).

  • HTTP setup. The initial setup can be configured with a YAML file placed anywhere on a FAT32 partition, or through a local website installer. You can install the distribution or configure KubeAdm (control plane & worker) from the terminal or the local website.

  • Simple, KISS. Everything must be simple for the user; this is the most important aspect of the distribution. Just upstream software to run a production-ready Kubernetes cluster.

  • Not money-driven. This distribution must be public, and it must allow anyone to fork it at any time.
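The YAML setup file described above might look something like this; every field name here is a guess for illustration, not the distribution's actual configuration schema:

```yaml
# setup.yaml -- hypothetical example; keys are illustrative,
# not the distribution's real schema
hostname: node-1
network:
  dhcp: true
kubeadm:
  role: control-plane   # or: worker
  join-address: 192.0.2.10:6443
```

The appeal of this approach is that provisioning a node reduces to copying one bootable file plus one small declarative file onto a FAT32 partition.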

A bit of background:

I was using CoreOS before Red Hat bought them. I liked the immutable design and A/B release model of the CoreOS distribution. After the Red Hat acquisition, the distribution became bloated, so I switched to my own distribution, built with Buildroot. A few years later, I set up the most basic framework to create a Kubernetes cluster without any headache. It's mostly automated (bots check for new third-party releases such as Linux, Systemd, Containerd, and KubeAdm, then build, test & sign each release). I knew that maintaining a distribution by hand would be too expensive, so I programmed a few bots to do that job for me. Nowadays, I only improve the toolkits and approve the Git requests from those bots.

Thank you for your time!

5
1
submitted 1 year ago by DaEagle@lemmy.ml to c/kubernetes@lemmy.ml

Looking for the best way to learn Kubernetes, given that I have plenty of years of engineering experience (Java, Python) and solid experience with AWS.

Any format works - paid/free courses, working through articles, getting-started guides, etc.

6
2
Understanding Kubernetes Pods (routerhan.medium.com)
submitted 1 year ago by ccunix@lemmy.ml to c/kubernetes@lemmy.ml

For the benefit of anyone who needs to go back to the basics. Certainly a need I sense in the Kubernetes community around me.
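For anyone going back to basics alongside the linked article, the smallest useful Pod definition is only a few lines (the name and image below are illustrative):

```yaml
# pod.yaml -- minimal single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```

Applied with `kubectl apply -f pod.yaml`; everything else in Kubernetes (Deployments, ReplicaSets, Jobs) ultimately manages Pods shaped like this one.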

7
3

Tried it out over the past couple of days to manage k8s volumes and backups on S3, and it works surprisingly well out of the box. Context: k3s running on multiple Raspberry Pis.

8
1
submitted 2 years ago by logrus1@lemmy.ml to c/kubernetes@lemmy.ml

CNCF has posted the playlist of all the talks from the 2022 conference in Detroit.

9
2
submitted 2 years ago by strubbl@lemmy.ml to c/kubernetes@lemmy.ml
10
3
submitted 3 years ago by RawHawk@lemmy.ml to c/kubernetes@lemmy.ml

Kubernetes Ingress Controllers Compared. Warning: the link takes you to a Google Docs document.
