this post was submitted on 24 May 2024
394 points (98.0% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] flamingo_pinyata@sopuli.xyz 31 points 11 months ago (9 children)

Good luck connecting to each of the 36 pods and grepping the file over and over again

[–] whodatdair@lemmy.blahaj.zone 11 points 11 months ago* (last edited 11 months ago)

for X in $(seq -f host%02g 1 9); do echo "$X"; ssh -q "$X" "grep the shit"; done

:)

But yeah fair, I do actually use a big data stack for log monitoring and searching… it’s just way more usable haha

[–] Semi-Hemi-Demigod@kbin.social 9 points 11 months ago

Just write a bash script to loop over them.

[–] keyez@lemmy.world 8 points 11 months ago

You can run the logs command against a label so it will match all 36 pods
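
To make the shape of that concrete, a minimal sketch — the label app=myapp and the search term ERROR are invented here, and the kubectl function at the top is a stub so the pipeline runs without a cluster (delete it to run the identical line for real):

```shell
# Stub kubectl so this sketch runs anywhere -- remove the function to run
# the identical pipeline against a live cluster.
kubectl() { printf 'pod-01 ERROR disk full\npod-01 INFO ok\npod-02 ERROR disk full\n'; }

# -l matches every pod carrying the label; --prefix tags each line with
# its pod name, which matters when dozens of pods interleave output.
kubectl logs -l app=myapp --prefix | grep ERROR
```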

[–] NovaPrime@lemmy.ml 6 points 11 months ago

Stern has been around forever. You could also just use a shared label selector with kubectl logs and then grep from there. You make it sound difficult if not impossible, but it's not. Combine it with egrep and you can pretty much do anything you want right there on the CLI

[–] brokenlcd@feddit.it 5 points 11 months ago (1 children)

I don't know how k8s works, but if there's a way to execute just one command in a container and then exit out of it, like chroot, wouldn't it be possible to just use xargs with a list of the container names?

[–] zeluko@kbin.social 9 points 11 months ago

yeah, just use kubectl and pipe stuff around with bash to make it work, pretty easy
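
For instance, a dry-run sketch of the xargs idea — the pod names and log path are invented, and the leading echo prints each command instead of running it, so it works without a cluster (drop the echo to actually exec into each container):

```shell
# Fan one grep out over a list of pods with xargs.
# "echo" makes this a dry run; drop it to really run kubectl exec.
printf 'api-%02d\n' 1 2 3 |
  xargs -I{} echo kubectl exec {} -- grep -c ERROR /var/log/app.log
```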

[–] SeattleRain@lemmy.world 4 points 11 months ago

This is what I was thinking. And you can't really graph things out over time, which is really critical for a lot of workflows.

I get that Splunk and Elastic are unwieldy beasts that take way too much maintenance for what they provide for many orgs, but to think grep is a replacement is kinda crazy.

[–] marcos@lemmy.world 4 points 11 months ago* (last edited 11 months ago) (1 children)

Let me introduce you to syslogd.

But well, it's probably overkill, and you almost certainly just need to log on a shared volume.

[–] dan@upvote.au 1 points 11 months ago

Syslog isn't really overkill IMO. It's pretty easy to configure it to log to a remote server, and to split particular log types or sources into different files. It's a decent abstraction - your app that logs to syslog doesn't have to know where the logs are going.
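
As a sketch of that setup, two rsyslog.conf fragments — the hostname, program name, and file path are invented for illustration:

```
# client /etc/rsyslog.conf -- forward everything to a central host.
# "@@" forwards over TCP; a single "@" would use UDP.
*.*  @@loghost.example.com:514

# split one application's messages into its own file by program name.
:programname, isequal, "myapp"  /var/log/myapp.log
```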

[–] FrederikNJS@lemm.ee 4 points 11 months ago* (last edited 11 months ago)

Since you are talking about pods, you are obviously emitting all your logs on stdout and stderr, and you have of course also labeled your pods nicely, so grepping all 36 pods is as easy as kubectl logs -l <label-key>=<label-value> | grep <search-term>

[–] sunshine@lemmy.ml 2 points 11 months ago

That's why tmux has synchronize-panes!
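
For anyone who hasn't used it: with one pane ssh'd into each host, synchronize-panes mirrors your keystrokes to every pane at once (the option name is the real tmux one; the pane layout is up to you):

```
# inside a tmux session, one pane per host:
tmux set-window-option synchronize-panes on    # type once, grep everywhere
tmux set-window-option synchronize-panes off   # back to one pane at a time
```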