
As in, when I watch YouTube tutorials, I often see YouTubers with a small widget on their desktop giving them an overview of their RAM usage, security level, etc. What apps do you all use to track this?

[-] Theon@alien.top 2 points 11 months ago

Netdata. I've been meaning to look into Grafana, but it always seemed way too overcomplicated and heavy for my purposes. Maybe one day, though...

[-] Mother_Construction2@alien.top 2 points 11 months ago

I know that something needs a fix when my dad complains that he can't watch TV and the rolling door doesn't open in the morning.

[-] AstrologicalMob@alien.top 2 points 11 months ago

I currently use the classic "Huh, seems slow; check basic things like disk usage and process CPU/RAM usage, then do a reboot to fix it for now."

[-] dibu28@alien.top 1 points 11 months ago

Windows Server? )

[-] Nagashitw@alien.top 1 points 11 months ago

This is me. Can't hurt to just do a reboot

[-] HCharlesB@alien.top 2 points 11 months ago

Checkmk (Raw - free version.) Some setup aspects are a bit annoying (wants to monitor every last ZFS dataset and takes too long to 'ignore' them one by one.) It does alert me to things that could cause issues, like the boot partition almost full. I run it in a Docker container on my (primarily) file server.

[-] TheDeepTech@alien.top 1 points 11 months ago

I use this as well! Works well and has built-in intelligence for thresholds.

[-] Dizzybro@alien.top 2 points 11 months ago

The fastest way? Probably netdata

[-] SadanielsVD@alien.top 2 points 11 months ago

This. If you have multiple servers, you can also connect them all to a single UI where you can see all the info at once, with Netdata Cloud.

[-] Olleye@alien.top 1 points 11 months ago

I use PRTG; it's free for up to 100 sensors.

Best Monitoring tool ever ☝🏻🙂

[-] Do_TheEvolution@alien.top 1 points 11 months ago

Prometheus + Grafana + Loki

It is a bit difficult at the start, but in the end you can monitor and get notifications on anything that's happening on your system.

[-] Nasach@alien.top 1 points 11 months ago

I use Netdata for both dashboards and alerts. Works great and is easy to set up.

[-] opensrcdev@alien.top 1 points 11 months ago

InfluxDB metrics server and Telegraf agent to collect metrics

[-] bobbarker4444@alien.top 1 points 11 months ago

I just check the Proxmox dashboard every now and then. Honestly, if everything is working, I'm not too worried about exact RAM levels at any given moment.

[-] Pesfreak92@alien.top 1 points 11 months ago

Uptime Kuma and Grafana. Uptime Kuma to monitor whether a service is up and running, and Grafana to monitor the host: CPU, RAM, SSD usage, etc.

[-] Reasonable-Ladder300@alien.top 1 points 11 months ago

Same here. I also have some autoscaling mechanisms set up in Docker Swarm to scale certain services when the load is high.

[-] squadfi@alien.top 1 points 11 months ago

I personally use InfluxDB, Telegraf, and Grafana.

[-] LNDN91@alien.top 1 points 11 months ago

Rainmeter if it's directly on their desktop/background.

[-] talent_deprived@alien.top 1 points 11 months ago

I use sar for historical data, my own scripts running under cron on the hosts for specific things I'm interested in keeping an eye on, and my own scripts under cron on my monitoring machines for alerting me when something's wrong. I don't use a dashboard.
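A minimal sketch of that kind of cron-driven check, assuming a POSIX shell; the threshold and the commented-out mail command are placeholders:

```shell
#!/bin/sh
# Hypothetical cron alert: warn when root filesystem usage exceeds a threshold.
THRESHOLD=90
# df -P gives portable output; field 5 of line 2 is the use percentage.
USAGE=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
    echo "disk usage on / is ${USAGE}%"   # | mail -s "disk alert" you@example.com
fi
```

Drop it in a crontab entry (e.g. every 15 minutes) and only the alert case produces output, which is exactly what cron will mail you.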

[-] basicallybasshead@alien.top 1 points 11 months ago

Zabbix. Also, for Windows it could be Rainmeter https://www.rainmeter.net/ or HWiNFO https://www.hwinfo.com/. For Linux, Conky.

[-] weilah_@alien.top 1 points 11 months ago

Uptime Kuma for my services; Netdata + Prometheus + Grafana for server health (alerts and visualization).

[-] Majestic-Contract-42@alien.top 1 points 11 months ago

If one of my users ever complained about anything I would possibly look into it, otherwise it all works so I don't waste life energy on that.

[-] chuchodavids@alien.top 1 points 11 months ago

None. There is no need for a performance monitor for my home lab. I just have an alert if one of my main three services is down. That is all I need.

[-] thekrautboy@alien.top 1 points 11 months ago

Just to make sure: You are aware that a search option here exists, yes? And you keep refusing to use it for whatever reason?

[-] Cylian91460@alien.top 1 points 11 months ago

I use btop, I use arch btw

[-] dinosaurdynasty@alien.top 1 points 11 months ago

I don't find it valuable so I don't. (Maybe run top as needed.)

[-] ElevenNotes@alien.top 1 points 11 months ago

Netdata, monitoring a few thousand (virtual) servers that way.

[-] jln_brtn@alien.top 1 points 11 months ago

Nobody mentioned htop 🤔

[-] thekrautboy@alien.top 1 points 11 months ago

htop is a selfhosted service?

[-] The_Axelander@alien.top 1 points 11 months ago

I use checkmk with notifications to a telegram bot

[-] borouhin@alien.top 1 points 11 months ago

Alerts are much more important than fancy dashboards. You won't be staring at your dashboard 24/7 and you probably won't be staring at it when bad things happen.

Creating your alert set is not easy. Ideally, every problem you encounter should be preceded by a corresponding alert, and no alert should be a false positive (i.e. require no action). So if you either have a problem without being alerted by your monitoring, or get an alert that requires no action, you should sit down and think carefully about what should change in your alerts.

As for tools, I recommend Prometheus + Grafana. No need for a separate Alertmanager, as many guides recommend; recent versions of Grafana have excellent built-in alerting. Don't use the ready-made dashboards; start from scratch, since you need to understand PromQL to set everything up efficiently. Start with a simple dashboard (and alerts!) just for generic server health (node exporter), then add exporters for your specific services, network devices (SNMP), remote hosts (blackbox), SSL certs, etc. Then write your own exporters for whatever you haven't found :)
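As a sketch of what "start with node exporter" might look like, a minimal scrape config; the hostnames and ports are placeholders:

```yaml
# Hypothetical minimal prometheus.yml: scrape node_exporter on two hosts.
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["localhost:9100", "nas.local:9100"]
```

From there, a typical first PromQL expression for a memory panel is something like `100 * (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)`, which gives memory usage as a percentage per host.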

[-] Cylian91460@alien.top 1 points 11 months ago

Alerts are much more important than fancy dashboards.

It depends. If you have to install a lot of stuff or manage a lot of things, it's a good idea to have one; and if you mainly do maintenance and want something reliable, then yes, you should have alerts. For example, I don't have a lot of things installed and don't really care about reliability, so I do everything in the terminal. I use Arch, btw.

[-] AttitudeImportant585@alien.top 1 points 11 months ago

When you've got a lot of variables, especially when dealing with a distributed system, that importance leans the other way. Visualization and analytics are practically required to debug and tune large systems.

[-] atheken@alien.top 1 points 11 months ago

One thing about using Prometheus alerting is that it’s one less link in the chain that can break, and you can also keep your alerting configs in source control. So it’s a little less “click-ops,” but easier to reproduce if you need to rebuild it at a later date.
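A hypothetical example of the kind of source-controlled alerting config this refers to, in Prometheus rule-file syntax; the rule name and threshold are made up:

```yaml
# alerts/host-health.yml - kept in git alongside prometheus.yml
groups:
  - name: host-health
    rules:
      - alert: RootDiskAlmostFull
        expr: node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} < 0.10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Root filesystem has less than 10% free"
```

Because the whole rule set is a plain file, rebuilding the monitoring host later is a `git clone` plus a config reload rather than re-clicking through a UI.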

[-] dom9301k@alien.top 1 points 11 months ago

Prometheus + Grafana, the same I use at my job.

[-] krysinello@alien.top 1 points 11 months ago

Grafana. I get data with node exporter and cAdvisor, with some other containers providing additional metrics.

I have alerts set up, and they just ping me on a Discord server I set up: high CPU and temps, low disk space, memory, things like that. I mostly get high CPU or temp alerts, and that's usually when Plex does its automated things at 4 a.m.

[-] xardoniak@alien.top 1 points 11 months ago

I use Uptime Kuma to monitor particular services and NetData for server performance. I then pipe the alerts through to Pushover

[-] trisanachandler@alien.top 1 points 11 months ago

Honestly, my load is so light I don't bother monitoring performance. Uptime Kuma for uptime; I used to use PRTG and Uptime Robot when I ran a heavier stack, before I switched to an all-Docker workload.

[-] BloodyIron@alien.top 1 points 11 months ago

LibreNMS is the tool I use, and it connects to systems primarily via SNMP (use v3, do not use v1 or v2c).
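For reference, the SNMPv3 side of that setup boils down to a read-only v3 user on each monitored host. A hypothetical net-snmp `snmpd.conf` fragment, where the user name and passphrases are placeholders:

```
# Hypothetical snmpd.conf fragment: SNMPv3 read-only user at the
# authPriv security level (authentication and encryption required).
createUser librenms SHA "auth-pass-here" AES "priv-pass-here"
rouser librenms authpriv
```

LibreNMS is then pointed at the host with the same user, auth, and privacy credentials.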

[-] gold76@alien.top 1 points 11 months ago

Influx/Telegraf/Grafana stack. I have all three on one server, and then I put just Telegraf on the others to send data into InfluxDB. Works great for monitoring things like usage. You can also bring in sysstat.

I have some custom apps as well, where each time they run I record the execution time and peak memory in a database. This lets me go back over time and see where something improved or got worse. I can get a timestamp and go look at Gitea commits to see what I was messing with.
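A minimal sketch of that record-each-run idea, assuming Python on Linux and SQLite (the table name and schema are made up; the original uses its own database):

```python
import sqlite3
import time
import resource

def record_run(db_path, job_name, fn):
    """Run fn, then store elapsed seconds and peak RSS of this process
    (ru_maxrss is KiB on Linux) in a small SQLite table."""
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS runs (ts REAL, job TEXT, seconds REAL, peak_kib INTEGER)"
    )
    conn.execute(
        "INSERT INTO runs VALUES (?, ?, ?, ?)",
        (time.time(), job_name, elapsed, peak_kib),
    )
    conn.commit()
    conn.close()
    return elapsed, peak_kib
```

Querying the `runs` table over time gives exactly the "did this commit make it slower?" view described above.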

[-] speculatrix@alien.top 1 points 11 months ago

I use Zabbix. Runs fine in a relatively small VM. Easy to write plugins.

[-] how_now_brown_cow@alien.top 1 points 11 months ago

TICK stack is the only answer

[-] djbon2112@alien.top 2 points 11 months ago

I second CMK.

A TICK stack is unwieldy, Grafana takes a lot of setup, and all of this assumes you both know what to monitor and get stats on it.

CMK, by contrast, is plug and play. Install the server on a VM or host, install the agent on your other systems, and you're good to go.

[-] damn_the_bad_luck@alien.top 1 points 11 months ago

When the fan gets loud enough to hear, I'll check it :P

[-] JoeB-@alien.top 1 points 11 months ago

I use Telegraf + InfluxDB + Grafana for monitoring my home network and systems. Grafana has a learning curve for building panels and dashboards, but is incredibly flexible. I use it for more than server performance. I have a dual-monitor "kiosk" (old Mac mini) in my office displaying two Grafana dashboards. These are:

Network/Power/Storage showing:

  • firewall block events & sources for last 12 hrs (from pfSense via Elasticsearch),
  • current UPS statuses and power usage for last 12 hrs (Telegraf apcupsd plugin -> InfluxDB),
  • WAN traffic for last 12 hrs ( from pfSense via Telegraf -> InfluxDB),
  • current DHCP clients (custom Python script -> MySQL), and
  • current drive and RAID pool health (custom Python scripts -> MySQL)

Server sensors and performance showing:

  • current status of important cron jobs (using Healthchecks -> Prometheus),
  • current server CPU usage and temps, and memory usage (Telegraf -> InfluxDB)
  • server host CPU usage and temps, and memory usage for last 3 hrs (Telegraf -> InfluxDB)
  • Proxmox VM CPU and memory usage for last 3 hrs (Proxmox -> InfluxDB)
  • Docker container CPU and memory usage for last 3 hrs (Telegraf Docker plugin -> InfluxDB)

Netdata works really well for system performance for Linux and can be installed from the default repositories of major distributions.
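A minimal sketch of the Telegraf side of a pipeline like the one above, assuming InfluxDB 1.x; the URL and database name are placeholders:

```toml
# Hypothetical minimal telegraf.conf: collect CPU and memory stats
# locally and ship them to a central InfluxDB.
[[inputs.cpu]]
[[inputs.mem]]

[[outputs.influxdb]]
  urls = ["http://influx.local:8086"]
  database = "telegraf"
```

The same config goes on every monitored host, and Grafana then queries the one InfluxDB for all of them.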

this post was submitted on 22 Oct 2023
3 points (100.0% liked)

Self-Hosted Main

502 readers

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.

founded 1 year ago