Hacker News | Kenny_007's comments

Looks interesting. My homelab slowly turned into a mix of Portainer, btop and a couple of scripts as well. Some stuff runs on a small box at home, but I keep a few services on a tiny VPS (cube-host) so they stay reachable if my home internet goes down. Curious how heavy this gets if you monitor several Docker hosts.

Pretty lightweight: it SSHes into each box and runs homebutler locally there, so the overhead is basically one SSH connection plus a quick read of /proc and the Docker socket per check. There's no background daemon sitting there polling; it only runs when you ask or when `alerts --watch` fires.

I run it across 3 machines (Mac Mini, Pi 5, and another box) and haven't noticed any impact. The binary itself is ~15MB and idles at zero CPU since it's not a long-running service.
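The agentless pattern described above (one SSH connection, a quick read of /proc and the Docker socket, nothing resident on the remote host) can be sketched roughly like this. The host name, remote command, and helper names are illustrative, not homebutler's actual interface:

```python
import subprocess

# Illustrative remote command: grab the load average and a one-line
# summary per container in a single SSH round trip.
REMOTE_CMD = "cat /proc/loadavg; docker ps --format '{{.Names}} {{.Status}}'"

def parse_loadavg(line):
    """Return the 1-minute load average from a /proc/loadavg line."""
    return float(line.split()[0])

def check_host(host):
    """Run one ad-hoc check over SSH; nothing keeps running remotely."""
    out = subprocess.run(
        ["ssh", host, REMOTE_CMD],
        capture_output=True, text=True, timeout=10, check=True,
    ).stdout.splitlines()
    return {"load1": parse_loadavg(out[0]), "containers": out[1:]}
```

Because each check is a fresh short-lived process, idle cost on the monitored hosts really is zero between runs.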


I ended up somewhere in the middle. A few things run on a small machine at home, but anything that needs to stay reachable lives on a cheap VPS. Mostly just a couple of Docker containers and some basic monitoring.

One thing that rarely shows up in these discussions is how uneven the data center world actually is.

Hyperscale AI facilities and a typical colocation or regional cloud node don’t behave the same in terms of power, water or local impact, but they often get grouped together in policy debates.

From an ops perspective, most smaller sites spend years optimizing cooling efficiency, power density and redundancy because energy costs hit margins directly. The giant AI campuses operate on a completely different economic model, where scale and speed matter more than local efficiency.

Feels like a lot of community backlash is really about hyperscale growth being treated as “just another data center”, when it’s closer to heavy industry in terms of resource planning.


I’ve seen a few projects try to leave GitHub completely and run everything self-hosted. The control is nice, but the contributor drop usually happens faster than expected.

Most people won’t create another account just to open an issue or send a small fix. Even devs who agree with the idea still default to whatever is easiest.

Keeping a mirror on GitHub seems to be what many teams end up doing. Not perfect, but it keeps the door open for contributors while you keep your own setup running.


Not necessarily.

We’ve seen 403 spikes even when the edge was mostly fine but parts of the control plane were degraded. During incidents, the dashboard, auth or analytics layers can fail independently of actual traffic handling.

If external checks from different networks still show normal response times, it’s usually more of a panel/API issue than a full outage.
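A quick way to make that distinction concrete is to probe the data path (the actual site) and the control plane (dashboard/API) separately and compare. This is a minimal sketch with placeholder URLs and a hypothetical `classify` helper, not any provider's real health check:

```python
import urllib.request
import urllib.error

def probe(url, timeout=5.0):
    """Return the HTTP status code, or None if the request failed outright."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
    except Exception:
        return None

def classify(edge_status, panel_status):
    """Rough triage: is it the edge, the control plane, or neither?"""
    if edge_status == 200 and panel_status not in (200, None):
        return "control-plane degraded, traffic likely fine"
    if edge_status != 200:
        return "edge affected"
    return "all healthy"

# Example (URLs are placeholders for your site and your provider's panel):
# classify(probe("https://example.com"), probe("https://dash.example.net"))
```

Running the probes from a couple of different networks, as mentioned above, guards against the check itself sitting behind the affected path.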


For us the main shift was accepting that “being able to work locally” and “knowing whether users are affected” are two different problems.

Local dev usually survives outages just fine. What hurts is losing external signals and assuming things are okay when they’re not.

After a few incidents like this, we stopped relying on a single monitoring setup. One self-hosted probe plus at least one fully independent external check reduced blind spots a lot. It doesn’t prevent outages, but it avoids flying blind during them.
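The "self-hosted probe plus one independent external check" idea boils down to only trusting a verdict when both signals agree, and treating disagreement as a monitoring problem rather than a confirmed outage. A small sketch of that logic (the boolean inputs stand in for whatever checks you actually run):

```python
def assess(self_hosted_ok, external_ok):
    """Combine two independent probe results into a single verdict."""
    if self_hosted_ok and external_ok:
        return "up"
    if not self_hosted_ok and not external_ok:
        return "down"
    # One signal disagrees: could be a network split, a dead probe,
    # or a partial outage. Either way, go look before paging anyone.
    return "degraded-or-monitoring-issue"
```

The useful property is that a single broken monitor can no longer silently report "all good" while users are affected.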


