submitted 10 months ago by Dirk@lemmy.ml to c/selfhosted@lemmy.world

I'm currently researching the best method for running a static website from Docker.

The site consists of a single HTML file, a bunch of CSS files, and a few JS files. Nothing needs to be preprocessed server-side. The website uses JS to request some JSON files, though. Handling of the files is done via client-side JS; the server only needs to serve the files.

The website is intended to be used as a self-hosted web application and is quite niche, so there won't be much load or many concurrent users.

I boiled it down to the following options:

  1. BusyBox in a self-made Docker container, manually running httpd, or following "The smallest Docker image ..."
  2. php:latest (ignoring the fact that the built-in webserver is meant for development, not production)
  3. Nginx serving the files (but this)

For all of these variants I found information online. Of the options, I actually prefer the BusyBox route because it seems the cleanest with the least overhead (I just need to serve the files; the rest is done on the client).
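For the BusyBox route, a minimal Dockerfile could look something like this (a sketch; the image tag, paths, and port are my own assumptions, not from any particular guide):

```dockerfile
# Sketch: BusyBox httpd serving a static directory
FROM busybox:stable
# copy the site into the image
COPY ./site /www
EXPOSE 3000
# -f: stay in foreground, -p: port, -h: document root
CMD ["httpd", "-f", "-p", "3000", "-h", "/www"]
```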

Do you have any other ideas? How do you host static content?

[-] CameronDev@programming.dev 35 points 10 months ago

Just go nginx, anything else is faffing about. BusyBox may not be security-tested, so best to avoid it on the internet. PHP is pointless when it's a static site with no PHP. I'd avoid freenginx until it's clear that it is going to be supported. There is nothing wrong with stock nginx; the fork is largely political rather than technical.

[-] Dirk@lemmy.ml 4 points 10 months ago* (last edited 10 months ago)

PHP is pointless when it's a static site with no PHP

Absolutely, but it has a built-in webserver that can serve static files, too (I constantly use that in my dev environment).
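For reference, the built-in server being referred to is started like this (port and document root are placeholders):

```shell
# PHP's built-in development server; -t sets the document root
php -S 0.0.0.0:8080 -t /path/to/site
```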

But I guess you're mostly right about just using Nginx. I already have multiple containers running it, most of them just serving static files. But each container is roughly 50 megabytes compressed, just for Nginx alone.

[-] CameronDev@programming.dev 6 points 9 months ago

Having PHP installed is just unnecessary attack surface.

Are you really struggling for space such that 50 MB matters? An 8 GB USB stick can hold that 160 times over.

[-] Dirk@lemmy.ml 1 points 9 months ago

Having PHP installed is just unnecessary attack surface.

Yes! Especially running its built-in webserver outside your dev environment. They "advertise" doing so in their Docker package documentation, though. Every project without PHP is a good project. It's still an option, at least technically.

Are you really struggling for space such that 50 MB matters?

In a way, yes. I just want to optimize my stuff as much as possible: no unneeded tools, no overhead, a super clean environment, etc. Firing up another Nginx container just doesn't feel right anymore. (Even if it seems possible to manually "hack" file serving into NPM, which would make it a multi-use container serving various sites and proxying requests.)

The machine I use as a Docker host also has a pretty low-end CPU and a measly 4 gigabytes of RAM, so every resource not wasted is a good resource.

[-] CameronDev@programming.dev 1 points 9 months ago

RAM is not the same as storage; that 50 MB Docker image isn't going to require 50 MB of RAM to run. But don't let me hold you back from your crusade :D

[-] Dirk@lemmy.ml 1 points 9 months ago

Thanks for educating me on basic computer knowledge! 🤣

Applications need RAM, though. A full-fledged webserver with all the bells and whistles likely needs more RAM than a specialized single-binary static file server.

[-] CameronDev@programming.dev 1 points 9 months ago

Sorry, wasn't meant to be condescending; you just seem fixated on file size when it sounds like RAM (and/or CPU?) is what you really want to optimize for. I was just pointing out that they aren't necessarily correlated with Docker image size.

If you really want to cut down on CPU and RAM, and are okay with very limited functionality, you could probably write your own webserver to serve static files. Plain HTTP is not hard. But you'd want to steer clear of Python and Node, as they drag in the whole interpreter overhead.

[-] lemmyvore@feddit.nl 1 points 9 months ago

Absolutely, but it has a built-in webserver that can serve static files, too (I constantly use that in my dev environment).

How about Python? You can get an HTTP server going with just python3 -m http.server from the directory where the files are. Worth remembering, because Python is super common and probably already installed in many places (be it on the host or in containers).

[-] Dirk@lemmy.ml 1 points 9 months ago

I once built a router in Python, but it was annoying. As much as I like Python, I dislike coding in it. Just firing up a web server with it is no big deal, though.

I was even thinking of Node.js, but that comes with a whole different set of issues. It would allow for future server-side extensions of the project, though.

[-] lemmyvore@feddit.nl 1 points 9 months ago

What do you use for Node containers? I use an Alpine image where I install Node but I've been wondering if there's a better way.

[-] Dirk@lemmy.ml 1 points 9 months ago

It would be my first one. I'd likely go the Alpine route, too. It's offered as an option for the official Docker image.

https://hub.docker.com/_/node/tags?page=1&name=alpine

[-] marcos@lemmy.world 20 points 9 months ago

The answer is: get a minimal Linux image, add nginx or Apache, and put your content in the relevant place. (Basically, your third option.)

Do not worry about the future of nginx. Changing the web server on that image is the easiest thing in the world.

[-] sudneo@lemmy.world 14 points 9 months ago

I personally package the files in a scratch or distroless image and use https://github.com/static-web-server/static-web-server, which is a Rust server and quite tiny. This is very similar to nginx or httpd, but the static nature of the binary removes clutter, reduces the attack surface (because you can use smaller images), and reduces the size of the image.
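A sketch of that approach, assuming the project's published image and its documented SERVER_ROOT environment variable (tag and paths are illustrative):

```dockerfile
# Sketch: site files on top of the static-web-server image
FROM joseluisq/static-web-server:2
COPY ./site /public
ENV SERVER_ROOT=/public
```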

[-] Dirk@lemmy.ml 3 points 9 months ago

Thanks, this actually looks pretty great. From the description it's basically BusyBox httpd, but with the stability, production-readiness, and functionality of Nginx. It also seems to be actively developed.

[-] Swarfega@lemm.ee 12 points 10 months ago

I just use nginx in Docker. It runs on a Pi 4, so it needs to be lightweight. I'm sure there are lighter httpd servers to use, but it works for me. I also run Nginx Proxy Manager to create a reverse proxy and to manage the certificate renewals that come from Let's Encrypt.

[-] Strit@lemmy.linuxuserspace.show 3 points 10 months ago

Same here. I have a few static sites set up to be served via nginx.

[-] rglullis@communick.news 10 points 10 months ago

Caddy can serve the files and deal with SSL certificates in case you put this on a public domain.

[-] d_k_bo@feddit.de 4 points 10 months ago

Caddy is the way to go.

[-] Dirk@lemmy.ml 1 points 9 months ago

My setup already has Nginx Proxy Manager to handle SSL. This is specifically about serving files from within a docker container with as little overhead as possible.

[-] smileyhead@discuss.tchncs.de 1 points 9 months ago

Caddy and Nginx can host those files directly; there's no need to proxy to a container that would just be running another Nginx anyway.

[-] Dirk@lemmy.ml 1 points 9 months ago

I do more than that with my setup. This static site would be one of various different things.

[-] lemmyvore@feddit.nl 7 points 9 months ago

I see from your other comments that you're already running nginx in other containers. The simplest solution would be to make use of one of them. Zero overhead since you're not adding any new container. 🙂

You mentioned you're using NPM, well NPM already has a built-in nginx host that you can reach by making a proxy host pointed at http://127.0.0.1:80 and adding the following to the "Advanced" tab:

location / {
  root /data/nginx/local_static;
  index index.html;
}

Replace the root location with whatever directory you want, use a volume option on the NPM container to map the directory to the host, put your files in there, and that's it.
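The volume mapping could look like this in the NPM service definition (compose syntax; the host path is an example):

```yaml
# Excerpt from docker-compose.yml for the NPM container
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    volumes:
      - ./static:/data/nginx/local_static
```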

[-] shnizmuffin@lemmy.inbutts.lol 2 points 9 months ago

Clarity:

NPM (Nginx Proxy Manager) != npm (node package manager).

[-] jivandabeast@lemmy.browntown.dev 1 points 9 months ago

Wait, really? I use NPM and also have two sites running via a separate nginx container -- I feel so dumb now LMAO

[-] lemmyvore@feddit.nl 3 points 9 months ago* (last edited 9 months ago)

Yeah it's not exactly an obvious feature. I don't even remember how I stumbled onto it, I think I was looking at the /data dirs and noticed the default one.

I haven't tried using it for more than one site but I think that if you add multiple domain names to the same proxy host they go to the same server instance and you might be able to tweak the "Advanced" config to serve all of them as virtual hosts.

It's not necessarily a bad thing to have a separate nginx host. For example I have a PHP app that has its own nginx container because I want to keep all the containers for it in one place and not mix it up with NPM.

[-] arran4@aussie.zone 5 points 9 months ago* (last edited 9 months ago)

The BusyBox one seems great, as it comes with a shell. PHP looks like it would add some issues.

Personally, since I use Go, I would create a Go app with the files embedded, and build a deb, an rpm, and a Docker image from it using "goreleaser":

package main

import (
	"embed"
	"log"
	"net/http"
)

//go:embed static/*
var content embed.FS

func main() {
	// Serve index.html as the default page
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		data, err := content.ReadFile("static/index.html")
		if err != nil {
			http.Error(w, "not found", http.StatusNotFound)
			return
		}
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		w.Write(data)
	})

	// Serve the embedded files; the embed FS already contains the
	// "static/" prefix, so no StripPrefix is needed
	http.Handle("/static/", http.FileServer(http.FS(content)))

	// Start the server
	log.Fatal(http.ListenAndServe(":8080", nil))
}

That would be all the code, but it allows for expansion later. However, the image goreleaser builds doesn't come with BusyBox in it, so you can't docker exec into it. https://goreleaser.com/customization/docker/

Most of the other options, including the PHP one, seem to include a scripting language or a bunch of other system tools. I think that's overkill.
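If you build the image yourself rather than via goreleaser, a multi-stage Dockerfile for a static Go binary can be as small as this (Go version and paths are assumptions):

```dockerfile
# Sketch: multi-stage build producing a scratch image with only the binary
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

FROM scratch
COPY --from=build /server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```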

[-] sudneo@lemmy.world 2 points 9 months ago

I would consider the lack of a shell a benefit in this scenario. You really don't want the extra attack surface and tooling.

Considering you also manage the host, if you want to see what's going on inside the container (which for such a simple image is more likely something you do once, while building it the first time), you can use nsenter to spawn a bash process in the container's namespaces (e.g., nsenter -t PID -m -p [...] bash, or something like this - I am going by memory).
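A rough sketch of that, assuming a container named my-container (nsenter joins existing namespaces, whereas unshare creates new ones):

```shell
# Join a running container's network/PID/UTS namespaces from the host.
# Without -m, the shell binary is resolved on the host, so this also
# works for shell-less (distroless/scratch) images.
PID=$(docker inspect -f '{{.State.Pid}}' my-container)
sudo nsenter -t "$PID" -n -p -u sh
```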

[-] justcallmelarry@lemmy.dbzer0.com 4 points 9 months ago

I’ve always used an nginx alpine image and have been very happy with it.

Not sure how this fork business is turning out, and I have also heard conflicting opinions on whether to care or not…

If you do wish for something simple that is not nginx, I'm also very happy with Caddy, which can also handle SSL certificates for you if you plan to make it publicly reachable.

[-] possiblylinux127@lemmy.zip 4 points 9 months ago

If you're looking for a small size, you could build a custom image with Buildroot and lighttpd. It is way, way overkill, but it would be the smallest.

For something easier, use the latest image of your web server of choice and then pass through a directory with the files. From there you can automate patching with Watchtower.
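A sketch of the easier variant with lighttpd on Alpine (my assumption: the Alpine package's default config serves /var/www/localhost/htdocs):

```dockerfile
# Sketch: lighttpd on Alpine serving a mounted or copied site directory
FROM alpine:3.19
RUN apk add --no-cache lighttpd
COPY ./site /var/www/localhost/htdocs/
EXPOSE 80
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]
```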

[-] okamiueru@lemmy.world 3 points 9 months ago* (last edited 9 months ago)

The first thing you mention is such a fun and useful exercise. But as you point out, way overkill. It might even be dangerous to expose it. I got mine to 20 KB on top of BusyBox.

There is something that tickles the right spots when a complete container image is significantly smaller than the average JS payload of a "modern" website.

[-] Lodra@programming.dev 4 points 10 months ago* (last edited 10 months ago)

The simplest way is certainly to use a hosted service like GitHub Pages. These make it so easy to create static websites.

If you’re not flexible on that detail, then I next recommend Go actually. You could write a tiny web server and embed the static files into the app at build time. In the end, you’d have a single binary that acts as a web server and has your content. Super easy to dockerize.

Things like authentication will complicate the app over time. If you need extra features like this, then I recommend using common tools like nginx as suggested by others.

[-] summerof69@lemm.ee 3 points 9 months ago

Err, FROM webserver + COPY /path/to/content /path/to/server/directory? You don't even expect users; what's there to discuss?
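Spelled out with nginx as the example base image, the entire Dockerfile is:

```dockerfile
# The whole thing: official nginx image plus the site files
FROM nginx:alpine
COPY ./site /usr/share/nginx/html
```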

[-] CetaceanNeeded@lemmy.world 3 points 9 months ago

I just use nginx alpine, if freenginx proves to be the better option later it should be fairly trivial to switch the base image.

[-] Dirk@lemmy.ml 2 points 9 months ago

Yes, Freenginx should/would/will be a drop-in replacement, at least in the beginning. We'll see how this works out over time. Forks made purely out of frustration have never lived long enough to gain a user base and attract devs. But it's an "anti corporate bullshit" fork, and that alone puts it on my watchlist.

[-] ptman@sopuli.xyz 3 points 10 months ago

Forget about Docker. Run Caddy or some similar webserver that is a single file, placed next to the assets to serve.

[-] sudneo@lemmy.world 4 points 9 months ago

Containers are a perfectly suitable use case for serving static sites. You get isolation and versioning at the absolutely negligible cost of duplicating a binary (the webserver, which in the case of the one I linked in my comment is 5 MB). Also, you get autostart of the server if you use Compose, which is equivalent to what you would do with a systemd unit, I suppose.

You can then use a reverse-proxy to simply route to the different containers.

[-] jj4211@lemmy.world 2 points 9 months ago

But if you already have an nginx or other web server that is otherwise required to start up (which is in all likelihood the case), you don't need any more auto-startup; the reverse proxy that is already started can just serve it. I would say that container orchestration versioning can be helpful in some scenarios, but a simple git repository is way more useful for a static website, since it has the right tooling to annotate changes very specifically on demand.

That reverse proxy is ultimately also a static file server. There's really no value in spinning up more web servers for a strictly static site.

Folks have gone overboard assuming docker or similar should wrap every little thing. It sometimes adds complexity without making anything simpler. It can simplify some scenarios, but adding a static site to a webserver is not a scenario that enjoys any benefit.

[-] sudneo@lemmy.world 1 points 9 months ago

It really depends. If your setup is Docker-based (as OP's seems to be), adding something outside it is not a good solution. I am talking, for example, about Traefik or Caddy with the Docker plugin.

By versioning I meant that when you do a push to master, you can have a release which produces a new image. This makes it IMHO simpler than having just git and local files.

I really don't see the complexity added. I do gain isolation (sure, static sites have tiny attack surfaces), easy portability (if I want to move machines, it's one command), and neat organization (no local fs paths to manage, essentially), and the overhead is a three-line Dockerfile and a couple of MB needed to duplicate a webserver binary. Of course it is a matter of preference, but I don't see the cons, honestly.

[-] smileyhead@discuss.tchncs.de 1 points 9 months ago* (last edited 9 months ago)

Serving static app in Caddy:

sudo apt install caddy
sudo systemctl enable --now caddy

Then in /etc/caddy/Caddyfile:

example.com {
   root * /var/www/html
   file_server
}

That's all, really.

[-] sudneo@lemmy.world 1 points 9 months ago

If there is already another reverse proxy, doing this IMHO is worse than just running a container and adding one more rule in the proxy (if needed, with traefik it's not for example). I also build all my servers with IaC and a repeatable setup, so installing stuff manually breaks the model (I want to be able to migrate server with minimal manual action, as I had to do it already twice...).

The job is simple either way, I would say it mostly depends on which ecosystem someone is buying into and what secondary requirements one has.

[-] possiblylinux127@lemmy.zip 2 points 9 months ago

Why? It seems like docker is way more flexible and maintainable.

[-] jj4211@lemmy.world 1 points 9 months ago

Because serving static files doesn't really require any flexibility in web serving code.

If your setup has nginx or similar as the reverse-proxy entry point, you can just tell it to serve the directory. Why bother making an entire new chroot and proxy hop when you have absolutely zero requirements beyond what the reverse proxy already provides? Now, if you don't have that entry point, fine, but at least 99% of the time I see some web server acting as the initial arbiter into services that would have all the capability to just serve the files.
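A sketch of what "just tell it to serve the directory" looks like in the existing proxy's config (nginx syntax; domain and path are placeholders):

```nginx
# Serve the static site straight from the reverse proxy, no extra hop
server {
    listen 80;
    server_name static.example.com;
    root /srv/static-site;
    index index.html;
}
```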

[-] possiblylinux127@lemmy.zip 1 points 9 months ago
[-] smileyhead@discuss.tchncs.de 3 points 9 months ago

My brother in Christ, serving a file through HTTP is exactly what Tim Berners-Lee invented in 1989.

[-] jj4211@lemmy.world 2 points 9 months ago

For 90% of static site requirements, it scales fine. That entry-point reverse proxy is faster at fetching content via filesystem calls than at making an HTTP call to another HTTP service. For self-hosting types of applications, that percentage goes up to 99.9%.

If you are in a situation where serving the files directly through your reverse proxy does not scale, throwing more containers behind that proxy won't help in the static content scenario. You'll need something like a CDN, and those like to consume straight directory trees, not containers.

For a dynamic backend, maybe. Mainly because you might screw up, and your backend code needs to be isolated to mitigate security oopsies. It is also often useful for managing dependencies, but that facet is less useful for Go, where the resulting binary is pretty well self-contained except for maybe light usage of libc.

[-] Dirk@lemmy.ml 1 points 9 months ago

I already have a fully set up docker environment that serves all sorts of things (including some containers that serve special static content using Nginx).

[-] sockenklaus@sh.itjust.works 1 points 8 months ago

I've read that you're trying for minimal resource overhead.

Is lighttpd still a thing? Back in the day I used it to deliver very simple static HTTP pages with minimal resource usage.

I found a Docker image around 4 MB in size, but since it's two years old, I don't know how well maintained lighttpd is these days.

this post was submitted on 05 Mar 2024
51 points (91.8% liked)
