Private Analytics With Umami, Docker Compose, and Ansible
I wanted first-party analytics on my blog without handing traffic data to a SaaS vendor. Umami checked every box: open source, self-hostable, and friendly to privacy. I already keep a small VPS online 24/7, so dedicating a slice of that machine to Umami felt like a perfect fit.
Once I shut off the usual trackers, analytics became a blind spot. I needed something that was:
Self-hosted so the data never leaves my infrastructure.
Lightweight enough to run on the same box as the rest of my services.
Friendly to my workflow — ideally managed by Ansible like everything else.
Umami ships as a simple Node app that stores data in Postgres. The official docs make it easy to run locally or in the cloud, but I wanted a repeatable, production-ready setup that I could test on my Mac and deploy with Ansible in one go.
If you haven’t met it yet, Umami is an open-source analytics platform that mirrors the basics of Google Analytics without the bloat. It’s a Node application with a Postgres backend, emits a tiny <script> snippet for your sites, and exposes a slick dashboard to explore the data. No third-party cookies, no hidden trackers — just a straightforward way to see who’s visiting.
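The snippet itself is one line per site; with a self-hosted instance it points at your own domain (both the domain and the website ID below are placeholders):

```html
<script defer src="https://analytics.example.com/script.js" data-website-id="your-website-id"></script>
```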
I wrapped everything in ansible-role-umami so I can reuse it across machines. As of commit f31f9b9a1c71039311a71ece3c8c8162de84316c, the compose template boils down to two services, Postgres and the Umami app, wired together with health checks and a loopback-only port binding.
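A minimal sketch of that shape, assuming Jinja2 variables named `umami_db_password` and `umami_app_secret` (service names, variable names, and image tags here are illustrative, not copied from that commit):

```yaml
# templates/docker-compose.yml.j2 — illustrative sketch, not the file at that commit
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"          # loopback only; never exposed directly
    environment:
      DATABASE_TYPE: postgresql
      DATABASE_URL: "postgresql://umami:{{ umami_db_password }}@db:5432/umami"
      APP_SECRET: "{{ umami_app_secret }}"
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:3000/api/heartbeat || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 5

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: "{{ umami_db_password }}"
    volumes:
      - umami-db-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U umami -d umami"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  umami-db-data:
```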
The main task file ties it all together: Ansible generates strong secrets on the controller (so they persist between runs), templates both .env and docker-compose.yml, then runs docker compose up -d through the community.docker collection's compose module.
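A condensed sketch of those tasks; the password lookup writes each secret to a file beside the playbook on the controller, which is what keeps it stable across runs (paths, variable names, and defaults here are assumptions, not the role's literal contents):

```yaml
# tasks/main.yml — sketch of the role's flow
- name: Generate or reuse secrets on the controller
  ansible.builtin.set_fact:
    umami_db_password: "{{ lookup('ansible.builtin.password', 'credentials/umami_db_password chars=ascii_letters,digits length=32') }}"
    umami_app_secret: "{{ lookup('ansible.builtin.password', 'credentials/umami_app_secret chars=ascii_letters,digits length=64') }}"

- name: Create the deployment directory
  ansible.builtin.file:
    path: "{{ umami_dir }}"
    state: directory
    mode: "0750"

- name: Render the env file and the compose file
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "{{ umami_dir }}/{{ item }}"
    mode: "0640"
  loop:
    - .env
    - docker-compose.yml

- name: Pull images and bring the stack up
  community.docker.docker_compose_v2:
    project_src: "{{ umami_dir }}"
    pull: always
    state: present
```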
If you publish a port without thinking, Docker quietly bypasses UFW because it manages its own iptables chains. That means even a server with a “deny incoming” policy can leak an app to the public internet if you bind it to 0.0.0.0.
When you run a container and publish a port (e.g. -p 3000:3000), Docker modifies iptables directly — not through ufw.
Those rules sit ahead of the chains ufw manages, so they are evaluated first and ufw never gets a say.
So a simple docker run -p 3000:3000 umami exposes port 3000 on all interfaces (0.0.0.0) even if ufw is active.
Binding to 127.0.0.1 inside the compose file keeps the dashboard completely private until I put a reverse proxy (or Tailscale) in front of it.
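In compose terms, the whole mitigation is one prefix on the port mapping:

```yaml
services:
  umami:
    ports:
      # - "3000:3000"             # publishes on 0.0.0.0 and sidesteps ufw
      - "127.0.0.1:3000:3000"     # loopback only; reachable just through Nginx or Tailscale
```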
The public web still needs /script.js and /api/send, so I carved out an Nginx site that only exposes those endpoints while keeping the full dashboard locked to my allowlist:
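Roughly, the site looks like this; the domain, certificate paths, and allowlist entry below are placeholders for whatever your setup uses:

```nginx
# /etc/nginx/sites-available/analytics — sketch of the split exposure
server {
    listen 443 ssl;
    server_name analytics.example.com;        # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/analytics.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/analytics.example.com/privkey.pem;

    # Public: the tracking script every page loads.
    location = /script.js {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Public: the beacon endpoint the script posts events to.
    location = /api/send {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Everything else (dashboard, login, admin API) stays allowlisted.
    location / {
        allow 203.0.113.10;                    # placeholder allowlist entry
        deny  all;
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```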
Ansible renders .env + docker-compose.yml, generates secrets, and runs docker compose up -d.
Docker Compose brings up Postgres + Umami, health checks everything, and binds the UI to loopback.
Tailscale Serve publishes the dashboard to my tailnet so I can check analytics from anywhere, even on mobile (the one-liner is sketched after this list).
Nginx proxies just the beacon endpoints to the public internet while keeping the rest locked down.
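The tailnet half is a single command on recent Tailscale releases (older ones use a longer serve syntax), pointed at the loopback port from the compose file:

```sh
# Publish the loopback-bound dashboard at this machine's tailnet HTTPS address.
# --bg keeps the serve configuration active after the shell exits.
tailscale serve --bg 3000
```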
Because the role also runs locally, I can clone the repo, fire up Colima, and test the exact stack on my Mac before pushing changes upstream. When updates drop, ansible-playbook main.yml --tags umami pulls the new image and restarts the stack cleanly.
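On the Mac, the loop is short; how the playbook reaches localhost depends on your inventory, so treat this as a sketch:

```sh
colima start                              # Docker runtime for the Mac
ansible-playbook main.yml --tags umami    # same command handles later updates
```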
This setup sounds complex on paper, but it condenses into a repeatable Ansible run:
Compose keeps the host tidy and the deployment predictable.
Tailscale and Nginx add just enough routing to stay private-by-default.
Secrets never leave my control, and backing the whole thing out is one docker compose down away.
If you’re already automating servers with Ansible, steal the role, tune the defaults, and try the workflow on a Colima sandbox first. When you’re ready to go live, aim the playbook at your server and enjoy Umami’s dashboard — privately. Then take a detour through the related posts on Colima, Tailscale, and debugging Umami to see how the rest of the puzzle pieces fit together.