Nicholas Clooney

Private Analytics With Umami, Docker Compose, and Ansible

I wanted first-party analytics on my blog without handing traffic data to a SaaS vendor. Umami checked every box: open source, self-hostable, and privacy-friendly. I already keep a small VPS online 24/7, so dedicating a slice of that machine to Umami felt like a perfect fit.


Why Umami (and Why Now)

Analytics turned into a blind spot once I shut off the usual trackers. I needed something:

  • Self-hosted so the data never leaves my infrastructure.
  • Lightweight enough to run on the same box as the rest of my services.
  • Friendly to my workflow — ideally managed by Ansible like everything else.

Umami ships as a simple Node app that stores data in Postgres. The official docs make it easy to run it locally or in the cloud, but I wanted a repeatable, production-ready setup that I could test on my Mac and deploy with Ansible in one go.


A Quick Umami Primer

If you haven’t met it yet, Umami is an open-source analytics platform that mirrors the basics of Google Analytics without the bloat. It’s a Node application with a Postgres backend, emits a tiny <script> snippet for your sites, and exposes a slick dashboard to explore the data. No third-party cookies, no hidden trackers — just a straightforward way to see who’s visiting.
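Wiring a site up means dropping that snippet into the page head. A minimal example looks like the following; the hostname and data-website-id are placeholders, and the dashboard hands you the real values when you register a site:

    <script
      defer
      src="https://analytics.example.com/script.js"
      data-website-id="00000000-0000-0000-0000-000000000000"
    ></script>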


Deployment Paths I Considered

There are three obvious ways to stand Umami up:

  1. Install Node, pnpm, and PM2 directly on the server.
  2. Run the app in a single Docker container and manage Postgres separately.
  3. Use Docker Compose to define both services and their relationship.

Option 3 won instantly. Compose gives me:

  • Local parity. I can spin the stack up on macOS using Colima, just like in my Docker-on-macOS post.
  • A reproducible bundle. The compose file describes the exact images, health checks, and volumes needed — perfect for infra-as-code.
  • No host pollution. The VPS stays a clean Docker box with zero lingering Node/npm/PM2 packages.
  • Expressed dependencies. Compose orchestrates Postgres + Umami and waits for the DB’s health check before starting the app.
  • Strict networking. Services talk over a private bridge network. Only the ports I publish make it to the host.
  • Ansible-friendly automation. Ansible can drop the compose file, render an .env, and run docker compose up -d in one role.



The Ansible Role at the Core

I wrapped everything in ansible-role-umami so I can reuse it across machines. At commit f31f9b9a1c71039311a71ece3c8c8162de84316c, the compose template looks like this:

    services:
      db:
        image: {{ umami_postgres_image }}
        restart: unless-stopped
        environment:
          POSTGRES_DB: ${POSTGRES_DB}
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
          TZ: ${TZ}
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U \"$${POSTGRES_USER}\" -d \"$${POSTGRES_DB}\""]
          interval: 5s
          timeout: 5s
          retries: 10
        volumes:
          - umami-db-data:/var/lib/postgresql/data
        networks:
          - umami-net
        logging:
          driver: json-file
          options:
            max-size: "10m"
            max-file: "3"

      umami:
        image: {{ umami_image }}
        restart: unless-stopped
        depends_on:
          db:
            condition: service_healthy
        ports:
          - "{{ umami_bind_address }}:${UMAMI_PORT:-{{ umami_listen_port }}}:3000"
        environment:
          DATABASE_TYPE: postgresql
          DATABASE_URL: ${DATABASE_URL}
          APP_SECRET: ${APP_SECRET}
          HOSTNAME: "0.0.0.0"
          PORT: "3000"
          TZ: ${TZ}
        healthcheck:
          test: ["CMD-SHELL", "curl -fsS http://localhost:3000/api/heartbeat || exit 1"]
          interval: 10s
          timeout: 5s
          retries: 10
        networks:
          - umami-net

    # Named volume for Postgres data and the private bridge network shared by both services
    volumes:
      umami-db-data:

    networks:
      umami-net:

A few highlights:

  • Postgres persists data in a named volume and exposes a health check.
  • Umami waits on that health check before launching.
  • The ports directive binds to {{ umami_bind_address }} so I can keep it locked to 127.0.0.1 instead of public interfaces.
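For example, with the role defaults shown in the next listing, that line renders to:

    ports:
      - "127.0.0.1:${UMAMI_PORT:-3000}:3000"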

Defaults live alongside the template, so each install starts loopback-only on port 3000 until I override it:

    ---
    umami_timezone: Europe/London

    # Config storage (remote)
    umami_base_dir: "/opt/umami"

    # Secrets storage (local controller)
    umami_secrets_dir: "{{ playbook_dir }}/.secrets/{{ inventory_hostname }}"

    # Container images
    umami_image: ghcr.io/umami-software/umami:postgresql-latest
    umami_postgres_image: postgres:18-alpine

    # Networking
    umami_listen_port: 3000
    umami_bind_address: 127.0.0.1

    # Database
    umami_db_name: umami
    umami_db_user: umami
    umami_db_password: ""
    umami_app_secret: ""

    # Compose options
    umami_compose_project_name: umami
    umami_compose_pull: always
    umami_compose_recreate: auto

The main task file ties it all together. Ansible generates strong secrets on the controller (so they persist between runs), templates both .env and docker-compose.yml, then runs docker compose up -d via the community.docker.docker_compose_v2 module:

    ---
    - name: Ensure base directory exists (remote)
      become: true
      ansible.builtin.file:
        path: "{{ umami_base_dir }}"
        state: directory
        owner: root
        group: root
        mode: "0755"

    # --- Local secrets handling ---

    - name: Ensure local secrets dir exists (controller)
      ansible.builtin.file:
        path: "{{ umami_secrets_dir }}"
        state: directory
        mode: "0700"
      delegate_to: localhost
      run_once: false

    - name: Generate DB password if needed (local)
      ansible.builtin.set_fact:
        umami_db_password: >-
          {{ lookup('ansible.builtin.password',
                    umami_secrets_dir ~ '/.db_password chars=ascii_letters,digits length=32') }}
      when: (umami_db_password | default('') | length) == 0
      delegate_to: localhost
      run_once: false

    - name: Generate app secret if needed (local)
      ansible.builtin.set_fact:
        umami_app_secret: >-
          {{ lookup('ansible.builtin.password',
                    umami_secrets_dir ~ '/.app_secret chars=hexdigits length=64') }}
      when: (umami_app_secret | default('') | length) == 0
      delegate_to: localhost
      run_once: false

    # --- Remote config + deployment ---

    - name: Render .env file
      become: true
      ansible.builtin.template:
        src: env.j2
        dest: "{{ umami_base_dir }}/.env"
        owner: root
        group: root
        mode: "0640"
      notify: Restart umami stack

    - name: Render docker-compose.yml
      become: true
      ansible.builtin.template:
        src: docker-compose.yml.j2
        dest: "{{ umami_base_dir }}/docker-compose.yml"
        owner: root
        group: root
        mode: "0644"
      notify: Restart umami stack

    - name: Ensure Umami stack is running
      become: true
      community.docker.docker_compose_v2:
        project_src: "{{ umami_base_dir }}"
        project_name: "{{ umami_compose_project_name }}"
        state: present
        pull: "{{ umami_compose_pull }}"
        recreate: "{{ umami_compose_recreate }}"
      register: umami_compose_result

    - name: Display docker compose changes
      ansible.builtin.debug:
        var: umami_compose_result
      when: umami_compose_result is defined
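The env.j2 template isn't reproduced here, but judging from the variables the compose file consumes, it boils down to something like this sketch (the role's actual template may differ slightly):

    # env.j2 (sketch): rendered to {{ umami_base_dir }}/.env
    POSTGRES_DB={{ umami_db_name }}
    POSTGRES_USER={{ umami_db_user }}
    POSTGRES_PASSWORD={{ umami_db_password }}
    TZ={{ umami_timezone }}
    UMAMI_PORT={{ umami_listen_port }}
    APP_SECRET={{ umami_app_secret }}
    # Points at the db service on the private compose network
    DATABASE_URL=postgresql://{{ umami_db_user }}:{{ umami_db_password }}@db:5432/{{ umami_db_name }}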

The handler simply restarts the stack when either template changes, which keeps upgrades predictable.
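The handler itself isn't shown above either; a minimal version built on the same compose module would look roughly like this:

    # handlers/main.yml (sketch)
    - name: Restart umami stack
      become: true
      community.docker.docker_compose_v2:
        project_src: "{{ umami_base_dir }}"
        project_name: "{{ umami_compose_project_name }}"
        state: restarted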


A Quick Note on Docker vs. UFW

If you publish a port without thinking, Docker quietly bypasses UFW because it owns its own iptables chain. That means even a server with “deny incoming” can leak an app to the public internet if you bind it to 0.0.0.0.

  • When you run a container and publish a port (e.g. -p 3000:3000), Docker modifies iptables directly, not through ufw.
  • Those rules are evaluated before ufw’s user-space rules.
  • So a simple docker run -p 3000:3000 umami exposes port 3000 on all interfaces (0.0.0.0) even if ufw is active.

Binding to 127.0.0.1 inside the compose file keeps the dashboard completely private until I put a reverse proxy (or Tailscale) in front of it.
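A quick way to confirm what actually got exposed is to look at the listening sockets on the host (ss ships with most modern distros):

    # 127.0.0.1:3000 means loopback only; 0.0.0.0:3000 means every interface,
    # regardless of what UFW says.
    sudo ss -tlnp | grep 3000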


Dropping the Role Into Project Lighthouse

My homelab playbook (ansible-project-lighthouse) consumes the role with just a few lines:

    - role: umami_nginx
      tags: [umami_nginx]

    - role: nicholasclooney.umami
      tags: [umami]
      vars:
        umami_timezone: Europe/London
        umami_bind_address: 127.0.0.1
        umami_listen_port: 3000

    - role: tailscale_serve
      tags: [tailscale_serve]

Group vars clamp the dashboard to the loopback network, ready for a reverse proxy to front it:

    # === Nginx/analytics access control ===
    #
    # CIDR/IPs allowed to reach the Umami dashboard via umami_nginx
    nginx_dashboard_allowlist:
      - "127.0.0.1/32"

    # === Domains ===
    #
    # Used by certbot role when requesting site certificates
    primary_domain: "example.com"
    # Shared by certbot + umami_nginx site template for analytics host
    analytics_domain: "analytics.example.com"

    # === Certbot ===
    #
    # Toggle ACME issuance in certbot role
    certbot_issue_certificates: false
    # Ensures packaged systemd timer stays enabled
    certbot_auto_renew: true
    # Certbot registration email for expiry notices and ToS
    certbot_admin_email: "[email protected]"
    # Domains to request via Certbot (include each site explicitly and point DNS to this host)
    certbot_domains:
      - "example.com"

Because I only trust my tailnet to see sensitive dashboards, I run a tiny systemd unit that publishes Umami through Tailscale Serve:

    ---
    - name: Deploy tailscale serve systemd unit
      become: true
      ansible.builtin.template:
        src: tailscale-serve.service.j2
        dest: "/etc/systemd/system/{{ tailscale_serve_service_name }}.service"
        owner: root
        group: root
        mode: '0644'
      notify:
        - Restart tailscale serve

    - name: Ensure tailscale serve service is enabled and {{ tailscale_serve_state }}
      become: true
      ansible.builtin.systemd:
        name: "{{ tailscale_serve_service_name }}"
        enabled: "{{ tailscale_serve_enabled }}"
        state: "{{ tailscale_serve_state }}"
        daemon_reload: true

That translates to a private https://umami.tailXX.ts.net endpoint that only logged-in tailnet devices can reach. No public ingress, no guesswork.
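The tailscale-serve.service.j2 template isn't included here, so treat the following as an assumption about its shape rather than the role's real unit; the exact tailscale serve flags also vary between CLI versions:

    [Unit]
    Description=Expose Umami over Tailscale Serve
    After=network-online.target tailscaled.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Publish the loopback-only dashboard (port 3000) to the tailnet over HTTPS
    ExecStart=/usr/bin/tailscale serve --bg http://127.0.0.1:3000
    ExecStop=/usr/bin/tailscale serve reset

    [Install]
    WantedBy=multi-user.target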


Publishing the Tracking Script With Nginx

The public web still needs /script.js and /api/send, so I carved out an Nginx site that only exposes those endpoints while keeping the full dashboard locked to my allowlist:

    # 1) HTTP → HTTPS redirect
    server {
        listen 80;
        listen [::]:80;
        server_name {{ analytics_domain }};

        return 301 https://$host$request_uri;
    }

    # 2) HTTPS site
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name {{ analytics_domain }};

        # Logs
        access_log /var/log/nginx/{{ umami_nginx_site_name }}.access.log;
        error_log /var/log/nginx/{{ umami_nginx_site_name }}.error.log;

        # TLS certs
        ssl_certificate /etc/letsencrypt/live/{{ analytics_domain }}/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/{{ analytics_domain }}/privkey.pem;

        location = /script.js {
            proxy_pass http://{{ umami_nginx_upstream_host }}:{{ umami_nginx_upstream_port }};
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location = /api/send {
            proxy_pass http://{{ umami_nginx_upstream_host }}:{{ umami_nginx_upstream_port }};
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            limit_except POST OPTIONS { deny all; }

            proxy_hide_header Access-Control-Allow-Origin;
            proxy_hide_header Access-Control-Allow-Methods;
            proxy_hide_header Access-Control-Allow-Headers;
            proxy_hide_header Access-Control-Max-Age;

            add_header Access-Control-Allow-Origin "$http_origin" always;
            add_header Access-Control-Allow-Methods "POST, OPTIONS" always;
            add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
            add_header Access-Control-Max-Age 86400 always;
            add_header Vary "Origin" always;

            if ($request_method = OPTIONS) {
                return 204;
            }
        }

        location / {
            {% for cidr in nginx_dashboard_allowlist %}
            allow {{ cidr }};
            {% endfor %}
            deny all;

            proxy_pass http://{{ umami_nginx_upstream_host }}:{{ umami_nginx_upstream_port }};
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

  • /script.js and /api/send proxy straight through to Umami, complete with the required CORS headers.
  • Any other path hits the allowlist first (a rendered example follows below); in production I set it to tailnet ranges so only I can see the UI.
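With the single-entry allowlist from the group vars above, that final location block renders to:

    location / {
        allow 127.0.0.1/32;
        deny all;
        # proxy_pass and headers as in the template above
    }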

With that in place, the public site embeds Umami’s script tag while the administrative interface stays unreachable for anyone but me.
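A quick smoke test from a machine outside the allowlist confirms the split (assuming DNS and certificates are already in place; the domain is the placeholder from the group vars):

    # The beacon endpoints answer publicly...
    curl -I https://analytics.example.com/script.js   # expect 200
    # ...while the dashboard is denied for non-allowlisted clients
    curl -I https://analytics.example.com/             # expect 403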


The Stack in Motion

Putting it all together looks like this:

  1. Ansible renders .env + docker-compose.yml, generates secrets, and runs docker compose up -d.
  2. Docker Compose brings up Postgres + Umami, health checks everything, and binds the UI to loopback.
  3. Tailscale Serve publishes the dashboard to my tailnet so I can check analytics from anywhere (even on mobile).
  4. Nginx proxies just the beacon endpoints to the public internet while keeping the rest locked down.

Because the role also runs locally, I can clone the repo, fire up Colima, and test the exact stack on my Mac before pushing changes upstream. When updates drop, ansible-playbook main.yml --tags umami pulls the new image and restarts the stack cleanly.
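For reference, the local loop on macOS is only a handful of commands; paths and resource flags here are illustrative, not prescriptive:

    # Start the Docker runtime on macOS
    colima start --cpu 2 --memory 4

    # From a directory containing a rendered docker-compose.yml and .env
    docker compose up -d

    # Confirm the app is healthy on loopback
    curl -fsS http://127.0.0.1:3000/api/heartbeat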


Final Thoughts

This setup sounds complex on paper, but it condenses into a repeatable Ansible run:

  • Compose keeps the host tidy and the deployment predictable.
  • Tailscale and Nginx add just enough routing to stay private-by-default.
  • Secrets never leave my control, and rollbacks are one docker compose down away.

If you’re already automating servers with Ansible, steal the role, tune the defaults, and try the workflow on a Colima sandbox first. When you’re ready to go live, aim the playbook at your server and enjoy Umami’s dashboard — privately. Then take a detour through the related posts on Colima, Tailscale, and debugging Umami to see how the rest of the puzzle pieces fit together.