How to Debug a 502 Bad Gateway in Nginx

Troubleshoot 502 Bad Gateway errors methodically by checking the backend app, socket or port mapping, proxy config, logs, and service health in the right order.

Nginx · Troubleshooting · Reverse proxy
What you learn

A practical order of operations for finding the real cause of Nginx 502 errors.

Good for

Reverse-proxied apps, Docker services, local web servers, and VPS troubleshooting.

Risk to watch

Guessing at Nginx config first wastes time when the upstream app is actually down, unreachable, or miswired.

Before you begin

  • Identify the affected domain, path, and upstream app.
  • Know whether the app runs through systemd, Docker Compose, or a direct process.
  • Be ready to inspect both Nginx logs and the backend app logs.

A 502 is often treated like a generic web outage, but it is more specific than that. In plain terms, Nginx is up enough to answer the client, but something is wrong when it tries to talk to the upstream app. That means the fix path should start with the upstream, not with random Nginx edits.

Understand what a 502 usually means

Most 502 problems come from one of a few buckets:

  • The upstream process is not running.
  • The upstream is listening on a different port or socket than Nginx expects.
  • The proxy target is unreachable because of container networking or host binding.
  • The app is crashing at startup.
  • Nginx is speaking plain HTTP to an upstream that expects TLS, or vice versa.

Warning: Do not treat 502, 504, and generic timeout problems as the same issue. They overlap, but the right checks are not identical.
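The distinction usually shows up directly in the Nginx error log. As a sketch, assuming the default Debian/Ubuntu log path (adjust for your distro), you can separate the two cases by their kernel error codes:

```shell
# Default error log path on Debian/Ubuntu; adjust for your distro.
log=/var/log/nginx/error.log
# "connect() failed (111: Connection refused)" -> upstream down, classic 502
# "upstream timed out (110: Connection timed out)" -> slow upstream, usually 504
if [ -r "$log" ]; then
  grep -E 'connect\(\) failed \(111|upstream timed out \(110' "$log" | tail -n 20
fi
```

A wall of 111s points at a dead or miswired upstream; 110s point at an upstream that is alive but too slow, which is a different fix path.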

Check the backend before touching Nginx

Start where the proxy is supposed to connect. Confirm the upstream app is running and actually listening on the expected port or socket. If you are using systemd, check service status and recent logs. If you are using Docker, check container status and container logs. If the backend is dead, restarting Nginx will not solve anything.

Useful first commands include:

  • sudo systemctl status nginx
  • sudo tail -f /var/log/nginx/error.log
  • ss -ltnp or ss -lxnp
  • docker ps
  • docker logs <container>

Expected outcome: You know whether the upstream process is alive and whether its listening endpoint matches what Nginx expects.
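As a sketch of the listening-endpoint check, assuming a hypothetical upstream port of 8002 (substitute the port from your proxy_pass):

```shell
port=8002   # stand-in for the real upstream port from your proxy_pass
if ss -ltn "( sport = :$port )" | grep -q LISTEN; then
  echo "something is listening on :$port"
else
  echo "nothing is listening on :$port -- start or fix the upstream first"
fi
```

If nothing is listening, stop here: no Nginx change will make the proxy target answer.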

Test the upstream directly

After confirming the backend process exists, test the exact upstream target directly from the Nginx host. If Nginx proxies to 127.0.0.1:8002, test that. If it proxies to a Compose service name, test from the correct container network context. If it proxies to a Unix socket, verify the file exists and the Nginx worker can access it.

A good debug sequence is simple: reproduce the failure from the public hostname, then test the upstream locally with curl, then compare the result to the proxy configuration. If the local upstream check fails, your problem is almost never the browser-facing layer.
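A minimal sketch of the local upstream check, assuming the proxy_pass target is 127.0.0.1:8002 (substitute your real target):

```shell
upstream="http://127.0.0.1:8002/"   # substitute the exact proxy_pass target
# --max-time keeps a hung upstream from stalling the check;
# on a connection failure curl reports the code as 000.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$upstream" || true)
echo "upstream answered with HTTP $code"
```

A 000 here means the connection itself failed, which matches the "connection refused" class of 502; a real status code means the upstream is reachable and the problem moves to the proxy layer.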

Inspect the Nginx proxy configuration carefully

Once the upstream app is confirmed alive, verify the proxy target in Nginx. Check proxy_pass, timeouts, headers, socket paths, and whether the upstream protocol is correct. A common mistake is pointing Nginx at the wrong port after an app change or container rebuild. Another is assuming that because the host can resolve a name, the Nginx process and its network context can reach the same target.

Always run sudo nginx -t before reloading. Syntax errors are not the only thing to look for. Sometimes the config is valid but points at the wrong place.
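For orientation, a minimal proxy block might look like the sketch below; the domain and port are placeholders, and the usual failure is the proxy_pass line drifting out of sync with where the app really listens:

```nginx
server {
    listen 80;
    server_name example.com;           # placeholder domain

    location / {
        # Must match where the app actually listens right now.
        # The scheme (http vs https) must also match what the upstream speaks.
        proxy_pass http://127.0.0.1:8002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

When you are unsure which file wins, sudo nginx -T dumps the full effective configuration, which makes it easy to find every proxy_pass actually in force.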

Pay special attention to Docker and socket cases

On Compose-based stacks, many 502 problems come from network naming, startup timing, or ports bound only inside a container. If Nginx runs in one container and the app runs in another, they must share a network and use the right service name. If Nginx runs on the host but the app runs only inside Docker, confirm the port is published or the proxy is joining the same network path another way.
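A sketch of the container-to-container case, with every image and service name hypothetical: the app publishes nothing to the host, and Nginx reaches it by service name over the shared Compose network.

```yaml
services:
  app:
    image: myapp:latest      # hypothetical app image
    expose:
      - "8000"               # reachable only inside the Compose network
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app
    # In nginx.conf inside this container, the proxy target is the
    # service name, not localhost:  proxy_pass http://app:8000;
```

Pointing proxy_pass at 127.0.0.1 from inside the Nginx container is a classic 502 source, because localhost there is the Nginx container itself, not the app.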

For Unix sockets, ownership and permissions matter. If Nginx cannot read the socket path, the app can be healthy and you will still get a 502.
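A sketch of that check, assuming a hypothetical socket path and the common www-data worker user:

```shell
sock=/run/myapp/myapp.sock   # hypothetical; use the path from your proxy_pass
if [ -S "$sock" ]; then
  ls -l "$sock"              # owner, group, and mode on the socket
  # www-data is the usual worker user on Debian/Ubuntu; verify real access:
  sudo -u www-data test -r "$sock" \
    && echo "nginx worker can read the socket" \
    || echo "permission problem: nginx worker cannot read the socket"
else
  echo "socket missing: $sock -- the app may not be creating it"
fi
```

Remember that the worker also needs execute permission on every parent directory of the socket path, not just read and write access on the socket itself.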

Warning: "Restart everything" is not a debugging method. It can temporarily hide the issue and leave you with no understanding of what actually broke.

Common mistakes to avoid

The biggest mistakes are checking only the browser, ignoring Nginx error logs, not testing the upstream directly, forgetting Docker network boundaries, overlooking socket permissions, and treating all proxy failures like config syntax problems. Another common trap is debugging Cloudflare, HTTPS, or DNS first when the real failure is the app not listening on the expected local endpoint.

What to do next

Once you can trace proxy failures cleanly, the next useful step is tightening which services are exposed at all. Continue with UFW and Docker on the Same VPS Without Breaking Your Apps.