"Connection timed out" is one of those messages that instantly breaks momentum. We’ve seen full builds stall because a single network setting was missing, and in Docker, that usually comes down to proxy configuration. Containers made environments predictable. Docker made them practical at scale. But the moment you step into a corporate network, things change. Traffic gets filtered. Authentication appears. Requests fail silently. Suddenly, your clean, reproducible workflow starts breaking in frustrating, inconsistent ways. That’s where proxies step in. Not as a workaround, but as a control layer. And if you configure them right, everything just…works.

Docker doesn't live in isolation. It constantly reaches out—pulling images, installing packages, calling APIs. Block that access, and your workflow grinds to a halt.
In restricted environments, outbound traffic is often locked down. You try docker pull, and nothing happens. Or worse, it hangs. That's usually your first signal that Docker doesn't know how to reach the outside world. A proxy becomes your only path forward.
It's not just about access, though. In many setups, authentication is mandatory. Docker doesn't natively handle that well, so routing traffic through a proxy becomes the cleanest solution. One configuration point. Full control.
Then there's runtime behavior. Containers don't just sit there—they call services, fetch data, connect to databases. If those endpoints sit behind firewalls, your containers need a defined route out. No proxy, no connection.
And finally, control. Proxies let you log traffic, inspect requests, enforce policies, and hide internal infrastructure. That's not optional in serious environments. That's baseline.
Docker doesn't have one place to configure proxies—it has four. Choose the wrong layer, and you'll fix one problem while creating another.
Each layer controls a different part of the workflow. So the question isn't "how do I set a proxy?" It's "where should I set it?"
Configure the daemon, and you control how Docker pulls and pushes images, connects to registries, and communicates across clusters.
Use this when image operations fail. Or when every outbound request must go through a proxy.
On Docker Engine 23.0 and later, the most reliable method on Linux is editing /etc/docker/daemon.json:
{
  "proxies": {
    "http-proxy": "http://proxy.example.com:3128",
    "https-proxy": "https://proxy.example.com:3129",
    "no-proxy": "*.example.com,127.0.0.1"
  }
}
Then restart Docker:
sudo systemctl restart docker
Simple. Persistent. Predictable.
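To confirm the daemon actually picked the settings up, docker info reports them back. A quick check (assumes a running daemon; empty output means your configuration isn't being read):

```shell
# Print the daemon's effective proxy settings.
# Empty fields mean the daemon never saw your daemon.json changes.
docker info --format 'HTTP: {{.HTTPProxy}}  HTTPS: {{.HTTPSProxy}}  NoProxy: {{.NoProxy}}'
```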
If you need more control, especially in managed environments, use a systemd drop-in file instead. It gives you cleaner separation and works in both standard and rootless modes.
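A typical drop-in looks like this (the path is the conventional location and the proxy address is a placeholder; after saving, run sudo systemctl daemon-reload and restart Docker):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
```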
One thing to keep in mind: the daemon's proxy settings govern the daemon's own traffic, such as pulls and pushes. They aren't passed down to containers automatically, so workloads that make their own outbound requests will often need an additional layer of configuration.
This one is quick and often overlooked. The client itself can use proxies for commands like docker login or docker pull.
If your CLI commands fail but the daemon is correctly configured, this is where you look.
Set it with environment variables:
export HTTP_PROXY=http://PROXY:PORT
export HTTPS_PROXY=http://PROXY:PORT
On Windows:
setx HTTP_PROXY http://PROXY:PORT
No Docker restart is needed. The export form applies to your current shell immediately; setx persists the variable, but it only takes effect in terminals opened after you run it.
It's lightweight, which makes it great for temporary fixes or user-specific setups. But it won't solve deeper issues like build failures or container connectivity.
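There is a useful middle ground here: put the proxies in the client's ~/.docker/config.json, and the CLI injects them as environment variables into new containers and builds automatically. A sketch, with placeholder hosts:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1,.example.com"
    }
  }
}
```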
Your applications live here. If they can't reach the internet, your system doesn't work—no matter how well the daemon is configured.
Use this layer when builds fail to install dependencies or when apps inside containers can't reach external services.
For builds:
docker build \
--build-arg HTTP_PROXY=http://PROXY:PORT \
--build-arg HTTPS_PROXY=http://PROXY:PORT \
-t myimage .
For runtime:
docker run \
-e HTTP_PROXY=http://PROXY:PORT \
-e HTTPS_PROXY=http://PROXY:PORT \
myimage
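If the proxy variables are already exported in your shell, you can forward them by name instead of hardcoding values; when -e is given a bare variable name, docker run copies the value from the host environment. A sketch (assumes HTTP_PROXY and friends are set on the host, and myimage is a placeholder):

```shell
# -e with a bare name forwards the host's value into the container,
# so the proxy URL (and any credentials inside it) stays out of
# scripts and shell history.
docker run \
  -e HTTP_PROXY \
  -e HTTPS_PROXY \
  -e NO_PROXY \
  myimage
```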
If you're working with multiple services, switch to Docker Compose. It keeps things clean and repeatable:
    environment:
      HTTP_PROXY: http://PROXY:PORT
      HTTPS_PROXY: http://PROXY:PORT
      NO_PROXY: "localhost,127.0.0.1"
This layer gives you precision. Some containers use proxies, others don't. That flexibility matters in real systems.
But be careful—hardcoding proxy settings into images can expose credentials. Keep sensitive data out of Dockerfiles whenever possible.
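The exposure is easy to demonstrate: values passed with --build-arg are recorded in the image's layer history, so anyone who can pull the image can read them back (myimage is a placeholder):

```shell
# Build args survive in image metadata. A proxy URL with embedded
# credentials passed via --build-arg would show up here in plain text.
docker history --no-trunc myimage | grep -i proxy
```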
This is the blunt instrument. Set a proxy at the OS level, and everything routes through it—Docker, browsers, system services, all of it.
It's fast to implement. One configuration, wide coverage. But it comes with trade-offs. Debugging becomes harder. Behavior becomes less predictable. And containers may still ignore these settings unless explicitly configured.
Use it when your organization enforces system-wide proxy usage. Otherwise, it's usually better to stay closer to Docker itself.
Most proxy problems in Docker look the same at first. Requests fail. Builds hang. Errors feel vague. But the root cause is usually very specific.
If docker pull fails, your daemon isn't configured. Start there.
If pulling works but builds fail, your containers don't have proxy access. Add it at the container level.
If NO_PROXY doesn't behave, check formatting. Commas only. No protocols. Small mistake, big impact.
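A correct value looks like this (hosts are placeholders; a leading dot covers subdomains in most tools):

```shell
# Wrong: NO_PROXY="http://internal.example.com, localhost"  <- protocol and spaces
# Right: bare hostnames and suffixes, comma-separated, no spaces:
export NO_PROXY="localhost,127.0.0.1,.example.com"
echo "$NO_PROXY"
```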
If authentication breaks things, look at your credentials. Special characters like @ or # need to be URL-encoded. Miss that, and nothing connects.
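One quick way to encode them, assuming python3 is available (the username svc, password p@ss#1, and PROXY:PORT below are all made up):

```shell
# Percent-encode a password containing @ and # before embedding it
# in the proxy URL: @ becomes %40, # becomes %23.
ENCODED=$(python3 -c 'from urllib.parse import quote; print(quote("p@ss#1", safe=""))')
echo "$ENCODED"   # p%40ss%231
export HTTPS_PROXY="http://svc:${ENCODED}@PROXY:PORT"
```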
And if you're using Docker Desktop and things still fail, remember—its proxy settings don't automatically apply to containers. You'll need to configure those separately.
Get the layers right, and Docker stops feeling unpredictable. Most failures aren't random—they're just misaligned network rules. Once proxies are configured correctly across the right level, containers connect, builds complete, and systems behave consistently instead of breaking under pressure.