“How do Docker containers talk?” is a surprisingly interesting question. The containerization rabbit hole is deep; here I only skim the surface, so this should not be taken as technically complete.
Container networks in a nutshell
Initialization:
When you install Docker, the docker daemon creates a network interface (the default is docker0). This network interface is a bridge.
This bridge typically takes the ‘gateway’ IP address “X.X.X.1”, and by default has a /16 subnet, meaning it can address 2^16 addresses (roughly 65,000 usable hosts).
This bridge can be thought of as a ‘switch’ or ‘router’ - all containers (unless configured otherwise) will be given an IP address under the bridge’s subnet.
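As a sanity check on the subnet math, Python’s ipaddress module can enumerate the bridge network. (172.17.0.0/16 is the usual default for docker0; your install may differ.)

```python
import ipaddress

# Docker's usual default bridge subnet (an assumption -- check your install)
net = ipaddress.ip_network("172.17.0.0/16")

print(net.num_addresses)   # 2^16 = 65536 addresses in a /16
print(next(net.hosts()))   # 172.17.0.1 -- the "X.X.X.1" gateway the bridge takes
```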
Container is launched:
When a container is spun up, the daemon gets to work.
It creates a virtual network interface for the container (something like vethabc123). Virtual ethernet (veth) devices come in pairs that act as a tunnel between network namespaces: one end stays on the host as the veth interface, and the other end appears as the ‘real’ eth interface inside the container.
The host machine’s veth handle is connected to the docker bridge.
Then, IP addresses are set. Inside the container, eth0 is given an address in the bridge’s subnet, and its gateway is set to the bridge’s IP address. Note that the container also has a unique MAC address.
The daemon sets up some extra routing rules on the host machine: iptables (firewall) and NAT config to make sure everything can talk to each other.
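The steps above can be sketched by hand with iproute2. This is a rough approximation of what the daemon does, not its actual code: the namespace, interface names, and IP address below are made up, and the commands need root.

```shell
# Hypothetical names throughout; run as root. A namespace stands in for the container.
ip netns add demo
ip link add veth-host type veth peer name veth-cont   # create the veth pair (the tunnel)
ip link set veth-cont netns demo                      # push one end into the "container"
ip link set veth-host master docker0                  # attach the host end to the bridge
ip link set veth-host up
ip netns exec demo ip addr add 172.17.0.5/16 dev veth-cont   # an address in the bridge subnet
ip netns exec demo ip link set veth-cont up
ip netns exec demo ip route add default via 172.17.0.1       # gateway = the bridge IP
```

(The real daemon also renames the container-side end to eth0, which I’ve skipped here.)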
The topology is as such:

container eth0 <-> vethX (host) <-> docker0 bridge <-> host eth0 <-> internet
Setup Demo
To verify the topology, you can spin up a docker container in interactive mode: $ docker run -it busybox sh.
Then, within the shell, run ifconfig to see the container’s IP address (inet addr).
On the host machine, you can verify that docker0’s inet address is indeed the container’s gateway (i.e. X.X.X.1):
When you spin up the docker container, we can also see via dmesg that the daemon attaches the veth interface to the docker0 bridge!
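Beyond dmesg, the same topology can be confirmed directly from the host. The container name below is a placeholder for whatever you ran:

```shell
# On the host: show the bridge's address and its attached veth ports
ip addr show docker0
bridge link show

# Ask Docker for a container's IP and gateway (container name is hypothetical)
docker inspect -f '{{.NetworkSettings.IPAddress}} {{.NetworkSettings.Gateway}}' my_container
```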
Routing Rules
The daemon did some voodoo on the routing. Let’s demystify it. We can see the firewall and routing config the daemon did by running $ iptables -t nat -L -n -v!
This is a pain to read, but it sets up the communication for the following cases.
Case 1: Traffic from container to the internet
Packets from the container travel through its eth0 to vethX, which is attached to the docker0 bridge.
The MASQUERADE rule is in play here. The rule is essentially: any traffic from 172.17.0.0/16 (the docker subnet) that is not going out via the docker0 interface should be masqueraded to the host’s external IP (192.x.x.x).
Case 2: Traffic from container to another container on the same subnet
None of the NAT rules apply! The docker0 bridge acts as a layer 2 switch, forwarding traffic directly to the other container. This is possible because both containers’ veth handles are registered with the bridge as ports (assuming the docker containers ran without special networking config).
For the sake of brevity, I will stop here. There are many other interesting data flow cases to explore (e.g. what happens when a docker container runs a webapp and exposes a port to the internet) which may be worth a proper in-depth essay. Networking, as always, is a hell of a deep field.