Docker containers fail to communicate with each other

I am currently working on a uni project which includes multiple services running in docker containers. Some have to register with a directory node. However, this fails with a java.net.NoRouteToHostException for each container that tries to connect to the directory node.

I then tried to double-check by spawning another container. Resolving and pinging other containers is no issue, but whenever I try to actually connect to the other container(s), it fails with an error message similar to the one above.

After some digging I think the issue is the firewall:

https://success.docker.com/article/firewalld-problems-with-container-to-container-network-communications (Although this suggests adding docker0 to the trusted zone, the device seems to be re-added to the public zone after a reboot, and adding the same rule to public as well doesn't change anything.)
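For what it's worth, the persistent variant of that trusted-zone fix would look roughly like this (a sketch; it assumes the bridge device is actually named docker0):

```shell
# Permanently assign the docker0 bridge to the trusted zone
# (--permanent writes the config to disk instead of only runtime state).
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker0

# Load the permanent configuration into the running firewall.
sudo firewall-cmd --reload

# Verify: docker0 should now show up under the trusted zone.
sudo firewall-cmd --zone=trusted --list-interfaces
```

If NetworkManager manages the interface, it may re-assign the zone on reboot, which could explain docker0 landing back in public; pinning the zone through NetworkManager is sometimes suggested, but whether that applies to this setup is an assumption.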

https://forums.docker.com/t/no-route-to-host-network-request-from-container-to-host-ip-port-published-from-other-container/39063/6 (Note: I added this rule via the GUI rather than the CLI, but I believe I added the described rule correctly.)

But none of the proposed solutions seem to work - even after a reboot.

The weird part is that everything seems to work on my Fedora laptop, which AFAIK has no extra firewall settings added just for docker, and my colleagues (Ubuntu/Fedora) don't have any issues with docker's networking either. That's why I'm posting this here instead of on the docker forum or similar...

Has anyone of you experienced the same issue?

Thanks for your help!

Are you giving your docker images an internal IP address or not?

I've done it both ways. What I have found is that when I don't give the images an IP address I can't use "localhost" to communicate between them, I have to use one of the host's IP addresses.

Sounds like homework ...

https://wiki.manjaro.org/index.php?title=Forum_Rules#Homework

@pixel27 I'm not quite sure what you mean by internal IP, but I don't want to (and shouldn't, in my case) access other nodes via localhost. They should behave like different machines, but currently don't.

@sueridgepipe Thanks for linking the rules, but as I already mentioned, this is not part of my assignment. In fact, I already solved everything on my laptop, but I'm confused as to why the containers cannot reach each other (only) on my Manjaro machine. To be clear: just because I found the bug/error/misconfiguration while solving an assignment does not mean that solving the issue is part of my assignment...

However, if you still think that this breaks the rules in some way, I can also live without docker, I guess.

Just updated docker, issue still persists.

Just making sure:

  • Are the necessary ports exposed?
  • Do you use SELinux? If yes, what happens if you turn it off?
  • Did you try creating a dedicated docker network for those containers? (If not, maybe use docker-compose to do so, or do it completely manually.)
  • Is any kind of privilege or resource needed for those containers?
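The third point could be tried completely manually along these lines (a sketch; the network, container, and image names are just placeholders):

```shell
# Create a user-defined bridge network; containers attached to it can
# resolve each other by container name via Docker's embedded DNS.
docker network create aic-net

# Start two containers attached to that network
# (container and image names are made up for illustration).
docker run -d --name directory --network aic-net aic-directory-node
docker run -d --name transfer-1 --network aic-net aic-transfer-node

# Quick connectivity check from one container to the other.
docker exec transfer-1 ping -c 1 directory
```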

Thanks for your reply!

  • Yes, it is also working on my Fedora laptop, so it should be implemented correctly
  • I don't think so, at least when googling how to turn it off, none of the commands or files are present
  • I'm using docker-compose already
  • Well, not quite. The only thing that's somewhat privileged is that we map the docker (unix) socket into one of the containers to spawn sibling containers. Everything else is just unprivileged Java applications. But this was only implemented yesterday by a colleague of mine, and it did not work beforehand either.

Does it work when stopping the firewalld service?

I just tried it by simply typing systemctl stop firewalld, but this causes docker to fail entirely:

driver failed programming external connectivity on endpoint client (352488f522f57993a2f96843c6f5cf163c8e35cde7897d6b63afc03b62c9dcb2):  (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8080 -j DNAT --to-destination 172.19.0.3:8080 ! -i br-c15db2ce3f87: iptables: No chain/target/match by that name.

Do I have to kill something else?

Try to restart docker and the containers, please.
Edit: you could also flush iptables after saving the settings, just to see whether the problem is located here or with docker itself.
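Concretely, the order matters here: stop firewalld first, then restart the docker daemon so it can re-create the iptables chains the error message complained about. A sketch (requires root):

```shell
# Stop firewalld (this tears down the chains docker had hooked into).
sudo systemctl stop firewalld

# Restart the docker daemon so it rebuilds its own iptables chains
# (DOCKER, DOCKER-ISOLATION-STAGE-*, ...) from scratch.
sudo systemctl restart docker

# Then bring the containers back up, e.g. with compose:
docker-compose down && docker-compose up -d
```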

Thank you!

Yes, all docker containers can communicate with each other if I stop the firewalld service.

So my assumption was correct, then... How do I fix it? ^^

Can you please show the firewall rules, structure of your docker net and maybe iptables? Of course with firewalld running again.

I'm sorry, I would have to ask you for instructions again, as I'm not sure how to effectively list all firewall rules/iptables :frowning:

Regarding the network structure, I can share with you the docker-compose we used to initially start the project:

version: '3.4'

x-tn:
  &transfer-node
  image: aic-transfer-node
  build: ./transfernode
  environment:
    DIRECTORY_ADDR: aic-directory-node
    DIRECTORY_PORT: 10001
    TRANSFER_PORT: 8090
  expose:
    - "8090"

services:
  aic-api:
    image: aic-api
    build: ./api
    container_name: api
    environment:
      API_PORT: 10002
    expose:
      - "10002"
  aic-directory-node:
    image: aic-directory-node
    build: ./directorynode
    container_name: directory
    environment:
      DIRECTORY_PORT: 10001
    expose:
      - "10001"
  aic-transfer-node-1:
    <<: *transfer-node
    container_name: transfer-1
  aic-transfer-node-2:
    <<: *transfer-node
    container_name: transfer-2
  aic-transfer-node-3:
    <<: *transfer-node
    container_name: transfer-3
  aic-transfer-node-4:
    <<: *transfer-node
    container_name: transfer-4
  aic-transfer-node-5:
    <<: *transfer-node
    container_name: transfer-5
  aic-transfer-node-6:
    <<: *transfer-node
    container_name: transfer-6
  aic-client:
    image: aic-client
    build: ./client
    container_name: client
    ports:
      - "8080:8080"
    expose:
      - "8080"
    environment:
      DIRECTORY_ADDR: aic-directory-node
      DIRECTORY_PORT: 10001
      API_ADDR: aic-api
      API_PORT: 10002
      CLIENT_PORT: 8080
    stdin_open: true
    tty: true

Note that this isn't the most current version of the docker-compose file, but it still works basically the same.

firewall-cmd --list-all
iptables -S

Sorry for my brevity, I am typing on a smartphone at the moment. Is the device named docker0? You can check with ip link show.

You could also link the containers; that could make it work too, and I consider it best practice.
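If the containers are already running, they can also be joined to a shared network after the fact, which achieves much the same as linking (a sketch; container names taken from the compose file above):

```shell
# Create a shared user-defined network.
docker network create shared-net

# Connect running containers to it; they can then reach each other
# by container name.
docker network connect shared-net directory
docker network connect shared-net transfer-1
```

(The legacy --link flag also exists, but it is deprecated in favour of user-defined networks.)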

Thanks again for your swift response :slight_smile:

Sadly I had to catch a train, so it will take some time before I can update you with all rules. But you already helped me a lot! Thanks again

I am sitting in a train myself. Take your time.

firewall-cmd --list-all:

public
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client mdns ssh steam-streaming
  ports: 1714-1764/tcp 1714-1764/udp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

iptables -S

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-d9a722ffe069 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-d9a722ffe069 -j DOCKER
-A FORWARD -i br-d9a722ffe069 ! -o br-d9a722ffe069 -j ACCEPT
-A FORWARD -i br-d9a722ffe069 -o br-d9a722ffe069 -j ACCEPT
-A FORWARD -o br-c15db2ce3f87 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-c15db2ce3f87 -j DOCKER
-A FORWARD -i br-c15db2ce3f87 ! -o br-c15db2ce3f87 -j ACCEPT
-A FORWARD -i br-c15db2ce3f87 -o br-c15db2ce3f87 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-d9a722ffe069 ! -o br-d9a722ffe069 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-c15db2ce3f87 ! -o br-c15db2ce3f87 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-d9a722ffe069 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-c15db2ce3f87 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN

Sorry for the delay, I haven't been home over the weekend :slight_smile:

