In CloudStack, each zone has its own SSVM (Secondary Storage VM), and SSVMs communicate with each other when copying templates between zones.

In CloudStack 3.x, an SSVM has four NICs:

  1. eth0: link-local NIC, used for SSH login from the host
  2. eth1: private NIC, used as the management interface between the management server and the SSVM
  3. eth2: public NIC, the interface that can reach the outside Internet
  4. eth3: storage NIC, the interface used to access secondary storage shares such as NFS
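
For reference, you can log in to an SSVM over its link-local NIC from the host it runs on and inspect the interfaces yourself. The key path and port below are the usual defaults on KVM/XenServer hosts; adjust them for your setup, and take the link-local IP from the CloudStack UI:

  # run on the host where the SSVM lives; 169.254.x.x is a placeholder
  ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x
  # once inside, list the four NICs and their addresses
  ip addr show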

CloudStack sets routes for each NIC; however, the most important one, the default route, points to the public NIC, eth2.

That means a healthy SSVM should have a default route like the following (as shown by the command 'ip route'):

  default via public_gateway_ip_address dev eth2

This also implies that communication between SSVMs happens through the public NIC, even when both SSVMs are in the same private subnet.
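
For context, the full route table on a healthy SSVM looks something like the sketch below; all addresses here are illustrative placeholders, and the exact subnets depend on your zone setup:

  default via 10.1.1.1 dev eth2                # public gateway, carries inter-SSVM traffic
  10.1.1.0/24 dev eth2 src 10.1.1.50           # public subnet
  169.254.0.0/16 dev eth0 src 169.254.1.42     # link-local, SSH from the host
  192.168.10.0/24 dev eth1 src 192.168.10.5    # private/management subnet
  192.168.20.0/24 dev eth3 src 192.168.20.5    # storage subnet (NFS share)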

So when communication between SSVMs fails, the first thing to check is the default route on each of them. If the routes are correct, then the issue lies in the physical network setup, and you have to track it down there.
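
A quick way to run that check on each SSVM (peer_ssvm_public_ip is a placeholder you would take from the CloudStack UI):

  # verify the default route points out of eth2
  ip route | grep '^default'
  # test reachability of the other SSVM over its public IP
  ping -c 3 peer_ssvm_public_ip
  # if ping fails, trace where packets stop in the physical network
  traceroute peer_ssvm_public_ip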

BTW: In Linux, a "no route to host" error often means a firewall is blocking the traffic; check the iptables rules in the SSVM, then on the host, then on the physical firewall.
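
To inspect those rules, run the following first in the SSVM and then on the host (plain iptables, nothing CloudStack-specific):

  # list all rules with packet counters, skipping DNS lookups
  iptables -L -n -v
  # a REJECT rule with icmp-host-prohibited is a classic cause of "no route to host"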
