Monday, December 29, 2014

Openstack Neutron Distributed Virtual Router (DVR) - Part 2 of 3

In this post, the 2nd of a 3-post series about DVR, I go into the North-South DNAT scenario in detail.

Up until Juno, all L3 traffic was sent through the network node.  In Juno, DVR was introduced to distribute the load from the network node onto the compute nodes.

The L3 networking in Neutron is divided into 3 main services:

  1. East-West communication: IP traffic between VMs in the data center
  2. Floating IP (aka DNAT): The ability to provide a public IP to a VM, making it directly accessible from public networks (i.e. internet)
  3. Shared IP (aka SNAT): The ability to provide public network access to VMs in the data center using a shared (public) IP address
In my previous post, I covered how DVR distributes the East-West L3 traffic.
In this post I am going to begin covering the North-South traffic, starting with Floating IP (DNAT).

DNAT Floating IP North-South 

To support Juno's DVR local handling of floating IP (DNAT) traffic on the compute nodes, each compute node now requires an additional physical port that connects to the external network.

The Floating IP functionality enables direct access from the public network (e.g. Internet) to a VM.

Let's follow the example below, where we will assign a floating IP to the web servers.  

When we associate a VM with a floating IP, the following actions take place:
  1. The fip-<netid> namespace is created on the local compute node (if it does not yet exist)
  2. A new port rfp-<portid> is created on the qrouter-<routerid> namespace (if it does not yet exist)
  3. The rfp port in the qrouter namespace is assigned the associated floating IP address
  4. The fpr port in the fip namespace is created and linked via a point-to-point network to the rfp port of the qrouter namespace
  5. The fip namespace gateway port fg-<portid> is assigned an additional address from the public network range (the floating IP range)
  6. The fg-<portid> port is configured as a proxy ARP
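The association steps above can be sketched as a small bookkeeping model. This is illustrative Python, not Neutron code: the namespace and port prefixes (fip-, qrouter-, rfp-, fpr-) follow the DVR naming convention, but the IDs and the floating IP address below are invented for the example.

```python
def associate_floating_ip(state, net_id, router_id, port_id, floating_ip):
    """Record the namespaces and ports DVR sets up for one floating IP."""
    # Step 1: fip-<netid> namespace, created once per external network per node
    state.setdefault("namespaces", set()).add(f"fip-{net_id}")
    # Steps 2 and 4: point-to-point pair between the qrouter and fip namespaces
    state.setdefault("ports", {})[f"rfp-{port_id}"] = f"qrouter-{router_id}"
    state["ports"][f"fpr-{port_id}"] = f"fip-{net_id}"
    # Step 3: the floating IP is configured on the rfp side of the pair
    state.setdefault("addresses", {})[f"rfp-{port_id}"] = floating_ip
    return state

# Invented example IDs and address:
state = associate_floating_ip({}, "net1", "r1", "p1", "203.0.113.10")
```

Running the association twice for the same network is harmless in this model, mirroring the "if it does not yet exist" conditions in steps 1 and 2.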

Now, let's take a closer look at VM4 (one of the web servers).

In the diagram below, the red dashed line shows the outbound network traffic flow from VM4 to the public network.

The flow goes through four steps:
  1. The originating VM sends a packet via its default gateway, and the integration bridge forwards the traffic to the local DVR gateway port (qr-<portid>)
  2. DVR routes the packet, using its routing table, to the rfp-<portid> port
  3. A NAT rule is applied to the packet using iptables, changing the source IP of VM4 to the assigned floating IP; the packet is then sent through the rfp-<portid> port, which connects to the fip namespace via a point-to-point network
  4. The packet is received on the fpr-<portid> port in the fip namespace and then routed outside through the fg-<portid> port
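The NAT step above can be modeled as a simple address rewrite. This is a minimal sketch, not Neutron or iptables code; the fixed-IP/floating-IP pair and the external host address are invented for illustration:

```python
# Model of the NAT applied in the qrouter namespace: outbound traffic has
# its source rewritten to the floating IP (SNAT direction); inbound traffic
# has its floating destination rewritten back to the fixed IP (DNAT).
NAT_TABLE = {"10.0.1.4": "203.0.113.10"}        # fixed IP -> floating IP
REVERSE = {v: k for k, v in NAT_TABLE.items()}  # floating IP -> fixed IP

def nat_outbound(pkt):
    """Rewrite the VM's fixed source IP to its floating IP."""
    src, dst = pkt
    return (NAT_TABLE.get(src, src), dst)

def nat_inbound(pkt):
    """Rewrite the floating destination IP back to the VM's fixed IP."""
    src, dst = pkt
    return (src, REVERSE.get(dst, dst))

out = nat_outbound(("10.0.1.4", "198.51.100.7"))    # VM4 -> external host
back = nat_inbound(("198.51.100.7", "203.0.113.10"))  # reply -> VM4
```

Because the rewrite happens in the qrouter namespace on the compute node hosting VM4, no traffic for this flow ever touches the network node.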
At this point the descriptions may be confusing, so let's try to simplify this a bit with a concrete example:
[Public Network range]=
[Web Network]=
[VM4 floating IP]=
[VM4 private IP]=

As you can see in the diagram, this routing scheme consumes an additional IP address from the public range on each compute node.
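The public IP cost therefore grows with both the number of floating IPs and the number of compute nodes hosting them. A rough accounting sketch, under the assumption that each compute node hosting at least one floating IP needs one fg- gateway address:

```python
def public_ips_consumed(fips_per_node):
    """Total public IPs used, given a floating-IP count per compute node.

    Assumption: one fg-<portid> address per compute node that hosts at
    least one floating IP, plus one public address per floating IP itself.
    """
    fg_addresses = sum(1 for n in fips_per_node if n > 0)
    return sum(fips_per_node) + fg_addresses

# 3 compute nodes hosting 2, 1 and 0 floating IPs:
total = public_ips_consumed([2, 1, 0])  # 3 FIPs + 2 fg addresses = 5
```

This overhead is one motivation for the later efforts, mentioned in the comments below, to avoid consuming an extra public address per node.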

The reverse flow follows the same route: the fg-<portid> port acts as a proxy ARP for the DVR namespace.
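The proxy ARP behavior can be sketched as a simple membership check. This is an illustrative model, not kernel or Neutron code; the floating IP addresses are invented:

```python
# The fg-<portid> port answers ARP requests for the floating IPs hosted
# behind it on this compute node, so inbound traffic from the external
# network is attracted to the right node.
LOCAL_FLOATING_IPS = {"203.0.113.10", "203.0.113.11"}

def should_proxy_arp(target_ip):
    """fg- replies only for floating IPs assigned to VMs on this node."""
    return target_ip in LOCAL_FLOATING_IPS

a = should_proxy_arp("203.0.113.10")  # hosted locally -> reply
b = should_proxy_arp("203.0.113.99")  # not hosted here -> stay silent
```

Each compute node's fg- port answers only for its own VMs' floating IPs, which is how the distributed scheme avoids ARP conflicts between nodes.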

In the next post, I will go into the North-South scenario using Shared IP (SNAT).

Please feel free to leave comments, questions and corrections.


  1. Hello Eran! Great article, very clearly described.
    One question about the public IP. You mentioned that the floating IP namespace will consume an additional IP from the public range per compute node. What is this IP used for? Is it possible to use an IP from an internal subnet so that we can save this IP allocation?

    A minor correction: [VM4 floating IP]= should be VM4 _native_ IP

  2. Hi Junhui, thanks for the correction.
    You are correct, it is not strictly required; the assigned FIP addresses could be used directly on the external bridge.
    The additional IP address is used as a "proxy mac" for all the local VMs assigned floating IPs.
    There is an effort now to remove the FIP namespace and not consume an additional Public IP address. Take a look at