OpenStack

Networking

The networking component of OpenStack is what makes the whole thing fly. Through it we create networks and ports (how VM interfaces connect to those networks) and it handles all the switching of traffic between hosts. The switching layer, which in most cases is Open vSwitch (OVS), deals with all the subtleties of intra- and inter-host packet delivery.

The Basics

A network in OpenStack is an isolated entity and figuratively represents the cable that you use to connect computers together. There is an implied switch connecting all VMs on the network together.

On top of that network you create subnets of IP address ranges as you might assign IP addresses within a LAN. The isolated part is important as there's nothing to stop you creating two distinct networks which use the same IP ranges. VMs on the two networks will not be able to see each other.
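For example, nothing stops us creating two networks each carrying the same 10.0.0.0/24 subnet (net1, net2 and the subnet names here are hypothetical); VMs on net1 will not see VMs on net2:

neutron net-create net1
neutron subnet-create --name sub1 net1 10.0.0.0/24
neutron net-create net2
neutron subnet-create --name sub2 net2 10.0.0.0/24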

Note

Complications occur if you want both of these isolated networks to be able to communicate with real computers on the physical LAN -- just as they would if you had plugged together two distinct physical networks which had previously used the same subnet range. There's no miracle work going on here.

When creating a network you can specify that it is a flat or a vlan network.

Some of the magic of the underlying OVS is in the tunnelling of traffic between multiple physical compute hosts such that the two isolated networks remain isolated -- OpenStack allows you to use GRE, VXLAN or regular VLANs to implement the inter-host tunnels. Another source of magic is the work in the networking component to allow selected networks to be able to communicate with physical hosts unrelated to OpenStack.
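Which tunnel type your cloud uses is a deployment decision made in the ML2 plugin configuration. A minimal sketch of /etc/neutron/plugins/ml2/ml2_conf.ini, assuming VXLAN tenant networks and the Linux bridge mechanism used below (your drivers and VNI ranges will vary):

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge

[ml2_type_vxlan]
vni_ranges = 1:1000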

Network Creation

Prior to OpenStack Liberty you needed to work some OVS bridging magic. We'll skip that and go straight to the Liberty way of doing things, where in /etc/neutron/plugins/ml2/linuxbridge_agent.ini you can specify some LABEL:interface mappings:

physical_interface_mappings = nic-enp0s31f6:enp0s31f6
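Should you have more than one physical network to expose, the mappings are a comma-separated list (the provider2 label and eth1 device here are hypothetical):

physical_interface_mappings = nic-enp0s31f6:enp0s31f6,provider2:eth1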

Note

The OpenStack documentation has a habit of using external as the label, which implies some sort of magic status that doesn't exist.

Warning

The choice of LABEL is very important if you subsequently want to add compute nodes to your cloud. The label must be the same on all compute nodes. In this case the choice of nic-enp0s31f6 -- which you might have used to remind you which physical device the traffic is being transported over -- becomes counter-intuitive if your next host doesn't have an enp0s31f6 device (or you can't use the one it has).

In that sense the label becomes more like the name of a MUX of networks. All hosts (controller/networking and compute) that want to communicate together must use the same labels.

You might have several compute nodes that can communicate over 10GbE devices and create suitable labels for them. Other hosts may want to interact (the networking node must, in order to supply DHCP traffic!). These nodes can simply use the same labels but map them to their 1GbE device (possibly their only available NIC), so long as the physical switching fabric binds everything together underneath.
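As a sketch, the same label on two differently-equipped hosts (the nic-10g label and device names are hypothetical) might look like:

# 10GbE compute node
physical_interface_mappings = nic-10g:ens2f0

# 1GbE networking node: same label, different device
physical_interface_mappings = nic-10g:eno1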

Subsequently, the label can be used as a provider physical network:

neutron net-create \
 A \
 --provider:network_type vlan \
 --provider:physical_network nic-enp0s31f6 \
 --provider:segmentation_id 101 \
 --router:external \
 --shared

Here we've gone all in:

  • the network is called A
  • we're creating a VLAN network with VLAN ID 101
  • we're using the previously defined nic-enp0s31f6 label as the base for this
  • the network will be visible externally, i.e. to non-OpenStack systems, and in this instance will use the VLAN ID, 101, on the physical switching fabric
  • the network is shared with other tenants
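Assuming the create succeeded, neutron net-show will echo the network's attributes -- including the provider network type, physical network label and segmentation ID -- back at you:

neutron net-show A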

Subnet Creation

In a pure cloudy world we expect to be creating and deleting VMs to handle our load on demand. There's nothing unique about them, so these VMs will expect to pick up their networking details via DHCP.

Accordingly, OpenStack uses dnsmasq and consumes three IP addresses in any subnet range for:

  • the default gateway
  • (something)
  • dnsmasq

You can specify a different address for the default gateway but the other two addresses will be consumed at the low end of the subnet range unless you explicitly turn them off.

If you disable DHCP then your VMs must be configured manually with their network information.

If you disable the (OpenStack) default gateway then you need to provide your own or traffic won't get off this network. Which may be your plan.

An example might be:

neutron subnet-create \
 --name A-subnet \
 --allocation-pool start=192.168.5.32,end=192.168.5.127 \
 A 192.168.5.0/24

Here we've:

  • called the subnet A-subnet
  • limited the DHCP address range to .32 through .127

There is an implied default gateway of the first IP address, .1.

We're asking for DHCP service, so dnsmasq will use .32 and will allocate addresses .33 through .127 to DHCP clients.
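Had we wanted the default gateway somewhere other than the implied .1 -- say, an existing router at .254 (a hypothetical address) -- we could have passed --gateway:

neutron subnet-create \
 --name A-subnet \
 --gateway 192.168.5.254 \
 --allocation-pool start=192.168.5.32,end=192.168.5.127 \
 A 192.168.5.0/24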

Another example:

neutron subnet-create \
 --name B-subnet \
 --no-gateway \
 --disable-dhcp \
 A 192.168.6.0/24

Here we've:

  • created a second subnet, possibly confusingly called B-subnet
  • forced no (OpenStack) default gateway
  • disabled DHCP

In this case, no OpenStack service is going to hand out DHCP or handle packets headed off network. We'd better be doing all that ourselves.
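By way of illustration, configuring a VM on this subnet by hand might amount to something like the following (the .10 address, the eth0 device and a router of our own at .254 are all hypothetical):

ip addr add 192.168.6.10/24 dev eth0
ip route add default via 192.168.6.254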
