Virtual Private Network

TechPreview

The Virtual Private Network as a Service (VPNaaS) extension to the MOSK Networking service (OpenStack Neutron) enables cloud users to extend their private networks securely over the internet to remote sites or devices. Users create and manage encrypted tunnels (such as IPsec) and configure VPN policies (IKE, IPsec) on demand from the same OpenStack interface they use for the rest of their networking without deploying or operating dedicated VPN virtual machines.

Architecture overview

The Virtual Private Network service establishes IPsec tunnels between networks, allowing data to be transferred securely across the public network. Workload traffic that needs to be encrypted is processed by the StrongSwan process running on an OpenStack gateway node and sent over the internet in encrypted form. At the remote gateway, the packets are decrypted and forwarded to the final destination as normal traffic. Encryption and decryption are transparent to cloud workloads. Other traffic passes through the public network unchanged. The VPNaaS service network topology varies depending on the Networking service backend in use.

Networking backend: Open vSwitch

With the Open vSwitch backend, the StrongSwan process runs in the router namespace on the node hosting the Neutron L3 agent. That router is directly connected to the private networks whose traffic must be protected by the VPN.
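On the node hosting the L3 agent, tunnel state can be inspected from inside the router namespace. The commands below are a sketch: the router ID is a placeholder, and the output depends on the kernel IPsec (XFRM) state programmed by StrongSwan:

```shell
# Find the qrouter namespace of the router attached to the protected networks
ip netns list | grep qrouter

# Inspect active IPsec security associations inside that namespace
# (replace <router-id> with the actual router UUID)
ip netns exec qrouter-<router-id> ip xfrm state
```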

The diagram below illustrates a site-to-site VPNaaS network topology using the Open vSwitch backend, where two separate OpenStack clusters act as VPN peers.

[Diagram: site-to-site VPNaaS topology, Open vSwitch backend]

Networking backend: Open Virtual Network

With the OVN backend, there are no Neutron L3 Agent services. OVN handles L3 traffic entirely within the OVS br-int (integration bridge) on the compute hosts, using the host network stack and OpenFlow rules. Dedicated OVN VPN Agent services perform encryption. The StrongSwan process runs in the VPN Agent pod. A separate namespace is created for each VPN connection; the namespace name corresponds to the router ID of the router attached to the private networks whose traffic is encrypted. The VPN Agent and the router are connected over the 169.254.0.0/30 network, and encrypted traffic enters the public network through a dedicated port on the VPN Agent. The router selects packets that require encryption and sends them to the VPN Agent pod, where they are encrypted and sent onto the public network. All other traffic reaches the public network through the router’s public port.
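With OVN, the per-connection namespaces live in the VPN agent pod rather than on a network node. Assuming access to the cluster hosting the OpenStack control plane, something like the following lists them; the pod name and the `openstack` namespace are assumptions to adapt to your deployment:

```shell
# List network namespaces inside the OVN VPN agent pod;
# each namespace name matches the ID of a router with a VPN connection
kubectl -n openstack exec <ovn-vpn-agent-pod> -- ip netns list

# The 169.254.0.0/30 link between the agent and the router is visible
# inside the per-router namespace (replace <router-id> accordingly)
kubectl -n openstack exec <ovn-vpn-agent-pod> -- \
    ip netns exec <router-id> ip addr
```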

The diagram below illustrates a site-to-site VPNaaS network topology using the Open Virtual Network backend, where two separate OpenStack clusters act as VPN peers.

[Diagram: site-to-site VPNaaS topology, Open Virtual Network backend]

Networking backend: OpenSDN

OpenSDN, formerly known as Tungsten Fabric, does not support VPNaaS directly.

Enabling VPNaaS in MOSK

The VPNaaS Neutron extension needs to be explicitly enabled in the OpenStackDeployment custom resource.

To enable the VPNaaS Neutron extension, add the following lines to the spec:features:neutron:extensions dictionary:

spec:
  features:
    neutron:
      extensions:
        vpnaas:
          enabled: true
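After the extension is enabled, VPN resources can be managed with the OpenStack client. The sketch below shows a typical site-to-site setup with placeholder names, addresses, and a sample pre-shared key; run the equivalent commands on the peer cloud with the local and peer values mirrored:

```shell
# IKE (phase 1) and IPsec (phase 2) policies
openstack vpn ike policy create ike1
openstack vpn ipsec policy create ipsec1

# VPN service anchored to the router connecting the private networks
openstack vpn service create --router router1 vpn1

# Endpoint groups: local private subnets and remote peer CIDRs
openstack vpn endpoint group create --type subnet --value private-subnet local-subnets
openstack vpn endpoint group create --type cidr --value 10.20.0.0/24 peer-cidrs

# Site connection to the remote gateway (peer values are placeholders)
openstack vpn ipsec site connection create conn1 \
    --vpnservice vpn1 --ikepolicy ike1 --ipsecpolicy ipsec1 \
    --local-endpoint-group local-subnets --peer-endpoint-group peer-cidrs \
    --peer-address 203.0.113.10 --peer-id 203.0.113.10 --psk secret
```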

Known Limitations

IPv4/IPv6 support

VPNaaS limitations in IPv4/IPv6 network support:

Topology (private/public networks) | Open vSwitch | Open vSwitch DVR | Open Virtual Network | Open Virtual Network DVR
-----------------------------------|--------------|------------------|----------------------|-------------------------
IPv4 over IPv4                     |              |                  |                      |
IPv4 over IPv6                     |              |                  |                      |
IPv6 over IPv4                     |              |                  |                      |
IPv6 over IPv6                     |              |                  |                      |

IPsec VPN configuration changes not applying to active tunnels

An upstream issue #2127166 in OpenStack Caracal and Epoxy causes a silent synchronization failure between the OpenStack control plane and the network gateway nodes during configuration updates. While the OpenStack API and database successfully record changes to IPsec site connections, such as updated pre-shared keys, peer IP addresses, or MTU settings, an internal coding error in the VPN agent prevents these updates from being written to the local IPsec configuration file on the gateway node. As a result, the CLI and OpenStack Dashboard erroneously report a successful update, while the data plane keeps running with stale configuration parameters, causing tunnels to fail or remain in an inconsistent state.

To work around the issue, delete and recreate the affected IPsec site connection: the create workflow initializes the configuration correctly where the update workflow fails.
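The workaround can be performed with the OpenStack client. The connection and resource names below are placeholders; substitute the values of the affected connection:

```shell
# Record the current parameters of the affected connection
openstack vpn ipsec site connection show conn1

# Remove the stale connection
openstack vpn ipsec site connection delete conn1

# Recreate it with the desired (updated) parameters
openstack vpn ipsec site connection create conn1 \
    --vpnservice vpn1 --ikepolicy ike1 --ipsecpolicy ipsec1 \
    --local-endpoint-group local-subnets --peer-endpoint-group peer-cidrs \
    --peer-address 203.0.113.10 --peer-id 203.0.113.10 --psk new-secret
```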