SWITCH Cloud Blog


RadosGW/Keystone Integration Performance Issues—Finally Solved?

For several years we have been running OpenStack and Ceph clusters as part of SWITCHengines, an IaaS offering for the Swiss academic community. Initially, our main “job” for Ceph was to provide scalable block storage for OpenStack VMs—which it does quite well. But we also provided S3 (and Swift, but that’s outside the scope of this post) -based object storage via RadosGW from early on. This easy-to-use object storage turned out to be popular far beyond our initial expectations.

One valuable feature of RadosGW is that it integrates with Keystone, the Authentication and Authorization service in OpenStack. This meant that any user of our OpenStack offering could create, within their Project/tenant, EC2-compatible credentials to set up, and manage access to, S3 object store buckets. And they sure did! SWITCHengines users started to use our object store to store videos (and stream them directly from our object store to users’ browsers), research data for archival and dissemination, external copies of (parts of) their enterprise backup systems, and presumably many other interesting things; a “defining characteristic” of the cloud is that you don’t have to ask for permission (see “On-demand self-service” in the NIST Cloud definition)—though as a community cloud provider, we are happy to hear about, and help with, specific use cases.

Now this sounds pretty close to cloud nirvana, but… there was a problem: Each time a client made an authenticated (signed) S3 request on any bucket, RadosGW had to outsource the validation of the request signature to Keystone, which would return either the identity of the authenticated user (which RadosGW could then use for authorization purposes), or a negative reply in case the signature didn’t validate. Unfortunately, this outsourced signature validation turns out to add significant per-request overhead. In fact, for “easy” requests such as reading and writing small objects, the authentication overhead easily dominates total processing time. For a sense of the magnitude: small requests without Keystone validation often take less than 10ms to complete (according to the logs of our Nginx-based HTTPS server that acts as a front end to the RadosGW nodes), whereas any request involving Keystone takes at least 600ms.

One undesirable effect is that our users probably wonder why simple requests have such a high baseline response time. Transfers of large objects are less affected, because at some point the processing time is dominated by the Rados/network transfer time of the user data.

But an even worse effect is that S3 users could, by using client software that “aggressively” exploited parallelism, put very high load on our Keystone service, to the point that OpenStack operations sometimes ran into timeouts when they needed to use the authentication/authorization service.

In our struggle to cope with this recurring issue, we found a somewhat ugly workaround: Whenever we found an EC2 credential in Keystone whose use in S3/RadosGW contributed significant load, we extracted that credential (basically an ID/secret pair) from Keystone and provisioned it locally in all of our RadosGW instances. This always solved the individual performance problem for that client: response times dropped by 600ms immediately, and the load on our Keystone system subsided.
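For the record, the workaround boiled down to commands along these lines (a sketch; the user name is illustrative, and the access/secret pair is the one found in Keystone via openstack ec2 credentials list):

openstack ec2 credentials list
radosgw-admin user create --uid=heavy-client --display-name="Heavy S3 client"
radosgw-admin key create --uid=heavy-client --key-type=s3 \
    --access-key=<ec2_access_key> --secret-key=<ec2_secret_key>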

While the workaround fixed our immediate troubles, it was deeply unsatisfying in several ways:

  • Need to identify “problematic” S3 uses that caused high Keystone load
  • Need to (more or less manually) re-provision Keystone credentials in RadosGW
  • Risk of “credential drift” in case the Keystone credentials changed (or disappeared) after their re-provisioning in RadosGW—the result would be that clients would still be able to access resources that they shouldn’t (anymore).

But the situation was bearable for us, and we basically resigned ourselves to fixing performance emergencies every once in a while until maybe one day, someone would write a Python script or something that would synchronize EC2 credentials between Keystone and RadosGW…

PR #26095: A New Hope

But then out of the blue, James Weaver from the BBC contributed PR #26095, rgw: Added caching for S3 credentials retrieved from keystone. This changes the approach to signature validation when credentials are found in Keystone: The key material (including secret key) found in Keystone is cached by RadosGW, and RadosGW always performs signature validation locally.

James’s change was merged into master and will presumably come out with the “O” release of Ceph. We run Nautilus, and when we got wind of this change, we were excited to try it out. We had some discussions as to whether the patch might be backported to Nautilus; in the end we considered that unlikely in its current state, because the patch unconditionally changes the behavior in a way that could violate some security assumptions (e.g. that EC2 secrets never leave Keystone).

We usually avoid carrying local patches, but in this case we were sufficiently motivated to go and cherry-pick the change on top of the version we were running (initially v14.2.5, later v14.2.6 and v14.2.7). We basically followed the instructions on how to build Ceph, but after cloning the Ceph repo, ran

git checkout v14.2.7
git cherry-pick affb7d396f76273e885cfdbcd363c1882496726c -m 1 -v
edit debian/changelog and prepend:

ceph (14.2.7-1bionic-switch1) stable; urgency=medium

  * Cherry-picked upstream pull #26095:

    rgw: Added caching for S3 credentials retrieved from keystone

 -- Simon Leinen <simon.leinen@switch.ch>  Thu, 01 Feb 2020 19:51:21 +0000

Then run dpkg-buildpackage and wait for a couple of hours…

First Results

We tested the resulting RadosGW packages in our staging environment for a couple of days before trying them in our production clusters.

When we activated the patched RadosGW in production, the effects were immediately visible: The CPU load of our Keystone system went down by orders of magnitude.

[Figure: CPU load of our Keystone service before and after deploying the patched RadosGW]

On 2020-01-27 at around 08:00, we upgraded our first production cluster’s RadosGWs. Twenty-four hours later, we upgraded the RadosGWs on the second cluster. The baseline load on our Keystone service dropped visibly after the first upgrade, but some high load peaks could still be seen. Since the second region was upgraded, there have been no sharp peaks anymore. There is a periodic load increase every night between 03:10 and 04:10, presumably due to some charging/accounting system doing its thing. These peaks were probably “always” there, but they only became apparent once we started deploying the credential-caching code.

The 95th-percentile latency of “small” requests (defined as both $body_bytes_sent and $request_length being lower than 65536) was reduced from ~750ms to ~100ms:

[Figure: 95th-percentile latency of small S3 requests before and after the upgrade]

Conclusion and Outlook

We owe the BBC a beer.

To make the patch perfect, it might be worth limiting the lifetime of cached credentials to some reasonable value such as a few hours. This would limit the damage in case credentials are invalidated. Then again, one can always restart all RadosGW processes to drop any cached credentials immediately.

If you are interested in using our RadosGW packages made by cherry-picking PR #26095 on top of Nautilus, please contact us. Note that we only have x86_64 packages for Ubuntu 18.04 “Bionic” GNU/Linux.


Enable Keystone federated users to use CLI tools with Application Credentials

On this blog I talked in the past about Keystone authentication for your Kubernetes cluster. The solution described works great if you have OpenStack users stored in the Keystone MySQL database. However, in real production systems, it is common to access OpenStack with a federated login: the web login works with a redirect to an identity provider that confirms the user’s identity and then redirects back to the OpenStack dashboard.

It is a well-known problem that the federated login process needs to go through web page redirects to enter the necessary information, and this does not work for users that need to authenticate with CLI tools. In our case the CLI tool is kubectl.

A small team of people at SWITCH and GARR worked jointly to find a solution for this use case.

The good news is that the Keystone developers have already implemented a solution for this problem: Keystone Application Credentials. This feature has been available since the Queens release of OpenStack. The key idea is that a user can log in on the web interface with the federated login process, and then create, from the dashboard’s identity panel, new credentials to be consumed directly by CLI tools.

The following three screenshots show the user journey to create an Application Credential in the OpenStack Horizon dashboard:

  • Select Application Credential in the Identity Panel
  • Enter the data to create an application credential
  • Download the openrc file to store your application credential

So what is missing to authenticate with kubectl using a Keystone application credential?

Starting with kubectl v1.11, the cloud-provider-specific API implementations were moved out of the Kubernetes source tree. You have to modify your client configuration as follows:

contexts:
- context:
    cluster: kubernetes
    namespace: keystone-daadb4bcc9704054b108de8ed263dfc2
    user: openstackgarr
  name: garr

users:
- name: openstackgarr
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /bin/client-keystone-auth
      args:
      - --domain-name=myDomain
      - --keystone-url=https://keystone.doman.tld:5000/v3
      env:
      - name: OS_USERNAME
        value: username
      - name: OS_PASSWORD
        value: secret
      - name: OS_PROJECT_NAME
        value: test
 

Note that in this configuration snippet we are still using username and password. If you want to test this setup, make sure your client-keystone-auth version is newer than v0.2.0, or that it is patched to include commit 66961abd. Version v0.2.0 is not able to request a Keystone project-scoped token, so your setup will not work with it.

The client-keystone-auth binary uses the golang library gophercloud to talk to the Keystone API.

To reach our goal, the first step was to patch gophercloud to implement the application_credential authentication method described in the Queens spec.

This patch enables any golang application to easily access the application credentials authentication method, so it could be useful to other golang software tools, like for example Terraform.

Please note that the patch only implements the possibility to issue a token by authenticating with an application credential. I did not propose a gophercloud patch that implements the full create/update/delete workflow for application credentials, because that went beyond the scope of this work.

Once the gophercloud PR was merged, I proposed a PR for client-keystone-auth to use the new gophercloud feature.

At the time of this writing the latter PR is still not merged, so you will need to compile the code yourself to test it.

Now that all the code is there, we can change the client configuration to use the application credential instead of username and password, as shown in the example below:

contexts:
- context:
    cluster: kubernetes
    namespace: keystone-daadb4bcc9704054b108de8ed263dfc2
    user: openstackgarr
  name: garr

users:
- name: openstackgarr
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /bin/client-keystone-auth
      args:
      - --domain-name=myDomain
      - --keystone-url=https://keystone.doman.tld:5000/v3
      - --user-name=username
      - --application-credential-name=kuberneteslogin
      - --application-credential-secret=thisismysecret 

You can also use environment variables instead of command line arguments. The client-keystone-auth supports the same variable names as the official openstack client.
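For example, the application credential from the snippet above could also be passed like this (a sketch; the variable names are the ones used by the official OpenStack client, which additionally needs OS_AUTH_TYPE=v3applicationcredential):

export OS_APPLICATION_CREDENTIAL_NAME=kuberneteslogin
export OS_APPLICATION_CREDENTIAL_SECRET=thisismysecret
export OS_USERNAME=username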

Conclusion: today you can use kubectl with your Keystone application credentials. This is really useful if you have a federated login to the OpenStack cloud.

I would like to thank the people that participated in this development work with me, especially Giuseppe Attardi, Roberto Di Lallo and Joe Topjian, who helped with implementation, discussion and code review.


Openstack Horizon runs on Kubernetes in production at SWITCH

In April we upgraded the SWITCHengines OpenStack Horizon dashboard to the OpenStack Pike version. But this upgrade was a little bit special: it was more than a Horizon upgrade from Newton to Pike.

Our Horizon deployment is now hosted on a Kubernetes cluster. The cluster is deployed using the k8s-on-openstack playbook that we actively develop. We had been testing this Kubernetes deployment for a while, but it is only when you have to deploy an application on top of it in production that you really learn and fix real problems.

Horizon is a good application to start learning Kubernetes with, because it is completely stateless and does not require any persistent storage. It is just a GUI for the OpenStack API: the user logs in with their credentials, and Horizon obtains a token and starts making API calls on the user’s behalf.

Running Horizon in a single Kubernetes pod for a demo probably takes 5 minutes, but deploying it for production usage is far more complex. We needed to address the following issues:

  • Horizontally scale the number of pods, keeping a central memcached or redis cache
  • Allow both IPv4 and IPv6 access to engines.switch.ch
  • Define the Load Balancing architecture
  • Implement a persistent logging system

If you want to jump straight to the solution of all these problems, have a look at the project SWITCH-openstack-horizon-k8s-deployment, where we have published all the Dockerfiles and the Kubernetes descriptors to recreate our deployment.

Scale Horizontally

Horizon performs much faster when it has access to a memory cache, which is the recommended way to deploy it in production. We decided to go for a Redis cache.

By creating a Redis service named redis-master in our namespace, we can use the special environment variable ${REDIS_MASTER_SERVICE_HOST} when booting the Horizon container, to make sure all instances point to the same cache server.

This is a good example of how to combine two services in a Kubernetes namespace: we can horizontally scale the Horizon pods, but the Horizon deployment stays independent from the Redis deployment.
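A minimal sketch of that wiring with plain kubectl commands (image and names are illustrative; our actual deployment uses the descriptors from the repository linked above):

kubectl create deployment redis-master --image=redis
kubectl expose deployment redis-master --port=6379
# pods created afterwards in the same namespace see the service as
#   ${REDIS_MASTER_SERVICE_HOST}:${REDIS_MASTER_SERVICE_PORT}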

IPv4 and IPv6

We always publish our services over IPv6. In our previous Kubernetes demos we used the OpenStack LBaaS to expose services to the outside world. Unfortunately, in the Newton version of OpenStack, the LBaaS lacks proper IPv6 integration. To publish a production service on Kubernetes, we suggest using an ingress controller instead. There are several kinds available; we used the standard Nginx ingress controller. The key idea is that we have a K8s node with an interface exposed to the public Internet, where a privileged Docker container runs with --net=host. The container runs Nginx, which can bind to IPv6 and IPv4 on the node, but can of course also reach any other pod on the cluster network.

Define the Load Balancing architecture

I already wrote above that if you need IPv6, you should not use the OpenStack LBaaSv2. However, I am going to explain why I would not use that kind of load balancer even for IPv4.

The first picture shows the network diagram of a LBaaSv2 deployment. The load balancer is implemented as a network namespace on the network node, called qlbaas-<uuid>, in which a HAProxy process is running. This is a L4 load balancer. The bad thing about this architecture is that when an instance boots, the default gateway configured via DHCP is the IP address of the Neutron router. When we expose a service with the floating IP configured on the outer interface of the LBaaS, the load balancer must perform a DNAT and SNAT operation in order to force the traffic onto a symmetric return path. This means that the IP packets hitting the pod have completely lost the information about the source IP address of the original client. Because it is a pure L4 load balancer, we have no possibility to carry this lost information in an HTTP header. This prevents the operator from building any useful logging system, because once the traffic arrives at the pod, the information about the client has been stripped away.

The next picture shows how the Nginx ingress works. In this case the external traffic is received on a public floating IP that is configured on the virtual machine running the ingress pod, in this case on the master. We terminate the TLS connection at the nginx-ingress. This is necessary because the ingress also has to perform SNAT and DNAT, but it adds the X-Forwarded-For header to the HTTP requests, which we use to populate our log files. We could not add the header if we were just moving encrypted packets around.

Another advantage of this solution is that it uses just a normal instance to implement the ingress; this means you can use it in a way that is totally independent of the version of OpenStack you are running on.

In the future you might be able to use the newer OpenStack Octavia load balancer, but at the moment I have not investigated that. All I know is that the solution is really similar, but you will have an OpenStack service VM running an HAProxy instance.

Implement a persistent logging system

Pods are short-lived and distributed over different VMs that are also ephemeral. To collect the logs, we run Docker with the journald log driver. Once this is set up, all the Docker containers running on the host send their logging output to journald. We then collect this information with journalbeat, which ships the data to our Elasticsearch cluster. This part is not yet released in our public playbook because it is not very portable: if you don’t have a ready-to-use ELK cluster, you would get no benefit from running journalbeat.
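A sketch of the Docker side of this setup (assuming the log driver is set via /etc/docker/daemon.json; the daemon needs a restart afterwards, and the container name is illustrative):

echo '{ "log-driver": "journald" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# container logs then end up in journald and can be read per container:
journalctl CONTAINER_NAME=horizon -f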

Conclusion

We have now been running in production for almost a month, and we have found the system to be robust and stable. We had no complaints from our users, so we can say that the migration was seamless for them. We have learned a lot from this experience.

In the next blog post we will describe how we implemented the metrics monitoring, to observe how much memory and CPU cores each pod is consuming. Make sure you keep an eye on our blog for updates.



Openstack Keystone authentication for your Kubernetes cluster

At SWITCH we are looking to provide a container platform as a service. We are working with Kubernetes and OpenShift to gauge what is possible and how a service could be structured. It would be really nice to use the existing OpenStack username and password to authenticate to Kubernetes. We tested this solution and it works great.

How does it work? Let’s start from the client side.

Kubernetes users use the kubectl client to access the cluster. The good news is that since version v1.8.0 of the client, kubectl is able to read the usual OpenStack environment variables, contact Keystone to request a token, and forward the request to the Kubernetes cluster using that token. This was merged on the 7th of August 2017. I could not find anywhere how to correctly configure the client to use this functionality, so I finally wrote some documentation notes HERE.

How does it work on the Kubernetes master side ?

The Kubernetes API receives a request with a Keystone token; in Kubernetes terminology this is a Bearer Token. To verify the Keystone token, the Kubernetes API server uses a webhook. What does that mean? That the Kubernetes API will contact yet another component that is able to authenticate the Keystone token.

The k8s-keystone-auth component developed by Dims does exactly this. I tested his code and created a Docker container to integrate k8s-keystone-auth into my kube-system namespace. When you run the k8s-keystone-auth container, you pass the URL of your Keystone server as an argument.
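In practice this means two pieces of wiring (a sketch; the exact flag spellings are assumptions based on the upstream projects): the webhook container is started with the Keystone URL, and the Kubernetes API server is pointed at it with a webhook config file:

k8s-keystone-auth --keystone-url https://keystone.cloud.switch.ch:5000/v3
kube-apiserver ... \
  --authentication-token-webhook-config-file=/etc/kubernetes/webhook-keystone.yaml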

If you are deploying your cluster with k8s-on-openstack you find this integration summarized in a single commit.

Now that everything is set up, I can try:

source ~/openstackcredentials
kubectl get pods

I will be correctly authenticated by Keystone, which verifies my identity, but I will have no authorization to do anything:

Error from server (Forbidden): pods is forbidden: User "saverio.proto@switch.ch" cannot list pods in the namespace "default"

This is because we need to set up some authorization for this Keystone user. You can find detailed documentation about RBAC, but here is a simple example:

kubectl create rolebinding saverio-view --clusterrole view --user saverio.proto@switch.ch --namespace default

Now my user is able to view anything in the default namespace, and I will be able to do kubectl get pods.

Of course setting up specific RBAC rules for every user is not optimal. You can at least use the Keystone projects, which are mapped to kind: Group in Kubernetes. Here is an example:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: Group
  name: <openstack_project_uuid>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

You can then achieve a “soft multitenancy” where every user belonging to a specific Keystone project is limited to a specific namespace. I talk about soft multitenancy because, depending on your networking solution, all the pods from all the namespaces could end up on the same network with a completely open policy.

I would like to thank Dims and the other people on the Slack channel #sig-openstack for the great help while developing this Kubernetes deployment.


Deploy Kubernetes v1.8.3 on Openstack with native Neutron networking

Hello,
I wrote in the past how to deploy Kubernetes on SWITCHengines (OpenStack) using this ansible playbook. When I wrote that article, I did not care about the networking setup, and I used the proposed weavenet plugin. Then I went to the OpenStack Summit in Sydney and saw the great presentation by Angus Lees. It was the right time to see it, because I had recently watched this video where they explain the networking of Kubernetes when running on GCE. Going back to OpenStack, Angus mentioned that the Kubernetes master can talk to Neutron to inject routes into the tenant router, providing connectivity without NAT among pods that live in different instances. This would make troubleshooting easier and leave an MTU of 1500 between the pods.

It looked very easy, just use:

--network-plugin=kubenet

and specify the router UUID in the cloud config.
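In the cloud provider configuration this is just one extra section (a sketch; the file path is an example and the UUID is the router shown further down):

# /etc/kubernetes/cloud-config (excerpt)
[Route]
router-id = b11216cb-a725-4006-9a55-7853d66e5894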

Our first tests with version 1.7.0 did not work. First of all I had to fix the Kubernetes documentation, because the syntax to specify the router UUID was wrong. Then I had a problem with security groups disappearing from the instances. After troubleshooting and asking for help on the Kubernetes Slack channel, I found out that I was hitting a known gophercloud bug.

The bug was already fixed in gophercloud at the time of my finding, but I learned that Kubernetes vendors an older version of this library in the folder “vendor/github.com/gophercloud/gophercloud”. So the only way to get the updated library version was to upgrade to Kubernetes v1.8.0, or any newer version including this commit.

After a bit of testing, everything works now. The changes are summarised in this PR, or you can just use the master branch from my git repository.

After you deploy, the K8s master will assign, from the network ClusterCIDR (usually a /16 address space), a smaller /24 subnet to each OpenStack instance. The pods will get addresses from the subnet assigned to their instance. The Kubernetes master injects static routes into the Neutron router, to be able to route packets to the pods. It also configures the Neutron ports of the instances with the correct allowed_address_pairs value, so that the traffic is not dropped by the OpenStack anti-spoofing rules.

This is what the routes of the OpenStack router look like:

$ openstack router show b11216cb-a725-4006-9a55-7853d66e5894 -c routes
+--------+--------------------------------------------------+
| Field  | Value                                            |
+--------+--------------------------------------------------+
| routes | destination='10.96.0.0/24', gateway='10.8.10.3'  |
|        | destination='10.96.1.0/24', gateway='10.8.10.8'  |
|        | destination='10.96.2.0/24', gateway='10.8.10.11' |
|        | destination='10.96.3.0/24', gateway='10.8.10.10' |
+--------+--------------------------------------------------+

And this is what the allowed_address_pairs on the port of one instance looks like:

$ openstack port show 42f2a063-a316-4fe2-808c-cd2d4ed6592f -c allowed_address_pairs
+-----------------------+------------------------------------------------------------+
| Field                 | Value                                                      |
+-----------------------+------------------------------------------------------------+
| allowed_address_pairs | ip_address='10.96.1.0/24', mac_address='fa:16:3e:3e:34:2c' |
+-----------------------+------------------------------------------------------------+

There is of course more work to be done.

I will improve the ansible playbook to create the OpenStack router and network automatically; at the moment these steps are done manually before starting the playbook.

Working with network-plugin=kubenet is actually deprecated, so I have to understand what the long-term plan is for this kind of deployment.

The Kubernetes master is still running on a single VM, the playbook can be extended to have an HA setup.

I really would like to have feedback from users of Kubernetes on Openstack. If you use this playbook please let me know, and if you improve it, the Pull Requests on github are very welcome! 🙂


(Ceph) storage (server) power usage

We have been running Ceph in production for SWITCHengines since mid-2014, and are at the third generation of servers now.

SWITCHengines storage server evolution

  • First generation, since March 2014: 2U Dalco based on Intel S2600GZ, 2×E5-2650v2 CPUs, 128GB RAM, 2×200GB Intel DC S3610 SSD, 12×WD SE 4TB
  • Second generation, since Dec. 2015: 2U Dalco based on Intel S2600WTT, 1×E5-2620v4 CPU, 64GB RAM, 2×200GB Intel DC S3610 SSD, 12×WD SE 4TB
  • Third generation, since June 2017: 1U Quanta S1Q-1ULH-8, 1×Xeon D-1541 CPU, 64GB RAM, 2×240GB Micron 5100 MAX SSD, 12×HGST Ultrastar He8 (8TB)

What all servers have in common: 2×10GE (SFP+ DAC) network connections, redundant power supplies, simple BMC modules connected to separate GigE network.

We run all those servers together in a single large Ceph RADOS cluster (actually we have two clusters in different towns, but for this article I focus on just the larger and more heavily loaded one). The cluster has 480 OSDs, contains about 500TiB user data, mostly RBD block devices used by OpenStack instances, and some S3 object storage, including video streaming directly from RadosGW to browsers. Cluster-wide I/O rates during my measurements were around 2’700 IOPS, 150MB/s read, 65MB/s write. We didn’t apply any particular optimization for energy or otherwise.

To understand the story behind the server types: Initially we used the same server chassis for compute and storage servers. We also used the same relatively generous CPU and RAM configurations. This would have allowed us to turn compute into storage servers (or vice-versa) relatively easily. When purchasing the second server generation we saved some money by reducing CPU power and RAM. For the third generation, we opted for increased density and efficiency made possible by a “system-on-a-chip” (Xeon D)-based server design.

Power measurement results

All these servers have IPMI-accessible power sensors. Last week my colleagues did some measurements with an external power meter, and found that (for the servers they tested—not all types, for lack of time) the values from the IPMI readings are within 5% or so of the values from the “real” power meter. Good enough!

Unfortunately we don’t yet feed IPMI measurements into any of our continuous measurement tools (Carbon/Graphite/Grafana or Nagios). If you do that, please use the comments to tell us how you set this up.

But recently I looked at the IPMI power consumption readings for these servers during a time of relatively light use (weekend) and got the following results:

  • Gen 1: 248W
  • Gen 2: 205W
  • Gen 3: 155W

Note that the Gen 3 servers have larger disks, and thus Ceph puts twice as much data on them, and thus they get double the IOPS of the old servers. Still, they use significantly less power. This is partly due to the simplified mainboard and more modern CPU, partly to the Helium-filled disks, which only draw ~4.5W each (when idle) as opposed to ~7.5W for the older 4TB drives.

Cost to power a Terabyte-year of user data

Just for fun, I also performed some cost calculations in relation to usable space, under the following assumptions:

  1. We pay 0.15 €/kWh. (Actually I used CHF, but it doesn’t really matter—some countries will pay more than this even without overhead, others pay only half, so this would cover some of the other directly energy-dependent costs like A/C and redundancy. Anyway, it’s about the relative costs. 🙂)
  2. We can fill disks up to an average of 70% before things get messy.

When using traditional three-way replication, storing a usable Terabyte for a year costs us the following amounts just for power (a worked example for Gen 3 follows the list):

  • Gen 1: € 29.10
  • Gen 2: € 24.05
  • Gen 3: € 9.09 (note again that these have twice the capacity)
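To make the arithmetic explicit, here is how the Gen 3 number comes about (using the 155W reading, 12×8TB raw capacity, three-way replication and the 70% fill factor from above):

155 W × 8760 h ≈ 1358 kWh/year; 1358 kWh × 0.15 €/kWh ≈ €204 per server and year
12 × 8 TB = 96 TB raw; 96 TB / 3 (replication) × 0.7 (fill) ≈ 22.4 TB usable per server
€204 / 22.4 TB ≈ €9.09 per usable Terabyte and year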

If we assume Erasure Coding with 50% overhead, e.g. 2+1, then the power cost would go down to

  • Gen 1: € 14.55
  • Gen 2: € 12.03
  • Gen 3: € 4.55

We could consider even more space-efficient EC configurations, but I don’t have any experience with that… “left as an exercise for the reader”.

In conclusion, we could say that advances in hardware (more efficient servers, larger and more efficient disks), software (EC in Ceph), as well as our own optimizations (less spare CPU/RAM) have brought down the power component of our storage costs by a factor of 6.5 over these 3.5 years. Not bad, huh? Of course there are trade-offs: the new servers have lower IOPS per space, EC uses more CPU and disk-read operations etc.

The next frontier: powering down idle disks

Finally, I also took one of the unused Gen 3 servers (we keep a few powered down until we need them; I powered one on for the test). It consumed 136W, not much less than the 155W under (light) load. Although the twelve 8TB disks weren’t even mounted, they were spinning. Putting them all in “standby” mode with sudo hdparm -y lowered the total system load to just 82W. So for infrequently accessed “cold storage”, there’s even more room for optimization—although it might be tricky to leverage standby mode in practice with a system such as Ceph. At least the scrubbing strategy would need to be adapted, I guess.
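For reference, putting all twelve data disks of such a server into standby is a one-liner (device names are illustrative):

for d in /dev/sd{b..m}; do sudo hdparm -y "$d"; done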


SWITCHdrive Over IPv6

When we built the SWITCHdrive service on the OpenStack platform that was to become SWITCHengines, that platform didn’t really support IPv6 yet. But since Spring 2016 it does. This week, we enabled IPv6 in SWITCHdrive and performed some internal tests. Today around noon, we published its IPv6 address (“AAAA record”) in the DNS. We quickly saw around 5% of accesses use IPv6 instead of IPv4.

In the evening, this percentage climbed to about 14%. This shows the relatively good support for IPv6 on Swiss broadband (home) networks, notably by the good folks at Swisscom.

The lower percentage during office (and lecture, etc.) hours shows that the IPv6 roll-out to higher education campuses still has some way to go. Our SWITCHlan backbone has been running “dual-stack” (IPv4 and IPv6 in parallel) in production for more than 10 years, and most institutions have added IPv6 configuration to their connections to us. But campus networks are wonderfully complex, so getting IPv6 deployed to every network plug and every wireless access point is a daunting task. Some schools are almost there, including some large ones that don’t use SWITCHdrive—yet!?—so the 5% may underestimate the extent of the roll-out for the overall SWITCH community. The others will follow in their footsteps. They can count on the help of the community and benefit from IPv6 training courses organized by our colleagues in the security and network teams. Contact us if you need help!

[Update: After a few weeks, the proportion of IPv6 traffic increased somewhat. Now we typically see around 10% during office hours and 20% during weekends. So the “retail” sector is still clearly ahead of (our academic) enterprise networks in terms of IPv6 penetration.]



Starting 1000 instances on SWITCHengines

Is it really possible with OpenStack to start 1000 instances, run a parallel computation, and then save the data and delete the instances?
To answer this question we tested it on SWITCHengines. I had a lot of trouble getting this to work, and I have to thank the other OpenStack operators I have been chatting with: Mattia Belluco, Matteo Panella and Anton Aksola.
Our OpenStack control plane is deployed with a dedicated pet VM for each OpenStack service (Nova, Cinder, Neutron, Glance and Keystone) and a generic controller VM where we run the MySQL and RabbitMQ services. This configuration makes it possible to monitor each OpenStack service as an isolated VM, and it makes it easier for us to identify bottlenecks in the control plane.
For this experiment we never used the web interface, but the OpenStack CLI, with this reference command line:

openstack server create \
--image "Ubuntu Xenial 16.04 (SWITCHengines)" \
--flavor c1.small \
--network demo-network \
--user-data cloud-init.txt \
--key-name mykey \
--min 100 \
--max 100 test

The c1.small flavor has just 1 CPU core and 1GB of RAM.

We did the experiment in 4 steps, trying with 100, 200, 400 and 1000 instances. To make sure that the instances were really started and operational, we used cloud-init to make them phone home to a registration server. This is a very easy cloud-init feature to use; here is an example cloud-init.txt file:


#cloud-config
phone_home:
  url: http://x.x.x.x:8000/$INSTANCE_ID/
  post: [ hostname, fqdn ]

In this github gist we share the python code to run the registration service.

The first test with 100 instances did not work. We tried a few runs, and we always had between 4 and 7 instances that did not start, for various reasons. Monitoring our control plane, we noticed that we were saturating the CPUs and memory of the Nova and Neutron pets.
We increased the resources for both the Nova and the Neutron pets from 4 to 16 CPU cores, and we doubled the memory from 8 GB to 16 GB.
After these changes we were able to start 100 instances without problems. We noticed that the Neutron pet had a higher load than the Nova pet while the 100 instances were being created.

When we tried with 200 instances, they were all reported as Running by OpenStack, but we always had between 8 and 20 instances not phoning home. Looking at the serial console with the command:

openstack console log show

we noticed that these instances were not able to get an IP address from the DHCP server, and the DHCP client would give up after 300 seconds. Using the hint that the Neutron pet was more loaded than the Nova pet, we found out that the Nova instances reached the RUNNING state while the corresponding Neutron ports were still in the BUILDING phase.
Suspecting a race condition between Nova instances and Neutron ports, I asked on the OpenStack developers mailing list, and it turned out that we had a wrong configuration.

We changed our nova.conf as follows:

vif_plugging_is_fatal=True
vif_plugging_timeout=300

After fixing the configuration we had the same result, but instead of the instances starting without being able to obtain an IP address, they never started and were reported in ERROR state by OpenStack.
The real challenge was not to schedule 200 instances, but to allocate 200 network ports.
Troubleshooting in this direction, we observed that the RabbitMQ queues of the Neutron DHCP agents were filling up during port creation. For each created port, the DHCP agent had to add a corresponding line to the file /var/lib/neutron/dhcp/$UUID/host, where $UUID is the UUID of the corresponding Neutron network.

We looked into the details of what happens when a Neutron port is created. Using the guru meditation report, we traced the culprit down to a slow “ip route list” call.
This command is called every time a Neutron port is created:

time sudo ip netns exec qdhcp-7a1cfb7f-2960-45f5-903f-0d602450525a ip route list
default via 10.10.0.1 dev tapaf136b11-a5
10.10.0.0/16 dev tapaf136b11-a5 proto kernel scope link src 10.10.0.2
real 0m0.048s
user 0m0.000s
sys 0m0.016s

However, calling the same command through neutron-rootwrap takes roughly 15 times longer:

time sudo neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-7a1cfb7f-2960-45f5-903f-0d602450525a ip route list dev tapaf136b11-a5
default via 10.10.0.1
10.10.0.0/16 proto kernel scope link src 10.10.0.2
real 0m0.713s
user 0m0.472s
sys 0m0.172s

Once we identified this bottleneck, we changed the configuration again to make the OpenStack rootwrap work in daemon mode.
We had to change the agent section of neutron.conf:

[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
root_helper_daemon=sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

After this change, we were able to successfully start 200, 400 and 1000 instances.
With 1000 instances we still got an HTTP 504 gateway timeout error.
This is because the nova-api server takes longer than the reverse proxy timeout to answer the request. The reverse proxy replies with HTTP 504, but the nova-api server will later finish processing the request with an HTTP 200. This is easily fixed by using a longer timeout, but we plan to trace the problem in detail to shorten the processing time of the request.

So the answer is yes: with OpenStack it is really possible to quickly start 1000 instances to have compute power just when it is needed.


Temporary elevated reachability (aka – security issue on some VMs)

TL;DR: A small percentage of VMs running in the Zurich region of SWITCHengines weren’t protected by the default firewall from 11.8 to 18.8.2017. The root problem has been fixed. We have implemented additional measures to prevent this from happening again.
Thanks to one of our users, we recently were made aware of a security problem that affected a small percentage of our customers’ virtual machines. While it looked like the machines were protected by the standard OpenStack firewall rules, in effect those machines were completely open to the Internet. We were able to fix the problem within a few hours. Our investigations showed that the problem existed for roughly one week (starting 11.8.2017, ending 18.8.2017) due to a mismatch between the software deployed on 5 specific hypervisors and the configuration applied to them.
If you were affected by this problem, you already have received an email from us. If you didn’t receive anything, your VMs were secure.

Technical background

Each VM running on SWITCHengines is completely isolated from the Internet. VMs run on a number of hypervisors (physical servers) that share an internal physical network. (The hypervisors are also isolated from the Internet and can’t be reached from the outside.) In order for a VM to reach the Internet, it has to use a software-defined virtual network that connects it to one of our “Network Nodes”. These network nodes are special virtual machines that have both an address on our private network and an address on the Internet, and can thus bridge between virtual machines and the Internet. They use a technique called NAT (Network Address Translation) to provide external access to the VMs and vice versa. The software component running on the network nodes is called Neutron; it is part of the OpenStack project.
On each hypervisor, another part of Neutron runs. This part controls the connectivity and the security access to the virtual machines running on the hypervisor. The Neutron component on the hypervisor is responsible for managing the virtual networks and the access from the VMs to this virtual network. In addition, it also controls the firewall component (we use `iptables`, a standard component of the Linux operating system running on the hypervisors). By default, each VM running on the hypervisor is protected by strict security rules that disallow access to any ports on the VM.
A user can configure these security groups by adding rules through the SWITCHengines GUI. When a security rule is modified, the Neutron Server sends a command to the Neutron component on the Hypervisor that in turn adds the relevant rules to the `iptables` configuration on that specific hypervisor.

Cause of loss of Firewall functionality

Besides upgrading the OpenStack software regularly (every 6 months), we also maintain and upgrade the operating system on the hypervisors. Over the last months we have been busy upgrading the hypervisors from Ubuntu Trusty (14.04) to Ubuntu Xenial (16.04). Upgrading a server takes a long time, because we live-migrate all running VMs from that server to another, then upgrade the server OS (together with upgrades of any installed packages) and then take the server back into production, i.e. move VMs to it. This process has been ongoing for the last 3 months and we will be finished in September 2017.
The 5 hypervisors that were affected by the problem, were the first ones to be upgraded to Ubuntu Xenial and the OpenStack Newton components. Because they were upgraded early, they had an older version of OpenStack Newton installed. There was a bug in the Neutron component of that OpenStack release – however, that bug didn’t surface at first.
On 11 August, we made routine configuration changes to all hypervisors running the Newton release. This config change went well on all recently installed hypervisors, but caused the firewall rules to be dropped on the older machines.
When we were made aware of the problem and upgraded the Newton component, the firewall rules were recreated and the VMs protected. 

Remedies

We have identified two fundamental problems through this incident:
  • Some servers have different software versions of the same components due to them being installed at different times
  • We didn’t detect the lack of firewall rules for the affected VMs
To address the first problem, we are being more strict about specifying the exact release of each software component that we install on all our servers. We strive to have identical installations everywhere.
To address the second problem, we have written a script that checks the firewall status for all running virtual machines. We will incorporate this script into our regular monitoring and testing so that we will be alerted about that problem automatically, should it happen again.
We take the security of SWITCHengines seriously and we are sorry that we left some of our customers’ VMs unprotected. Thanks to the people reporting the problem, and thank you for your understanding. We are sorry for any problems this might have caused you.
Jens-Christian Fischer
Product Owner SWITCHengines


Deploy Kubernetes on the SWITCHengines Openstack cloud

Increasing demand for container orchestration tools is coming from our users. Kubernetes currently has a lot of hype, and we often get the question whether we provide a Kubernetes cluster at SWITCH.

At the moment we suggest that our users deploy their own Kubernetes cluster on top of SWITCHengines. To make sure our OpenStack deployment works with this solution, we tried it ourselves.

After deploying manually with kubeadm to learn the tool, I found a well-written ansible playbook by Francois Deppierraz. I extended the playbook to make Kubernetes aware that SWITCHengines implements LBaaSv2, and the patch is now merged into the original version.

The first problem I discovered when deploying Kubernetes is the total lack of support for IPv6. Because instances in SWITCHengines get IPv6 addresses by default, I ran into problems running the playbook and nothing was working. The first thing you should do is to create your own tenant network with a router, with IPv4-only connectivity. This is already explained in detail in our standard documentation.

Now we are ready to clone the ansible playbook:

git clone https://github.com/infraly/k8s-on-openstack

Because the ansible playbook creates instances through the OpenStack API, you will have to source your OpenStack configuration file. We extend the usual configuration file a little with additional variables that are specific to this ansible playbook. Let’s look at a template:

export OS_USERNAME=username
export OS_PASSWORD=mypassword
export OS_PROJECT_NAME=myproject
export OS_PROJECT_ID=myproject_uuid
export OS_AUTH_URL=https://keystone.cloud.switch.ch:5000/v2.0
export OS_REGION_NAME=ZH
export KEY=keyname
export IMAGE="Ubuntu Xenial 16.04 (SWITCHengines)"
export NETWORK=k8s
export SUBNET_UUID=subnet_uuid
export FLOATING_IP_NETWORK_UUID=network_uuid

Let’s review what changes. It is important to also add the variable OS_PROJECT_ID, because the Kubernetes code that creates load balancers requires this value and is not able to derive it from the project name. To find the UUID just use the OpenStack CLI:

openstack project show myprojectname -f value -c id

The KEY is the name of an existing keypair that will be used to start the instances. The IMAGE is also self-explanatory; at the moment only Xenial is tested by me. The variable NETWORK is the name of the tenant network you created earlier. When you created the network you also created a subnet, and you need to set its UUID in SUBNET_UUID. The last variable is FLOATING_IP_NETWORK_UUID, which tells Kubernetes the network from which to get floating IPs. In SWITCHengines this network is always called public, so you can extract the UUID like this:

openstack network show public -f value -c id

You can customize your configuration even more; reading the README file in the git repository you will find more options, like the flavors to use or the cluster size. When your configuration file is ready you can run the playbook:

source /path/to/config_file
cd k8s-on-openstack
ansible-playbook site.yaml

It will take a few minutes to go through all the tasks. When everything is done you can ssh into the kubernetes master instance and check that everything is running as expected:

ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS    AGE       VERSION
k8s-1        Ready     2d        v1.6.2
k8s-2        Ready     2d        v1.6.2
k8s-3        Ready     2d        v1.6.2
k8s-master   Ready     2d        v1.6.2

I found very useful adding bash completion for kubectl:

source <(kubectl completion bash)

Let’s deploy an instance of nginx to test if everything works:

kubectl run my-nginx --image=nginx --replicas=2 --port=80

This will create two containers with nginx. You can monitor the progress with the commands:

kubectl get pods
kubectl get events

At this stage you have your containers running, but the service is not yet accessible from the outside. One option is to use the OpenStack LBaaS to expose it; you can do that with this command:

kubectl expose deployment my-nginx --port=80 --type=LoadBalancer

The expose command will create the OpenStack load balancer and configure it. To find out the public floating IP address, you can describe the service with this command:

ubuntu@k8s-master:~$ kubectl describe service my-nginx
Name:			my-nginx
Namespace:		default
Labels:			run=my-nginx
Annotations:		
Selector:		run=my-nginx
Type:			LoadBalancer
IP:			10.109.12.171
LoadBalancer Ingress:	10.8.10.15, 86.119.34.151
Port:				80/TCP
NodePort:			30620/TCP
Endpoints:		10.40.0.1:80,10.43.0.1:80
Session Affinity:	None
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----			-------------	--------	------			-------
  1m		1m		1	service-controller			Normal		CreatingLoadBalancer	Creating load balancer
  10s		10s		1	service-controller			Normal		CreatedLoadBalancer	Created load balancer

Conclusion

Following this blog post you should be able to deploy Kubernetes on OpenStack and understand how things work. For a real deployment you might want to make some customisations; we encourage you to share any patches to the ansible playbook via GitHub pull requests.
Please note that Kubernetes is not bug-free. When you delete your deployment you might hit this bug, where Kubernetes is not able to correctly delete the load balancer. Hopefully this will be fixed by the time you read this blog post.


Hosting and computing public scientific datasets in the cloud

SWITCH offers a cloud computing service called SWITCHengines, using OpenStack and Ceph. These computing resources are targeted at research usage, where the demand for Big Data (Hadoop, Spark) workloads is increasing. Big Data analysis requires access to large datasets; in several use cases the analysis is done against publicly available datasets. To add value to the cloud computing service, our plan is to provide a place for researchers to store their datasets and make them available to others. A possible alternative at the moment is Amazon EC2, because many datasets are available for free in Amazon S3 when the computation is done within the Amazon infrastructure. Storing as many datasets as Amazon does is challenging, because the raw space used in an object storage system is usually 3 times the actual data size. Each dataset has a size of hundreds of TB, so storing a few of them with a 3× replication factor means working in the PB domain.

Technical challenges

While it sounds like an easy task to share some scientific data in a cloud datacenter, there are some problems you will have to address.

1) The size of the datasets is challenging. It will cost money to host the datasets. To make the service cost-sustainable, you must find reduced-redundancy solutions that have a reasonable cost and a reasonable risk of data loss. This is acceptable because these datasets are public, and most likely other providers host a copy, so in case of disaster, data recovery will be possible. The key idea is to store the dataset locally to speed up access to the data during computation, not to store the dataset for persistent archival. Before we can host a dataset, we have to download it from another source. This first copy operation is also challenging because of the size of the data involved.

2) We need a proper ACL system to control who can access the data. We can’t just publish all the datasets with public access: some datasets require the user to sign an End User Agreement.

How we store the data

We use Ceph in our cloud datacenter as the storage backend. We have Ceph pools for the OpenStack rbd volumes, but we also use the rados gateway to provide an object storage service, accessible via S3- and Swift-compatible APIs. We believe that the best solution for accessing the scientific datasets is to store them on object storage, because the size of the data requires a technical solution that can scale horizontally, spreading the data over many heterogeneous disks. Our Ceph cluster works with a replication factor of 3: each object exists in 3 replicas on three different servers. To work with large amounts of data, we deployed erasure-coded pools in our Ceph cluster. In this case, at the cost of higher CPU usage on the storage nodes, it is possible to store the data with an expansion factor of 1.5 instead of 3. We found the cephnotes blog post very useful; it explains how to create a new Ceph pool with erasure coding to be used with the rados gateway. We now run this setup in production, and when we create an S3 bucket we can decide whether the bucket will land on the normal pool or on the erasure-coded pool. To make this decision we use the bucket-location command line option, as in this example:

s3cmd mb --bucket-location=:ec-placement s3://mylargedatabucket

It is important to note that the bucket-location option must be given when creating the S3 bucket. When writing objects at a later time, the objects will automatically go to the right pool.
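On the Ceph side, the erasure-coded data pool behind such a placement target is created along these lines (a sketch; profile name, pool name, PG count and k/m values are illustrative, and the cephnotes post linked above covers the full rados gateway placement configuration):

ceph osd erasure-code-profile set ec-21-profile k=2 m=1
ceph osd pool create default.rgw.buckets.data.ec 128 128 erasure ec-21-profile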

How we download the first copy

How do you download a dataset of 200TB from an external source? How long does it take? The task is of course not trivial; to address this problem Amazon started its Snowball service, where the data is shipped to you via snail mail on a physical device. Because we are an NREN, we decided to download the datasets over the GEANT network from other institutions that host datasets. We used rclone for our download operations: a modern object storage client written in Go that supports multiple protocols. Working with large amounts of data, we soon found some issues with the client. The community behind rclone was very active and helped us. “Fail to copy files larger than 80GB” was the most severe problem we had. We also improved the documentation about Swift keys with slashes, which do not work with Ceph in Swift emulation mode.

We also had some issues with the rados gateway. When we write data, it goes into a Ceph bucket. In theory the data in the bucket can be written and read using both the S3 and the Swift APIs that the rados gateway implements. In reality we found out that there is a difference in how the checksums for multipart objects are computed when using the S3 or the Swift API. This means that if you wrote an object, you have to read it back with the same API you used for writing, otherwise your client will warn you that the object is corrupted. We notified the Ceph developers about this problem by opening a bug that is still open.

Set the ACL to the dataset

When we store a dataset we would like to have read-write access for the cloud admin accounts, and read-only access for the users that are allowed to read the data. This means that after the dataset is uploaded, we should be able to easily grant and revoke access to users and projects. S3 and Swift manage ACLs in very different ways. In S3 the concept of inheriting the ACL from a parent bucket does not exist; for each individual object a specific ACL must be set. We read this in the official documentation:

Bucket and object permissions are completely independent; an object does not inherit the permissions from its bucket. For example, if you create a bucket and grant write access to another user, you will not be able to access that user’s objects unless the user explicitly grants you access.

This does not scale with the size of the dataset, because making an API call for each object is really time-consuming. There is a workaround, which is to mark the bucket as completely public, bypassing the authorization for all objects. This workaround does not match our use case, where we have some users with read-only permission but the dataset is not public. As a viable workaround we create for each dataset a dedicated OpenStack tenant ‘Dataset X readonly access’, and what we do is add and remove users from this special tenant, because this is a fast operation. It then becomes very important to set the correct ACL when writing the objects for the first time. Unfortunately our favourite client rclone had no support for setting ACLs, so we had to use s3cmd.

We found out that s3cmd also ignores the --acl-grant flag; we notified the community about the bug, but it is still open. This means that for each object you want to write you need two HTTP requests: one to actually write the object and one to set the proper ACL.
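For the per-dataset tenant workaround described above, granting or revoking access to a dataset then becomes a single Keystone operation, for example (project, user and role names are illustrative):

openstack role add --project "Dataset X readonly access" --user alice member
openstack role remove --project "Dataset X readonly access" --user alice member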

In OpenStack Swift you can set the ACL on a Swift container, and all the objects in the container will inherit that property. This sounds very good, but we are not running a real OpenStack Swift deployment; we serve objects from Ceph using an implementation of the Swift API.

During our tests we found out that the rados gateway Swift API implementation is not complete: ACLs are not implemented. Other OpenStack operators reported that native Swift deployments work well with ACLs and the inheritance of ACLs to objects. However, when touching ACLs and large objects, Swift is not bug-free either.

Access the dataset

To compute on the data without wasting time copying it from the object store to an HDFS deployment, the best approach is to attach Hadoop directly to the object store, in what is called streaming mode. The result of the computation can be stored on the object store as well, or on a smaller HDFS cluster. To get started we published this tutorial.
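As a sketch of what streaming mode against the object store can look like, here is a simple line count run straight from a bucket (the jar path, endpoint URL, bucket names and environment variables are placeholders, and we assume the s3a connector from the hadoop-aws module):

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
  -D fs.s3a.endpoint=https://s3.example.org \
  -D fs.s3a.access.key=$EC2_ACCESS_KEY \
  -D fs.s3a.secret.key=$EC2_SECRET_KEY \
  -input s3a://dataset-x/ \
  -output s3a://my-results/linecount \
  -mapper cat \
  -reducer 'wc -l'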

Unfortunately the latest versions of Hadoop require the S3 backend to support the AWS4 signature, which will not work out of the box with your SWITCHengines credentials because of this bug, triggered when using Keystone together with the rados gateway. As a workaround, we have to manually add the user on our rados gateway deployment to make the AWS4 signature work correctly.
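On the rados gateway side, that manual workaround is roughly the following (user ID, access key and secret are placeholders):

# create a local radosgw user corresponding to the Keystone identity ...
radosgw-admin user create --uid=<keystone-user-id> --display-name="Dataset user"
# ... and attach the existing EC2 credential to it
radosgw-admin key create --uid=<keystone-user-id> --key-type=s3 \
  --access-key=<ec2-access-key> --secret-key=<ec2-secret-key>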

Conclusion

Making large scientific datasets available on a multi-tenant cloud is still challenging, but not impossible. Bringing the data close to the compute power is of great importance. Letting users compute on public scientific datasets makes a scientific cloud service more attractive.

Moreover, we envision users producing scientific datasets and being able to share them with other researchers for further analysis.


IPv6 Address Assignment in OpenStack

In an inquiry “IPv6 and Liberty (or Mitaka)” on the openstack mailing list, Ken D’Ambrosio writes:
> Hey, all. I have a Liberty cloud, and decided for the heck of it to
> start dipping my toe into IPv6. I do have some confusion, however. I
> can choose between SLAAC, DHCPv6 stateful and DHCPv6 stateless — and
> I see some writeups on what they do, but I don’t understand what
> differentiates them. As far as I can tell, they all do pretty much
> the same thing, just with different pieces doing different things.
> E.g., the chart, found here
> (http://docs.openstack.org/liberty/networking-guide/adv-config-ipv6.html
> — page down a little) shows those three options, but it isn’t clear:
> * How to configure the elements involved
> * What they exactly do (e.g., “optional info”? What’s that?)
> * Why there even *are* different choices. Do they offer functionally
> different results?

SLAAC and DHCPv6-stateless use the same mechanism (SLAAC) to provide instances with IPv6 addresses. The only difference between them is that with DHCPv6-stateless, the instance can also use DHCPv6 requests to obtain information other than its own address, such as nameserver addresses. So between SLAAC and DHCPv6-stateless, I would always prefer DHCPv6-stateless—it’s a strict superset in terms of functionality, and I don’t see any particular risks associated with it.

DHCPv6-stateful is a different beast: It will use DHCPv6 to give an instance its IPv6 address. DHCPv6 actually fits OpenStack’s model better than SLAAC.

Why DHCPv6-Stateful Fits OpenStack Better

OpenStack (Nova) sees it as part of its job to control the IP address(es) that an instance uses. In IPv4 it uses DHCP (always did). DHCP assigns complete addresses—which are under control of OpenStack. In IPv6, stateful DHCPv6 would be the equivalent.

SLAAC is different in that the node (instance) actually chooses its address based on information it gets from the router. The most common method is that the node uses an “EUI-64” address as the local part (host ID) of the address. The EUI-64 is derived from the MAC address by a fixed algorithm. This can work with OpenStack because OpenStack controls the MAC addresses too, and can thus “guess” what IPv6 address an instance will auto-configure on a given network. You see how this is a little less straightforward than OpenStack simply telling the instance what IPv6 address it should use.
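As a rough illustration of the derivation (plain bash, and the MAC address is just an example): the universal/local bit of the first octet is flipped and ff:fe is inserted in the middle.

mac=52:54:00:e0:e1:15
IFS=: read -r a b c d e f <<< "$mac"
printf 'interface ID: %02x%s:%sff:fe%s:%s%s\n' $(( 0x$a ^ 2 )) "$b" "$c" "$d" "$e" "$f"
# prints "interface ID: 5054:00ff:fee0:e115"; with a (made-up) prefix 2001:db8::/64
# the resulting SLAAC address would be 2001:db8::5054:ff:fee0:e115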

In practice, OpenStack’s guessing fails when an instance uses other methods to get the local part, for example “privacy addresses” according to RFC 4941. These will lead to conflicts with OpenStack’s built-in anti-spoofing filters. So such mechanisms need to be disabled when SLAAC is used under OpenStack (including under “DHCPv6-stateless”).

Why We Use SLAAC/DHCPv6-Stateless Anyway

Unfortunately, most GNU/Linux distributions don’t support Stateful DHCPv6 “out of the box” today.

Because we want our users to use unmodified operating systems images and still get usable IPv6, we have grudgingly decided to use DHCPv6-stateless. For configuration information, see SWITCHengines Under the Hood: Basic IPv6 Configuration.

If you decide to go for DHCPv6-stateful, then there’s a Web page that explains how to enable it client-side for a variety of GNU/Linux distributions.
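For example, on distributions that use ifupdown, a stanza along these lines in /etc/network/interfaces turns on a stateful DHCPv6 client (illustrative only; interface names and the rest of your network configuration will differ):

auto eth0
iface eth0 inet dhcp
iface eth0 inet6 dhcp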

It would be nice if all systems honored the “M” (Managed) flag in Router Advertisements and would use DHCPv6 if it is set, otherwise SLAAC.

[This is an edited version of my response, which I wasn’t sure I was allowed to post because I use GMANE to read the list. – SL]


Tuning Virtualized Network Node: multi-queue virtio-net

The infrastructure used by SWITCHengines is composed of about 100 servers. Each one uses two 10 Gb/s network ports. Ideally, a given instance (virtual machine) on SWITCHengines would be able to achieve 20 Gb/s of throughput when communicating with the rest of the Internet. In the real world, however, several bottlenecks limit this rate. We are working hard to address these bottlenecks and bring actual performance closer to the theoretical optimum.

An important bottleneck in our infrastructure is the network node for each region. All logical routers are implemented on this node, using Linux network namespaces and Open vSwitch (OVS). That means that all packets between the Internet and all the instances of the region need to pass through the node.

In our architecture, the OpenStack services run inside various virtual machines (“service VMs” or “pets”) on a dedicated set of redundant “provisioning” (or “prov”) servers. This is good for serviceability and reliability, but has some overhead—especially for I/O-intensive tasks such as network packet forwarding. Our network node is one of those service VMs.

In the original configuration, a single instance would never get more than about 2 Gb/s of throughput to the Internet when measured with iperf. What’s worse, the aggregate Internet throughput for multiple VMs was not much higher, which meant that a single high-traffic VM could easily “starve” all other VMs.

We had investigated many options for improving this situation: DVR, multiple network nodes, SR-IOV, DPDK, moving work to switches etc. But each of these methods has its drawbacks such as additional complexity (and thus potential for new and exciting bugs and hard-to-debug failure modes), lock-in, and in some cases, loss of features like IPv6 support. So we stayed with our inefficient but simple configuration that has worked very reliably for us so far.

Multithreading to the Rescue!

Our network node is a VM with multiple logical CPUs. But when running e.g. “top” during high network load, we noticed that only one (virtual) core was busy forwarding packets. So we started looking for a way to distribute the work over several cores. We found that we could achieve this by enabling three things:

Multi-queue virtio-net interfaces

Our service nodes run under libvirt/Qemu/KVM and use virtio-net network devices. These interfaces can be configured to expose multiple queues. Here is an example of an interface definition in libvirt XML syntax which has been configured for eight queues:

 <interface type='bridge'>
   <mac address='52:54:00:e0:e1:15'/>
   <source bridge='br11'/>
   <model type='virtio'/>
   <driver name='vhost' queues='8'/>
   <virtualport type='openvswitch'/>
 </interface>

A good rule of thumb is to set the number of queues to the number of (virtual) CPU cores of the system.

Multi-threaded forwarding in the network node VM

Within the VM, kernel threads need to be allocated to the interface queues. This can be achieved using ethtool -L:

ethtool -L eth3 combined 8

This should be done during interface initialization, for example in a “pre-up” action in /etc/network/interfaces. But it seems to be possible to change this configuration on a running interface without disruption.
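A corresponding stanza in /etc/network/interfaces could look like this (a sketch only, reusing the interface name from the example above; ethtool -l eth3 shows the current and maximum channel counts):

auto eth3
iface eth3 inet manual
    pre-up ethtool -L eth3 combined 8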

A recent version of the Open vSwitch datapath

Much of the packet forwarding on the network node is performed by OVS. Its “datapath” portion is integrated into the Linux kernel. Our systems normally run Ubuntu 14.04, which includes the Linux 3.13 kernel. The OVS kernel module isn’t included with that kernel, but is installed separately from the openvswitch-datapath-dkms package, which corresponds to the relatively old OVS version 2.0.2. Although the OVS kernel datapath is supposed to have been multi-threaded since forever, we found that in our setup, upgrading to a newer kernel was vital for getting good (OVS) network performance.

The current Ubuntu 16.04.1 LTS release includes a fairly new Linux kernel based on 4.4. That kernel also has the OVS datapath module included by default, so that the separate DKMS package is no longer necessary. Unfortunately we cannot upgrade to Ubuntu 16.04 because that would imply upgrading all OpenStack packages to OpenStack “Mitaka”, and we aren’t quite ready for that. But thankfully, Canonical makes newer kernel packages available for older Ubuntu releases as part of their “hardware enablement” effort, so it turns out to be very easy to upgrade 14.04 to the same new kernel:

sudo apt-get install -y --install-recommends linux-generic-lts-xenial

And after a reboot, the network node should be running a fresh Linux 4.4 kernel with the OVS 2.5 datapath code inside.

Results

A simple test is to run multiple netperf TCP_STREAM tests in parallel from a single bare-metal host to six VMs running on separate nova-compute nodes behind the network node.

Each run consists of six netperf TCP_STREAM measurements started in parallel, whose throughput values are added together. Each figure is the average over ten consecutive runs with identical configuration.
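Such a run can be reproduced with a small loop like the following (hostnames are placeholders; netperf has to be installed on the source host and netserver on the six target VMs):

# six parallel 30-second TCP_STREAM tests; the reported throughputs are summed
for vm in vm1 vm2 vm3 vm4 vm5 vm6; do
  netperf -H "$vm" -t TCP_STREAM -l 30 &
done
wait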

The network node VM is set up with 8 vCPUs, and the two interfaces that carry traffic are configured with 8 queues each. We vary the number of queues that are actually used using ethtool -L iface combined n. (Note that even the 1-queue case does not exactly correspond to the original situation; but it’s the closest approximation that we had time to test.)

Network node running 3.13.0-95-generic kernel

1: 3.28 Gb/s
2: 3.41 Gb/s
4: 3.51 Gb/s
8: 3.57 Gb/s

Making use of multiple queues gives very little benefit.

Network node running 4.4.0-36-generic kernel

1: 3.23 Gb/s
2: 6.00 Gb/s
4: 8.02 Gb/s
8: 8.42 Gb/s (8.75 Gb/s with 12 target VMs)

Here we see that performance scales up nicely with multiple queues.

The maximum possible throughput in our setup is lower than 10 Gb/s, because the network node VM uses a single physical 10GE interface for both sides of traffic. And traffic between the network node and the hypervisors is sent encapsulated in VXLAN, which adds some overhead (roughly 50 bytes of additional headers per packet).

Outlook

Now we know how to enable multi-core networking for hand-configured service VMs (“pets”) such as our network node. But what about the VMs under OpenStack’s control?

Starting in Liberty, Nova supports multi-queue virtio-net. Our benchmarking cluster was still running Kilo, so we could not test that yet. But stay tuned!



SWITCHengines Under the Hood: Basic IPv6 Configuration

My last post, IPv6 Finally Arriving on SWITCHengines, described what users of our IaaS offering can expect from our newly introduced IPv6 support: Instances using the shared default network (“private”) will get publicly routable IPv6 addresses.

This post explains how we set this up, and why we decided to go this route.  We hope that this is interesting to curious SWITCHengines users, and useful for other operators of OpenStack infrastructure.

Before IPv6: Neutron, Tenant Networks and Floating IP

[Feel free to skip this section if you are familiar with Tenant Networks and Floating IPs.]

SWITCHengines uses Neutron, the current generation of OpenStack Networking.  Neutron supports user-definable networks, routers and additional “service” functions such as Load Balancers or VPN gateways.  In principle, every user can build her own network (or several) isolated from the other tenants.  There is a default network accessible to all tenants.  It is called private, which I find quite confusing because it is totally not private, but shared between all the tenants.  But it has a range of private (in the sense of RFC 1918) IPv4 addresses—a subnet in OpenStack terminology—that is used to assign “fixed” addresses to instances.

There is another network called public, which provides global connectivity.  Users cannot connect instances to it directly, but they can use Neutron routers (which include NAT functionality) to route between the private (RFC 1918) addresses of their instances on tenant networks (whether the shared “private” or their own) and the public network, and by extension, the Internet.  By default, they get “outbound-only” connectivity using 1:N NAT, like users behind a typical broadband router.  But they can also request a Floating IP, which can be associated with a particular instance port.  In this case, a 1:1 NAT provides both outbound and inbound connectivity to the instance.
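With the Kilo-era neutron client, requesting a Floating IP and attaching it to an instance port looks roughly like this (the UUIDs are placeholders):

neutron floatingip-create public
neutron floatingip-associate <floatingip-id> <instance-port-id>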

The router between the shared private network and the external public network was provisioned by us; it is called private-router.  Users who build their own tenant networks and want to connect them with the outside world need to set up their own routers.

This is a fairly standard setup for OpenStack installations, although some operators, especially in the “public cloud” business, forgo private addresses and NAT, and let customers connect their VMs directly to a network with publicly routable addresses.  (Sometimes I wish we’d have done that when we built SWITCHengines—but IPv4 address conservation arguments were strong in our minds at the time.  Now it seems hard to move to such a model for IPv4.  But let’s assume that IPv6 will eventually displace IPv4, so this will become moot.)

Adding IPv6: Subnet, router port, return route—that’s it

So at the outset, we have

  • a shared internal network called private
  • a provider network with Internet connectivity called public
  • a shared router between private and public called private-router

We use the “Kilo” (2015.1) version of OpenStack.

As another requirement, the “real” network underlying the public network (in our case a VLAN) needs connectivity to the IPv6 Internet.

Create an IPv6 Subnet with the appropriate options

And of course we need a fresh range of IPv6 addresses that we can route on the Internet.  A single /64 will be sufficient.  We use this to define a new Subnet in Neutron:

neutron subnet-create --ip-version 6 --name private-ipv6 \
  --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless \
  private 2001:620:5ca1:80f0::/64

Note that we use dhcpv6-stateless for both ra-mode and address-mode.  This will actually use SLAAC (stateless address autoconfiguration) and router advertisements to configure IPv6 on the instance.  Stateless DHCPv6 could be used to convey information such as name server addresses, but I don’t think we’re actively using that now.

We should now see a radvd process running in an appropriate namespace on the network node.  And instances—both new and pre-existing!—will start to get IPv6 addresses if they are configured to use SLAAC, as is the default for most modern OSes.
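Two quick checks at this point, one on the network node and one inside an instance (the interface name inside the instance may differ):

# on the network node: a radvd process should now exist for the router
pgrep -af radvd

# inside an instance: a SLAAC address from the new /64 should show up
ip -6 addr show dev eth0 | grep 2001:620:5ca1:80f0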

Create a new port on the shared router to connect the IPv6 Subnet

Next, we need to add a port to the shared private-router that connects this new subnet with the outside world via the public network:

neutron router-interface-add private-router private-ipv6

Configure a return route on each upstream router

Now the outside world also needs a route back to our IPv6 subnet.  The subnet is already part of a larger aggregate that is routed toward our two upstream routers.  It is sufficient to add a static route for our subnet on each of them.  But where do we point that route to, i.e. what should be the “next hop”? We use the link-local address of the external (gateway) port of our Neutron router, which we can find out by looking inside the namespace for the router on the network node.  Our router private-router has UUID 2b8d1b4f-1df1-476a-ab77-f69bb0db3a59.  So we can run the following command on the network node:

$ sudo ip netns exec qrouter-2b8d1b4f-1df1-476a-ab77-f69bb0db3a59 ip -6 addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
55: qg-2d73d3fb-f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
 inet6 2001:620:5ca1:80fd:f816:3eff:fe00:30d7/64 scope global dynamic
 valid_lft 2591876sec preferred_lft 604676sec
 inet6 fe80::f816:3eff:fe00:30d7/64 scope link
 valid_lft forever preferred_lft forever
93: qr-02b9a67d-24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
 inet6 2001:620:5ca1:80f0::1/64 scope global
 valid_lft forever preferred_lft forever
 inet6 fe80::f816:3eff:fe7d:755b/64 scope link
 valid_lft forever preferred_lft forever
98: qr-6aaf629f-19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
 inet6 fe80::f816:3eff:feb6:85f4/64 scope link
 valid_lft forever preferred_lft forever

The port we’re looking for is the one whose name starts with qg-, the gateway port toward the external network.  The address we’re looking for is the one starting with fe80:, the link-local address.

The “internal” subnet has address 2001:620:5ca1:80f0::/64, and VLAN 908 (ve908 in router-ese) is the VLAN that connects our network node to the upstream router.  So this is what we configure on each of our routers using the “industry-standard CLI”:

ipv6 route 2001:620:5ca1:80f0::/64 ve 908 fe80::f816:3eff:fe00:30d7

And we’re done! IPv6 packets can flow between instances on our private network and the Internet.

Coming up

Of course this is not the end of the story.  While our customers were mostly happy that they suddenly got IPv6, there are a few surprises that came up.  In a future episode, we’ll tell you more about them and how they can be addressed.