SWITCH Cloud Blog



Openstack Horizon runs on Kubernetes in production at SWITCH

In April we upgraded the SWITCHengines OpenStack Horizon dashboard to the OpenStack Pike version. But this upgrade was a little bit special: it was more than a Horizon upgrade from Newton to Pike.

Our Horizon deployment is now hosted on a Kubernetes cluster. The cluster is deployed using the k8s-on-openstack playbook that we actively develop. We had been testing this Kubernetes deployment for a while, but it is only when you deploy an application on top of it in production that you really learn and fix real problems.

Horizon is a good application to start learning Kubernetes with, because it is completely stateless and does not require any persistent storage. It is just a GUI to the OpenStack API. The user logs in with their credentials, and Horizon obtains a token and starts making API calls on the user's behalf.

Running Horizon in a single Kubernetes pod for a demo probably takes 5 minutes, but deploying it for production usage is far more complex. We needed to address the following issues:

  • Horizontally scale the number of pods, keeping a central memcached or redis cache
  • Allow both IPv4 and IPv6 access to engines.switch.ch
  • Define the Load Balancing architecture
  • Implement a persistent logging system

If you want to jump straight to the solution of all these problems, you can have a look at the project SWITCH-openstack-horizon-k8s-deployment where we have published all the Dockerfiles and the Kubernetes descriptors to recreate our deployment.

Scale Horizontally

Horizon performs much faster when it has access to a memory cache, which is the recommended way to deploy it in production. We decided to go for a Redis cache.

By creating a Redis service named redis-master in our namespace, we are able to use the special environment variable ${REDIS_MASTER_SERVICE_HOST} when booting the Horizon container, to make sure all the instances point to the same cache server.

This is a good example of how you combine two services in a Kubernetes namespace: we can horizontally scale the Horizon pods, while the Horizon deployment stays independent of the Redis deployment.
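
As a minimal sketch (the real descriptors are in the repository linked above), the Redis side is just a Service named redis-master in front of the Redis pod; the selector label here is a hypothetical one:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis-master   # hypothetical label set on the Redis pod
  ports:
  - port: 6379
    targetPort: 6379

Every pod started afterwards in the same namespace automatically gets the REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_SERVICE_PORT environment variables, which is how the Horizon containers find the cache.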

IPv4 and IPv6

We always publish our services on IPv6. In our previous Kubernetes demos we used the OpenStack LBaaS to expose services to the outside world. Unfortunately, in the Newton version of OpenStack the LBaaS lacks proper IPv6 integration. To publish a production service on Kubernetes, we suggest using an ingress controller. There are several kinds available, but we used the standard Nginx ingress controller. The key idea is that we have a K8s node with an interface exposed to the public Internet, where a privileged Docker container runs with --net=host. The container runs Nginx, which can bind to IPv6 and IPv4 on the node, but of course it can also reach any other pod on the cluster network.
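
As an illustration of the key part of that setup (this is a sketch, not our exact descriptor; the image tag and node label are illustrative), the controller pod simply requests the host network namespace:

# excerpt from an nginx ingress controller manifest (sketch)
spec:
  template:
    spec:
      hostNetwork: true        # same effect as docker run --net=host
      nodeSelector:
        role: ingress          # hypothetical label on the node that holds the public IPv4/IPv6 addresses
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0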

Define the Load Balancing architecture

I already wrote above that if you need IPv6, you should not use the Openstack LBaaSv2. However, I am also going to explain why I would not use that kind of load balancer even for IPv4.

The first picture shows the network diagram of a LBaaSv2 deployment. The LoadBalancer is implemented as a network namespace on the network node, called qlbaas-<uuid>, in which a HAProxy process is running. This is a L4 LoadBalancer. The drawback of this architecture is that when an instance boots, the default gateway configured via DHCP is the IP address of the neutron router. When we expose a service with the floating IP configured on the outer interface of the LBaaS, the Load Balancer must perform a DNAT and SNAT operation in order to force the traffic onto a symmetric return path. This means that the IP packets hitting the Pod have completely lost the information about the source IP address of the original client. Because it is a pure L4 load balancer, we have no way to carry this lost information in an HTTP header. This prevents the operator from building any useful logging system, because once the traffic arrives at the pod, the information about the client has been filtered out.

In the next picture we have a look at how the Nginx ingress works. In this case the external traffic is received on a public floating IP that is configured on the virtual machine running the ingress pod, in this case on the master. We terminate the TLS connection at the nginx-ingress. This is necessary because the ingress also has to perform SNAT and DNAT, but it adds the X-Forwarded-For header to the HTTP requests, which we use to populate our log files. We could not add the header if we were just moving encrypted packets around.
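
The Ingress resource then just declares the TLS secret and the backend Service; roughly like this (the secret and Service names are placeholders, the real descriptors are in the repository):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: horizon
spec:
  tls:
  - hosts:
    - engines.switch.ch
    secretName: engines-switch-ch-tls    # hypothetical Secret with certificate and key
  rules:
  - host: engines.switch.ch
    http:
      paths:
      - path: /
        backend:
          serviceName: horizon           # hypothetical Service in front of the Horizon pods
          servicePort: 80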

Another advantage of this solution is that it uses just a normal instance to implement the ingress, which means you can use it in a way that is totally independent of the version of OpenStack you are running on.

In the future you might be able to use the newer OpenStack Octavia Load Balancer, but I have not investigated that yet. All I know is that the solution is quite similar, except that the load balancer runs inside an OpenStack-managed service VM.

Implement a persistent logging system

Pods are short lived and distributed over different VMs that are themselves ephemeral. To collect the logs, we run docker with the journald log driver. Once this is set up, all the docker containers running on the host send their logging output to journald. We then collect this information with journalbeat, which ships the data to our Elasticsearch cluster. This part is not yet released in our public playbook because it is not very portable: if you don't have a ready-to-use ELK cluster, you would get no benefit from running journalbeat.
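
Enabling the journald log driver is a small change on every node; something like this in /etc/docker/daemon.json (or the equivalent dockerd flag), followed by a restart of the Docker daemon:

{
  "log-driver": "journald"
}

From there, journalctl CONTAINER_NAME=<name> shows the logs of a single container, and journalbeat ships the whole journal to Elasticsearch.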

Conclusion

We have now been running in production for almost a month, and we have found the system to be robust and stable. We had no complaints, so we can say that the migration was seamless for our users. We have learned a lot from this experience.

In the next blog post we will describe how we implemented the metrics monitoring, to observe how much memory and how many CPU cores each pod is consuming. Make sure you keep an eye on our blog for updates.



Openstack Keystone authentication for your Kubernetes cluster

At SWITCH we are looking into providing a Container-Platform-as-a-Service solution. We are working with Kubernetes and Openshift to gauge what is possible and how such a service could be structured. It would be really nice to use the existing Openstack username and password to authenticate to Kubernetes. We tested this solution and it works great.

How does it work? Let's start from the client side.

Kubernetes users use the kubectl client to access the cluster. The good news is that since version v1.8.0 of the client, kubectl is able to read the usual openstack environment variables, contact keystone to request a token, and forward the request to the kubernetes cluster using that token. This was merged on the 7th of August 2017. I could not find anywhere how to correctly configure the client to use this functionality, so I finally wrote some documentation notes HERE.
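
For reference, the client-side configuration boils down to a user entry in ~/.kube/config that selects the openstack auth provider (this is roughly what my notes describe; the user name is arbitrary):

users:
- name: openstackuser
  user:
    auth-provider:
      name: openstack

With OS_AUTH_URL, OS_USERNAME, OS_PASSWORD and friends set in the environment, kubectl obtains a keystone token and sends it to the API server as a Bearer Token.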

How does it work on the Kubernetes master side ?

The Kubernetes API receives a request with a keystone token. In Kubernetes language this is a Bearer Token. To verify the keystone token, the Kubernetes API server uses a WebHook. What does that mean? It means the Kubernetes API will contact yet another component that is able to authenticate the keystone token.

The k8s-keystone-auth component developed by Dims does exactly this. I tested his code and created a Docker container to integrate k8s-keystone-auth in my kube-system namespace. When you run the k8s-keystone-auth container, you pass the URL of your keystone server as an argument.
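
On the master this means starting the kube-apiserver with --authentication-token-webhook-config-file=<file>, where the file is in kubeconfig format and points at the k8s-keystone-auth endpoint. A sketch, assuming the webhook listens on localhost:8443 (adjust the URL and TLS settings to your deployment):

apiVersion: v1
kind: Config
clusters:
- name: keystone-auth
  cluster:
    server: https://localhost:8443/webhook   # hypothetical k8s-keystone-auth endpoint
    insecure-skip-tls-verify: true           # for testing only
users:
- name: kube-apiserver
contexts:
- name: webhook
  context:
    cluster: keystone-auth
    user: kube-apiserver
current-context: webhook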

If you are deploying your cluster with k8s-on-openstack, you will find this integration summarized in a single commit.

Now that everything is set up, I can try:

source ~/openstackcredentials
kubectl get pods

I will be correctly authenticated by keystone, which verifies my identity, but I will have no authorization to do anything:

Error from server (Forbidden): pods is forbidden: User "saverio.proto@switch.ch" cannot list pods in the namespace "default"

This is because we need to set up some authorization for this keystone user. You can find detailed documentation about RBAC, but here is a simple example:

kubectl create rolebinding saverio-view --clusterrole view --user saverio.proto@switch.ch --namespace default

Now my user is able to view everything in the default namespace, and I will be able to do kubectl get pods.

Of course setting up specific RBAC rules for every user is not optimal. You can at least use the keystone projects, which are mapped to kind: Group in Kubernetes. Here is an example:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: Group
  name: <openstack_project_uuid>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
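
The RoleBinding above refers to a Role called pod-reader that has to exist in the same namespace; a minimal one could look like this:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]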

You can then achieve a “soft multitenancy” where every user belonging to a specific keystone project has permissions limited to a specific namespace. I talk about soft multitenancy because, depending on your networking solution, all the pods from all the namespaces could end up on the same network with a completely open policy.

I would like to thank Dims and the other people on the Slack channel #sig-openstack for the great help while developing this Kubernetes deployment.


Deploy Kubernetes v1.8.3 on Openstack with native Neutron networking

Hello,
I wrote in the past how to deploy Kubernetes on SWITCHengines (Openstack) using this ansible playbook. When I wrote that article, I did not care about the networking setup, and I used the proposed weavenet plugin. Then I went to the Openstack Summit in Sydney and saw the great presentation from Angus Lees. It was the right time to see it, because I had recently watched this video where they explain the networking of Kubernetes when running on GCE. Going back to Openstack, Angus mentioned that the Kubernetes master can talk to neutron to inject routes into the tenant router, providing connectivity without NAT among pods that live on different instances. This makes troubleshooting easier and keeps an MTU of 1500 between the pods.

It looked very easy, just use:

--network-plugin=kubenet

and specify the router UUID in the cloud config.
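
In practice this means adding a [Route] section to the cloud config read by the kubelet and the controller manager. A sketch, with placeholder credentials and the UUID of the tenant router shown further down (double-check the exact key names against the in-tree Openstack provider of your Kubernetes version):

[Global]
# placeholder keystone credentials
auth-url=https://keystone.example.org:5000/v3
username=deployer
password=secret
tenant-id=<tenant uuid>
region=RegionOne

[Route]
# UUID of the tenant router
router-id=b11216cb-a725-4006-9a55-7853d66e5894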

Our first tests with version 1.7.0 did not work. First of all I had to fix the Kubernetes documentation, because the syntax to specify the router UUID was wrong. Then I had a problem with security groups disappearing from the instances. After troubleshooting and asking for help on the Kubernetes slack channel, I found out that I was hitting a known gophercloud bug.

The bug had already been fixed in gophercloud by the time I found it, but I learned that Kubernetes freezes an older version of this library in the folder “vendor/github.com/gophercloud/gophercloud”. So the only way to get the updated library version was to upgrade to Kubernetes v1.8.0, or any newer version including this commit.

After a bit of testing, everything works now. The changes are summarised in this PR, or you can just use the master branch from my git repository.

After you deploy, the K8s master will assign, from the cluster CIDR (usually a /16 address space), a smaller /24 subnet to each Openstack instance. The Pods get addresses from the subnet assigned to their instance. The kubernetes master injects static routes into the neutron router, so that packets can be routed to the Pods. It also configures the neutron ports of the instances with the correct allowed_address_pairs value, so that the traffic is not dropped by the Openstack antispoofing rules.
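
For the curious, the moving parts behind this behaviour are roughly the following options (a sketch of the relevant flags, not our full configuration): the kubelet uses the kubenet plugin, and the controller manager allocates the per-node /24s and programs the cloud routes.

# kubelet (sketch)
--network-plugin=kubenet
--cloud-provider=openstack
--cloud-config=/etc/kubernetes/cloud.conf

# kube-controller-manager (sketch)
--cloud-provider=openstack
--cloud-config=/etc/kubernetes/cloud.conf
--cluster-cidr=10.96.0.0/16
--allocate-node-cidrs=true
--configure-cloud-routes=true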

This is what the routes on the Openstack router look like:

$ openstack router show b11216cb-a725-4006-9a55-7853d66e5894 -c routes
+--------+--------------------------------------------------+
| Field  | Value                                            |
+--------+--------------------------------------------------+
| routes | destination='10.96.0.0/24', gateway='10.8.10.3'  |
|        | destination='10.96.1.0/24', gateway='10.8.10.8'  |
|        | destination='10.96.2.0/24', gateway='10.8.10.11' |
|        | destination='10.96.3.0/24', gateway='10.8.10.10' |
+--------+--------------------------------------------------+

And this is what the allowed_address_pairs on the port of one instance look like:

$ openstack port show 42f2a063-a316-4fe2-808c-cd2d4ed6592f -c allowed_address_pairs
+-----------------------+------------------------------------------------------------+
| Field                 | Value                                                      |
+-----------------------+------------------------------------------------------------+
| allowed_address_pairs | ip_address='10.96.1.0/24', mac_address='fa:16:3e:3e:34:2c' |
+-----------------------+------------------------------------------------------------+

There is of course more work to be done.

I will improve the ansible playbook to automatically create the Openstack router and network; at the moment these steps are done manually before starting the playbook.

Working with network-plugin=kubenet is actually deprecated, so I have to understand what the long-term plan is for this kind of deployment.

The Kubernetes master is still running on a single VM; the playbook could be extended to have an HA setup.

I really would like to have feedback from users of Kubernetes on Openstack. If you use this playbook please let me know, and if you improve it, the Pull Requests on github are very welcome! 🙂


(Ceph) storage (server) power usage

We have been running Ceph in production for SWITCHengines since mid-2014, and are at the third generation of servers now.

SWITCHengines storage server evolution

  • First generation, since March 2014: 2U Dalco based on Intel S2600GZ, 2×E5-2650v2 CPUs, 128GB RAM, 2×200GB Intel DC S3610 SSD, 12×WD SE 4TB
  • Second generation, since Dec. 2015: 2U Dalco based on Intel S2600WTT, 1×E5-2620v4 CPU, 64GB RAM, 2×200GB Intel DC S3610 SSD, 12×WD SE 4TB
  • Third generation, since June 2017: 1U Quanta S1Q-1ULH-8, 1×Xeon D-1541 CPU, 64GB RAM, 2×240GB Micron 5100 MAX SSD, 12×HGST Ultrastar He8 (8TB)

What all servers have in common: 2×10GE (SFP+ DAC) network connections, redundant power supplies, simple BMC modules connected to separate GigE network.

We run all those servers together in a single large Ceph RADOS cluster (actually we have two clusters in different towns, but for this article I focus on just the larger and more heavily loaded one). The cluster has 480 OSDs, contains about 500TiB user data, mostly RBD block devices used by OpenStack instances, and some S3 object storage, including video streaming directly from RadosGW to browsers. Cluster-wide I/O rates during my measurements were around 2’700 IOPS, 150MB/s read, 65MB/s write. We didn’t apply any particular optimization for energy or otherwise.

To understand the story behind the server types: Initially we used the same server chassis for compute and storage servers. We also used the same relatively generous CPU and RAM configurations. This would have allowed us to turn compute into storage servers (or vice-versa) relatively easily. When purchasing the second server generation we saved some money by reducing CPU power and RAM. For the third generation, we opted for increased density and efficiency made possible by a “system-on-a-chip” (Xeon D)-based server design.

Power measurement results

All these servers have IPMI-accessible power sensors. Last week my colleagues did some measurements with an external power meter, and found that (for the servers they tested; not all types, for lack of time) the values from the IPMI sensors are within 5% or so of the values from the “real” power meter. Good enough!

Unfortunately we don’t yet feed IPMI measurements into any of our continuous measurement tools (Carbon/Graphite/Grafana or Nagios). If you do that, please use the comments to tell us how you set this up.
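
For anyone who wants to reproduce the spot readings below, a plain ipmitool query against the BMC is enough; for example (host and credentials are placeholders, and the exact commands and sensor names vary by vendor):

ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> dcmi power reading

If the BMC does not support DCMI, ipmitool sensor lists the individual power sensors instead.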

But recently I looked at the IPMI power consumption readings for these servers during a time of relatively light use (weekend) and got the following results:

  • Gen 1: 248W
  • Gen 2: 205W
  • Gen 3: 155W

Note that the Gen 3 servers have larger disks, so Ceph puts twice as much data on them, and thus they get double the IOPS of the old servers. Still, they use significantly less power. This is partly due to the simplified mainboard and more modern CPU, and partly to the Helium-filled disks, which only draw ~4.5W each (when idle) as opposed to ~7.5W for the older 4TB drives.

Cost to power a Terabyte-year of user data

Just for fun, I also performed some cost calculations in relation to usable space, under the following assumptions:

  1. We pay 0.15 €/kWh. (Actually I used CHF, but it doesn’t really matter; some countries pay more than this even without overhead, others pay only half, so this would also cover some of the other directly energy-dependent costs like A/C and redundancy. Anyway, it’s about the relative costs 🙂)
  2. We can fill disks up to an average of 70% before things get messy.

When using traditional three-way replication, storing a usable Terabyte for a year costs us the following amounts just for power:

  • Gen 1: € 29.10
  • Gen 2: € 24.05
  • Gen 3: € 9.09 (note again that these have twice the capacity)
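
To make the arithmetic behind these numbers explicit, here is the Gen 1 case spelled out:

248 W × 8760 h ≈ 2172 kWh per year → 2172 kWh × €0.15 ≈ €326 per server and year
12 × 4 TB = 48 TB raw → 48 TB / 3 (replication) × 0.7 (fill level) ≈ 11.2 TB usable
€326 / 11.2 TB ≈ €29.10 per usable TB and year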

If we assume Erasure Coding with 50% overhead, e.g. 2+1, then the power cost would go down to

  • Gen 1: € 14.55
  • Gen 2: € 12.03
  • Gen 3: € 4.55

We could consider even more space-efficient EC configurations, but I don’t have any experience with that… “left as an exercise for the reader”.

In conclusion, we could say that advances in hardware (more efficient servers, larger and more efficient disks), software (EC in Ceph), as well as our own optimizations (less spare CPU/RAM) have brought down the power component of our storage costs by a factor of 6.5 over these 3.5 years. Not bad, huh? Of course there are trade-offs: the new servers have lower IOPS per space, EC uses more CPU and disk-read operations etc.

The next frontier: powering down idle disks

Finally, I also looked at an unused Gen 3 server (we keep a few powered down until we need them; I powered one on for the test). It consumed 136W, not much less than the 155W under (light) load. Although the twelve 8TB disks weren’t even mounted, they were spinning. Putting them all in “standby” mode with sudo hdparm -y lowered the total system load to just 82W. So for infrequently accessed “cold storage” there is even more room for optimization, although it might be tricky to leverage standby mode in practice with a system such as Ceph. At least the scrubbing strategy would need to be adapted, I guess.


SWITCHdrive Over IPv6

When we built the SWITCHdrive service on the OpenStack platform that was to become SWITCHengines, that platform didn’t really support IPv6 yet. But since Spring 2016 it does. This week, we enabled IPv6 in SWITCHdrive and performed some internal tests. Today around noon, we published its IPv6 address (“AAAA record”) in the DNS. We quickly saw around 5% of accesses use IPv6 instead of IPv4.

In the evening, this percentage climbed to about 14%. This shows the relatively good support for IPv6 on Swiss broadband (home) networks, notably by the good folks at Swisscom.

The lower percentage during office (and lecture, etc.) hours shows that the IPv6 roll-out to higher education campuses still has some way to go. Our SWITCHlan backbone has been running “dual-stack” (IPv4 and IPv6 in parallel) in production for more than 10 years, and most institutions have added IPv6 configuration to their connections to us. But campus networks are wonderfully complex, so getting IPv6 deployed to every network plug and every wireless access point is a daunting task. Some schools are almost there, including some large ones that don’t use SWITCHdrive—yet!?—so the 5% may underestimate the extent of the roll-out for the overall SWITCH community. The others will follow in their footsteps. They can count on the help of the community and benefit from IPv6 training courses organized by our colleagues in the security and network teams. Contact us if you need help!

[Update: After a few weeks, the proportion of IPv6 traffic increased somewhat. Now we typically see around 10% during office hours and 20% during weekends. So the “retail” sector is still clearly ahead of (our academic) enterprise networks in terms of IPv6 penetration.]



Starting 1000 instances on SWITCHengines

Is it really possible with Openstack to start 1000 instances, make a parallel computation, and then save the data and delete the instances?
To answer this question we tested it on SWITCHengines. I had a lot of trouble getting this to work, and I have to thank the other Openstack operators I have been chatting with: Mattia Belluco, Matteo Panella and Anton Aksola.
Our Openstack control plane is deployed with a dedicated pet VM for each Openstack service (Nova, Cinder, Neutron, Glance and Keystone) and a generic controller VM where we run the mysql and rabbitmq services. This configuration makes it possible to monitor each Openstack service as an isolated VM, and it makes it easier for us to identify bottlenecks in the control plane.
For this experiment we never used the web interface, only the Openstack CLI, with this reference command line:

openstack server create \
--image "Ubuntu Xenial 16.04 (SWITCHengines)" \
--flavor c1.small \
--network demo-network \
--user-data cloud-init.txt \
--key-name mykey \
--min 100 \
--max 100 test

The c1.small flavor has just 1 CPU core and 1GB of RAM.

We did the experiment in 4 steps, trying with 100, 200, 400 and 1000 instances. To make sure that the instances were really started and operational, we used cloud-init to make them phone home to a registration server. This is a very easy cloud-init feature to use; here is an example cloud-init.txt file:


#cloud-config
phone_home:
  url: http://x.x.x.x:8000/$INSTANCE_ID/
  post: [ hostname, fqdn ]

In this github gist we share the python code to run the registration service.

The first test with 100 instances did not work. We tried a few runs, and we always had between 4 and 7 instances that did not start, for various reasons. Monitoring our control plane, we noticed that we were saturating the CPUs and memory of the nova and neutron pets.
We increased the resources for both the nova and the neutron pets from 4 to 16 CPU cores and we doubled the memory from 8 GB to 16 GB.
After these changes we were able to start 100 instances without problems. We noticed that the neutron pet had a higher load than the nova pet during the process of creating 100 instances.

When we tried with 200 instances, they were all reported as Running by Openstack, but we always had between 8 and 20 instances not phoning home. Looking at the serial console with the command:

openstack console log show

we noticed that these instances were not able to get an IP address from the DHCP server, and the DHCP client would give up after 300 seconds. Using the hint that the neutron pet was more loaded than the nova pet, we found out that the nova instances reached the RUNNING state while the corresponding neutron ports were still in the BUILDING phase.
Thinking of a race condition between nova instances and neutron ports, I asked on the Openstack Developers mailing list, and it turned out that we had a wrong configuration.

We changed our nova.conf as follows:

vif_plugging_is_fatal=True
vif_plugging_timeout=300

After fixing the configuration we had the same result, but instead of the instances starting and not being able to obtain an IP address, they never started and were reported in ERROR state by Openstack.
The real challenge was not to schedule 200 instances, but to allocate 200 network ports.
Troubleshooting in this direction, we observed that the rabbitmq queues of the neutron dhcp agents were filling up during port creation. For each created port, the dhcp agent has to add a corresponding line to the file /var/lib/neutron/dhcp/$UUID/host, where $UUID is the corresponding Neutron network UUID.

We looked into the details of what happens when a neutron port is created. Using the guru meditation report, we tracked the culprit down to a slow “ip route list” call.
This command is called every time a neutron port is created:

time sudo ip netns exec qdhcp-7a1cfb7f-2960-45f5-903f-0d602450525a ip route list
default via 10.10.0.1 dev tapaf136b11-a5
10.10.0.0/16 dev tapaf136b11-a5 proto kernel scope link src 10.10.0.2
real 0m0.048s
user 0m0.000s
sys 0m0.016s

However calling the same command within neutron-rootwrap takes about 10 times longer:

time sudo neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-7a1cfb7f-2960-45f5-903f-0d602450525a ip route list dev tapaf136b11-a5
default via 10.10.0.1
10.10.0.0/16 proto kernel scope link src 10.10.0.2
real 0m0.713s
user 0m0.472s
sys 0m0.172s

Once we identified this bottleneck, we changed the configuration again to enable the Openstack rootwrap to work in daemon mode.
We had to change the agent section of neutron.conf:

[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
root_helper_daemon=sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

After this change, we were able to successfully start 200, 400 and 1000 instances.
With 1000 instances we still get an HTTP 504 gateway timeout error.
This is because the nova-api server takes longer than the reverse proxy timeout to answer the request. The reverse proxy replies with HTTP 504, but the nova-api server will later finish processing the request with an HTTP 200. This is easily fixed by using a longer timeout, but we plan to trace the problem in detail to shorten the processing time of the request.
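
The exact knob depends on the reverse proxy sitting in front of nova-api; if it were haproxy, for example, the server timeout of the nova-api backend would be the value to raise, e.g.:

timeout server 300s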

Finally, the answer is yes: with Openstack it really is possible to quickly start 1000 instances and have compute power just when it is needed.