SWITCH Cloud Blog



Deploy Kubernetes v1.8.3 on OpenStack with native Neutron networking

Hello,
I wrote in the past about how to deploy Kubernetes on SWITCHengines (OpenStack) using this Ansible playbook. When I wrote that article, I did not care about the networking setup and just used the proposed weavenet plugin. Then I went to the OpenStack Summit in Sydney and saw the great presentation by Angus Lees. It was the right time to see it, because I had recently watched this video explaining how Kubernetes networking works on GCE. Coming back to OpenStack, Angus mentioned that the Kubernetes master can talk to Neutron and inject routes into the tenant router, providing connectivity without NAT among the Pods that live on different instances. This makes troubleshooting easier and keeps an MTU of 1500 between the Pods.

It looked very easy: just use

--network-plugin=kubenet

and specify the router UUID in the cloud config.
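For reference, this is roughly what that looks like on the kubelet and in the cloud config. It is only a sketch: the [Global] credentials below are placeholders and the exact keys depend on your Keystone setup, while router-id is the UUID of your tenant's Neutron router (the same one shown further down).

# kubelet flags (sketch; in addition to your usual flags)
--network-plugin=kubenet --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf

# /etc/kubernetes/cloud.conf (placeholder credentials)
[Global]
auth-url=https://keystone.example.org:5000/v3
username=myuser
password=mypassword
tenant-name=myproject
domain-name=Default
region=RegionOne

[Route]
router-id=b11216cb-a725-4006-9a55-7853d66e5894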

Our first tests, with version 1.7.0, did not work. First of all, I had to fix the Kubernetes documentation, because the syntax for specifying the router UUID was wrong. Then I had a problem with security groups disappearing from the instances. After troubleshooting and asking for help on the Kubernetes Slack channel, I found out that I was hitting a known gophercloud bug.

The bug had already been fixed in gophercloud by the time I found it, but I learned that Kubernetes pins an older version of this library in the folder “vendor/github.com/gophercloud/gophercloud”. So the only way to get the updated library was to upgrade to Kubernetes v1.8.0, or any newer version that includes this commit.

After a bit of testing, everything works now. The changes are summarised in this PR, or you can just use the master branch of my git repository.

After you deploy, the Kubernetes master assigns each OpenStack instance a smaller /24 subnet out of the ClusterCIDR network (usually a /16 address space). The Pods get addresses from the subnet assigned to their instance. The Kubernetes master injects static routes into the Neutron router so that packets can be routed to the Pods, and it also configures the Neutron ports of the instances with the correct allowed_address_pairs value, so that the traffic is not dropped by the OpenStack anti-spoofing rules.
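The subnet allocation and the route injection are handled by the controller manager. A minimal sketch of the relevant kube-controller-manager flags, assuming a ClusterCIDR of 10.96.0.0/16 as in the output below (adapt the path and CIDR to your deployment):

# kube-controller-manager flags (sketch; in addition to your usual flags)
--cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf
--allocate-node-cidrs=true --cluster-cidr=10.96.0.0/16
--node-cidr-mask-size=24 --configure-cloud-routes=true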

This is what the routes on the OpenStack router look like:

$ openstack router show b11216cb-a725-4006-9a55-7853d66e5894 -c routes
+--------+--------------------------------------------------+
| Field  | Value                                            |
+--------+--------------------------------------------------+
| routes | destination='10.96.0.0/24', gateway='10.8.10.3'  |
|        | destination='10.96.1.0/24', gateway='10.8.10.8'  |
|        | destination='10.96.2.0/24', gateway='10.8.10.11' |
|        | destination='10.96.3.0/24', gateway='10.8.10.10' |
+--------+--------------------------------------------------+

And this is what the allowed_address_pairs field on the port of one instance looks like:

$ openstack port show 42f2a063-a316-4fe2-808c-cd2d4ed6592f -c allowed_address_pairs
+-----------------------+------------------------------------------------------------+
| Field                 | Value                                                      |
+-----------------------+------------------------------------------------------------+
| allowed_address_pairs | ip_address='10.96.1.0/24', mac_address='fa:16:3e:3e:34:2c' |
+-----------------------+------------------------------------------------------------+

There is of course more work to be done.

I will improve the Ansible playbook to create the OpenStack router and network automatically; at the moment these steps are done manually before starting the playbook.
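For the record, the manual preparation is just a handful of openstack CLI calls along these lines (the names, the subnet range and the external network are only examples):

$ openstack network create k8s-net
$ openstack subnet create k8s-subnet --network k8s-net --subnet-range 10.8.10.0/24
$ openstack router create k8s-router
$ openstack router add subnet k8s-router k8s-subnet
$ openstack router set k8s-router --external-gateway public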

Working with --network-plugin=kubenet is actually deprecated, so I have to understand what the long-term plan for this kind of deployment is.

The Kubernetes master still runs on a single VM; the playbook could be extended to provide an HA setup.

I would really like to get feedback from users of Kubernetes on OpenStack. If you use this playbook, please let me know, and if you improve it, pull requests on GitHub are very welcome! 🙂



(Ceph) storage (server) power usage

We have been running Ceph in production for SWITCHengines since mid-2014, and are at the third generation of servers now.

SWITCHengines storage server evolution

  • First generation, since March 2014: 2U Dalco based on Intel S2600GZ, 2×E5-2650v2 CPUs, 128GB RAM, 2×200GB Intel DC S3610 SSD, 12×WD SE 4TB
  • Second generation, since Dec. 2015: 2U Dalco based on Intel S2600WTT, 1×E5-2620v4 CPU, 64GB RAM, 2×200GB Intel DC S3610 SSD, 12×WD SE 4TB
  • Third generation, since June 2017: 1U Quanta S1Q-1ULH-8, 1×Xeon D-1541 CPU, 64GB RAM, 2×240GB Micron 5100 MAX SSD, 12×HGST Ultrastar He8 (8TB)

What all servers have in common: 2×10GE (SFP+ DAC) network connections, redundant power supplies, simple BMC modules connected to a separate GigE network.

We run all those servers together in a single large Ceph RADOS cluster (actually we have two clusters in different towns, but for this article I focus on just the larger and more heavily loaded one). The cluster has 480 OSDs, contains about 500TiB user data, mostly RBD block devices used by OpenStack instances, and some S3 object storage, including video streaming directly from RadosGW to browsers. Cluster-wide I/O rates during my measurements were around 2’700 IOPS, 150MB/s read, 65MB/s write. We didn’t apply any particular optimization for energy or otherwise.
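For the curious, figures like these can be read straight from the cluster's own reporting (output omitted here):

$ ceph -s    # health, OSD count, client I/O rates (IOPS, read/write throughput)
$ ceph df    # raw and per-pool space usage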

To understand the story behind the server types: Initially we used the same server chassis for compute and storage servers. We also used the same relatively generous CPU and RAM configurations. This would have allowed us to turn compute into storage servers (or vice-versa) relatively easily. When purchasing the second server generation we saved some money by reducing CPU power and RAM. For the third generation, we opted for increased density and efficiency made possible by a “system-on-a-chip” (Xeon D)-based server design.

Power measurement results

All these servers have IPMI-accessible power sensors. Last week my colleagues did some measurements with an external power meter, and found that (for the servers they tested—not all types, for lack of time) the IPMI readings are within 5% or so of the values from the “real” power meter. Good enough!

Unfortunately we don’t yet feed IPMI measurements into any of our continuous measurement tools (Carbon/Graphite/Grafana or Nagios). If you do that, please use the comments to tell us how you set this up.
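In the meantime, ad-hoc checks are easy enough by hand. On BMCs that support DCMI, something like the following returns the current and average power draw, and could be wrapped in a small script that feeds a Carbon/Graphite backend; the BMC address and credentials are placeholders:

$ ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> dcmi power reading
$ ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> sensor    # all sensors, incl. power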

But recently I looked at the IPMI power consumption readings for these servers during a time of relatively light use (weekend) and got the following results:

  • Gen 1: 248W
  • Gen 2: 205W
  • Gen 3: 155W

Note that the Gen 3 servers have larger disks, so Ceph puts twice as much data on them, and thus they get double the IOPS of the old servers. Still, they use significantly less power. This is partly due to the simplified mainboard and more modern CPU, and partly due to the helium-filled disks, which only draw ~4.5W each (when idle) as opposed to ~7.5W for the older 4TB drives.

Cost to power a Terabyte-year of user data

Just for fun, I also performed some cost calculations in relation to usable space, under the following assumptions:

  1. We pay 0.15 €/kWh. (Actually I used CHF, but it doesn’t really matter—some countries pay more than this even without overhead, others pay only half, so this would also cover some of the other directly energy-dependent costs like A/C and redundancy. Anyway, it’s about the relative costs. 🙂)
  2. We can fill disks up to an average of 70% before things get messy.

When using traditional three-way replication, storing a usable Terabyte for a year costs us the following amounts just for power:

  • Gen 1: € 29.10
  • Gen 2: € 24.05
  • Gen 3: € 9.09 (note again that these have twice the capacity)
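To make the arithmetic explicit for the Gen 1 case (the other generations work the same way):

248 W × 8760 h/year ≈ 2172 kWh/year; at 0.15 €/kWh that is about €326 per server and year
usable space: 12 × 4 TB × 0.7 (fill level) / 3 (replicas) ≈ 11.2 TB per server
€326 / 11.2 TB ≈ €29.10 per usable TB and year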

If we assume Erasure Coding with 50% overhead, e.g. 2+1, then the power cost would go down to

  • Gen 1: € 14.55
  • Gen 2: € 12.03
  • Gen 3: € 4.55

We could consider even more space-efficient EC configurations, but I don’t have any experience with that… “left as an exercise for the reader”.

In conclusion, we could say that advances in hardware (more efficient servers, larger and more efficient disks), software (EC in Ceph), as well as our own optimizations (less spare CPU/RAM) have brought down the power component of our storage costs by a factor of 6.5 over these 3.5 years. Not bad, huh? Of course there are trade-offs: the new servers offer fewer IOPS per unit of space, EC uses more CPU and disk-read operations, etc.

The next frontier: powering down idle disks

Finally, I also took an unused server of the Gen 3 ones (we keep a few powered down until we need them—I powered one on for the test). It consumed 136W, not much less than the 155W under (light) load. Although the 12 8TB disks weren’t even mounted, they were spinning. Putting them all in “standby” mode with sudo hdparm -y lowered the total system load to just 82W. So for infrequently accessed “cold storage”, there’s even more room for optimization—although it might be tricky to leverage standby mode in practice with a system such as Ceph. At least scrubbing strategy would need to be adapted, I guess.