

Adding 60 Terabytes to a Ceph Cluster

[Note: This post was republished from the now-defunct “petablog”]

BCC – an Experiment that “Escaped the Lab”

Starting in Fall 2012, we built a small prototype “cloud” consisting of about ten commodity servers running OpenStack and Ceph.  That project was called Building Cloud Competence (BCC), and the primary intended purpose was to acquire experience running such systems.  We worked with some “pilot” users, both external (mostly researchers) and internal (“experimental” services).  As these things go, experiments become beta tests, and people start relying on them… so this old BCC cluster now supports several visible applications such as SWITCH’s SourceForge mirror, the SWITCHdrive sync & share service, as well as SWITCHtube, our new video distribution platform.  In particular, SWITCHtube uses our “RadosGW” service (similar to Amazon’s S3) to stream HTML5 video directly from our Ceph storage system.
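
Because RadosGW speaks the S3 protocol, an application like SWITCHtube only needs an ordinary S3 client to hand video objects to browsers. As a rough illustration (this is not SWITCHtube's actual code; the endpoint, bucket, and key names are made up), a client could generate a time-limited download URL with boto3 like this:

    import boto3

    # Hypothetical endpoint and credentials for a RadosGW instance.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://radosgw.example.org",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Pre-signed URL that lets a browser fetch the video object directly
    # from RadosGW for one hour, without needing its own credentials.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "videos", "Key": "lecture-001.mp4"},
        ExpiresIn=3600,
    )
    print(url)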

Our colleagues from the Interaction Enabling team would like to enhance SWITCHtube by adding more of the content produced by users of the SWITCHcast lecture recording system. This could amount to 20-30 TB of new data on our Ceph cluster. Until last week, the cluster consisted of fifty-three 3 TB disks distributed across eight hosts, for a total raw storage capacity of 159 TB, corresponding to 53 TB of usable storage given the three-way replication that Ceph uses by default. That capacity was already about 40% used. To accommodate the expected influx of data, we decided to upgrade capacity by adding new disks.
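
For the record, here is the back-of-envelope arithmetic behind those numbers (the 40% utilization is approximate, as stated above):

    # Capacity of the cluster before the upgrade.
    disks = 53
    disk_size_tb = 3
    replication = 3                           # Ceph's default three-way replication

    raw_tb = disks * disk_size_tb             # 159 TB raw
    usable_tb = raw_tb / replication          # 53 TB usable
    free_tb = usable_tb * (1 - 0.40)          # roughly 32 TB still free
    print(raw_tb, usable_tb, round(free_tb))  # -> 159 53.0 32

With 20-30 TB of new data expected, that free space would soon have become uncomfortably tight, hence the upgrade.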

Since we bought the original disks less than two years ago, the maximum capacity for this type of disk (the low-power, low-cost, low-performance drives intended for "home NAS" applications) has increased from 3 TB to 6 TB. So by adding just ten new disks, we could increase total raw capacity by almost 38%. We found that for a modest investment we could significantly extend the usable lifetime of the cluster. Our friendly neighborhood hardware store had 14 of these drives in stock, so we quickly ordered ten and installed them in our servers the next morning.
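
As a quick sanity check of the 38% figure (and of the 60 terabytes in the title):

    # Effect of adding ten 6 TB disks.
    old_raw_tb = 53 * 3                      # 159 TB before the upgrade
    added_raw_tb = 10 * 6                    # the 60 TB of the title
    new_raw_tb = old_raw_tb + added_raw_tb   # 219 TB raw afterwards

    increase = added_raw_tb / old_raw_tb     # ~0.377, i.e. almost 38%
    new_usable_tb = new_raw_tb / 3           # ~73 TB usable with 3x replication
    print(new_raw_tb, round(100 * increase, 1), round(new_usable_tb))  # -> 219 37.7 73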

Rebalancing the Cluster

Now, the Ceph cluster adapts to changes such as disks being added or lost (e.g. by failing) by redistributing data across the cluster. This is a great feature because it makes the infrastructure very easy to grow and very robust to failures. It is also quite impressive to watch, because the redistribution makes use of the capacity of the entire cluster. Unfortunately, it tends to have a noticeable impact on performance as experienced by other users of the storage system, in particular when they write to storage. So, to minimize annoyance to users, we scheduled the integration of the new disks for late Friday afternoon.
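
For reference, Ceph also offers knobs to throttle backfill so that rebalancing competes less with client I/O. The following sketch shows the kind of commands involved (the values are illustrative, not necessarily what we used, and the defaults differ between Ceph releases):

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command (assumes an admin keyring on this host)."""
        return subprocess.check_output(("ceph",) + args, text=True)

    # Limit concurrent backfill and recovery operations per OSD so that
    # client I/O keeps more of the disk and network bandwidth.
    ceph("tell", "osd.*", "injectargs",
         "--osd_max_backfills 1 --osd_recovery_max_active 1")

    # Watch the cluster status while data is being redistributed.
    print(ceph("-s"))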

[Figure ceph-insert-6TB-48h: per-server disk write rates during the 48 hours around the insertion of the 6 TB disks]

We use Graphite for performance monitoring of various aspects of the BCC cluster. Here is one of the Ceph-related graphs, showing what happened when the disks were added to the cluster as new "OSDs" (Object Storage Daemons). The graph shows, for each physical disk server in the Ceph cluster, the rate of data written to disk, summed up across all disks of a given server. The grey, turquoise, and yellow curves correspond to servers h4, h1s, and h0s, respectively. These are the servers that received the new 6 TB disks: h4 got 5, h1s got 3, and h0s got 2.
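
Graphite exposes the same data through its render API, so per-server write rates can also be pulled programmatically. A sketch of how that might look (the Graphite host and the metric names are made up; the real series aggregate the per-disk write rates of each server):

    import requests

    GRAPHITE = "http://graphite.example.org"   # hypothetical Graphite instance
    servers = ["h4", "h1s", "h0s"]

    for host in servers:
        # Hypothetical metric path; sumSeries() adds up the per-disk write rates.
        target = "sumSeries(servers.%s.iostat.*.write_bytes)" % host
        resp = requests.get(GRAPHITE + "/render",
                            params={"target": target, "from": "-48h", "format": "json"})
        for series in resp.json():
            values = [v for v, ts in series["datapoints"] if v is not None]
            if values:
                print(host, "peak write rate: %.0f MB/s" % (max(values) / 1e6))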

We can see that the process of reshuffling data took about 20 hours, starting shortly after 17:00 on Friday and finishing around 13:30 on Saturday. The rate of writing to the new disks exceeded one gigabyte per second for several hours.

Throughput is limited by the speed at which the local filesystem can write to the disks (in particular the new ones) over their 6 Gb/s SATA channels, and by how fast the copies of the data can be retrieved from the old disks. Since most of the replication traffic crosses the network, the network could also become a bottleneck. But each of our servers has dual 10GE connections, so the network supports more throughput per server than the disks can handle. Why does it get slower over time? One reason, I suspect, is that writing to a fresh filesystem is faster than writing to one that already holds data, but I'm not sure that is a sufficient explanation.
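
To put rough numbers on that argument (the per-disk write rate is an assumption; something like 150 MB/s of sustained sequential writes is typical for this class of drive):

    # Rough per-server bottleneck comparison, using h4 (five new disks) as the example.
    sata_link_mb_per_s = 6e9 / 8 / 1e6     # 750 MB/s per 6 Gb/s SATA channel, before overhead
    disk_write_mb_per_s = 150              # assumed sustained write rate of one NAS-class drive
    nic_mb_per_s = 2 * 10e9 / 8 / 1e6      # 2500 MB/s for dual 10GE

    new_disks = 5
    backfill_writes_mb_per_s = new_disks * disk_write_mb_per_s   # ~750 MB/s landing on the new disks
    print(sata_link_mb_per_s, backfill_writes_mb_per_s, nic_mb_per_s)

Even on the busiest server, the writes to the new disks stay well below what the dual 10GE links could deliver, which is consistent with the graph above.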

Outlook

Based on the experience from the BCC project, SWITCH decided to start building cloud infrastructure in earnest. We secured extensible space in two university data center locations in Lausanne and Zurich, and started deploying new clusters in Spring 2014. We are now fine-tuning the configuration to ensure reliable operation, scalability, and fast, future-proof networking. These activities are supported by the CUS program P-2 "information scientifique" as project "SCALE". We hope to make OpenStack-based self-service VMs available to the first external users this Fall.