SWITCH Cloud Blog


Hosting and computing public scientific datasets in the cloud

SWITCH offers a cloud computing service called SWITCHengines, built on OpenStack and Ceph. These computing resources are targeted at research use, where the demand for Big Data workloads (Hadoop, Spark) is increasing. Big Data analysis requires access to large datasets, and in several use cases the analysis is done against publicly available datasets. To add value to the cloud computing service, our plan is to provide a place where researchers can store their datasets and make them available to others. A possible alternative at the moment is Amazon EC2, because many datasets are available for free in Amazon S3 as long as the computation is done within the Amazon infrastructure. Storing as many datasets as Amazon does is challenging, because the raw space used in an object storage system is usually three times the size of the data. Each dataset is in the order of hundreds of terabytes, so storing a few of them with a replication factor of 3 means working in the petabyte domain.

Technical challenges

While it sounds like an easy task to share some scientific data in a cloud datacenter, there are some problems you will have to address.

1) The size of the datasets is challenging. It costs money to host them. To make the service financially sustainable you must find reduced-redundancy solutions that offer a reasonable cost and a reasonable risk of data loss. This is acceptable because these datasets are public, and most likely other providers host a copy as well, so in case of disaster data recovery will be possible. The key idea is to store the dataset locally to speed up access to the data during computation, not to provide persistent archival storage. Before we can host a dataset, we have to download it from another source. This first copy operation is also challenging because of the size of the data involved.

2) We need a proper ACL system to control who can access the data. We can’t just publish all the datasets with public access. Some datasets require the user to sign an End User Agreement.

How we store the data

We use Ceph as the storage backend in our cloud datacenter. We have Ceph pools for the OpenStack rbd volumes, but we also use the rados gateway to provide an object storage service, accessible via S3- and Swift-compatible APIs. We believe that the best solution is to store the scientific datasets on object storage, because the size of the data requires a technical solution that can scale horizontally, spreading the data across many heterogeneous disks.

Our Ceph cluster normally works with a replication factor of 3, which means that each object exists in three replicas on three different servers. To handle large amounts of data we deployed erasure-coded pools in our Ceph cluster in production: at the cost of higher CPU load on the storage nodes, it is possible to store the data with an expansion factor of 1.5 instead of 3. We found the cephnotes blog post very useful; it explains how to create a new Ceph pool with erasure coding to be used with the rados gateway. We are now running this setup in production, and when we create an S3 bucket we can decide whether the bucket will land on the normal pool or on the erasure-coded pool. To make the decision we use the bucket-location command line option, as in this example:

s3cmd mb --bucket-location=:ec-placement s3://mylargedatabucket

It is important to note that the bucket-location option only has to be given when creating the S3 bucket; objects written at a later time will automatically go to the right pool.
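For reference, this is a minimal sketch of how such a pool can be created; the profile and pool names are our own choices, the placement group counts are just examples, and k=2/m=1 is what gives the 1.5 expansion factor mentioned above:

ceph osd erasure-code-profile set ec-profile k=2 m=1
ceph osd pool create .rgw.buckets.ec 128 128 erasure ec-profile

The new pool then has to be registered as an additional placement target (here called ec-placement) in the rados gateway region and zone configuration, as described in the cephnotes post.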

How we download the first copy

How do you download a dataset of 200TB from an external source? How long does it take? The task is of course not trivial; to address this problem Amazon started its Snowball service, where the data is shipped to you on a physical device via snail mail. Because we are an NREN, we decided to download the datasets over the GEANT network from other institutions that host them. We used rclone for our download operations: a modern object storage client written in Go that supports multiple protocols. Working with large amounts of data we soon ran into some issues with the client, and the community behind rclone was very active and helped us. “Fail to copy files larger than 80GB” was the most severe problem we had. We also helped improve the documentation after finding that Swift keys with slashes do not work with Ceph in Swift emulation mode.
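As an illustration, a transfer with rclone looks roughly like this; the remote names and bucket paths are placeholders, and the actual endpoints and credentials are defined beforehand with rclone config:

rclone config
rclone copy --transfers 8 source-s3:public-dataset switch-s3:mylargedatabucket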

We also had some issues with the rados gateway. When we write data, it goes into a Ceph bucket. In theory the data in the bucket can be written and read using both the S3 and the Swift APIs that the rados gateway implements. In reality we found out that there is a difference in how the checksums for multipart objects are computed when using the S3 or the Swift API. This means that if you write an object, you have to read it back with the same API you used for writing, otherwise your client will warn you that the object is corrupted. We notified the Ceph developers about this problem by opening a bug report, which is still open.

Set the ACL to the dataset

When we store a dataset we would like the cloud admin accounts to have read-write access, and the users that are allowed to read the data to have read-only access. This means that after the dataset is uploaded, we should be able to easily grant and revoke access to users and projects. S3 and Swift manage ACLs in very different ways. In S3 the concept of inheriting the ACL from a parent bucket does not exist, which means that a specific ACL must be set on each individual object. We read this in the official documentation:

Bucket and object permissions are completely independent; an object does not inherit the permissions from its bucket. For example, if you create a bucket and grant write access to another user, you will not be able to access that user’s objects unless the user explicitly grants you access.

This does not scale with the size of the dataset, because making an API call for each object is really time consuming. There is a workaround, which is to mark the bucket as completely public, bypassing the authorization for all objects. This does not match our use case, where some users have read-only permission but the dataset is not public. As a viable alternative, we create a dedicated OpenStack tenant for each dataset, e.g. ‘Dataset X readonly access’, and we grant and revoke access by adding and removing users from this special tenant, which is a fast operation, as sketched below. It then becomes very important to set the correct ACL when writing the objects for the first time. Unfortunately our favourite client rclone had no support for setting ACLs, so we had to use s3cmd.
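Granting and revoking access then comes down to ordinary OpenStack role assignments; the project, user and role names here are hypothetical:

openstack role add --project dataset-x-readonly --user alice _member_
openstack role remove --project dataset-x-readonly --user alice _member_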

We found out that s3cmd also ignores the --acl-grant flag; we reported the bug to the community, but it is still open. This means that for each object you want to write you need two HTTP requests: one to actually write the object and one to set the proper ACL.
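In practice the upload of every single object therefore looks like the following sketch, where the bucket, the object name and the identity granted read access are placeholders:

s3cmd put chunk-0001.dat s3://mylargedatabucket/dataset/chunk-0001.dat
s3cmd setacl --acl-grant=read:dataset-x-readonly s3://mylargedatabucket/dataset/chunk-0001.dat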

In OpenStack Swift you can set the ACL on a Swift container, and all the objects in the container inherit it. This sounds very good, but we are not running a real OpenStack Swift deployment; we serve objects from Ceph through the rados gateway’s implementation of the Swift API.

During our tests we found out that the rados gateway Swift API implementation is not complete: ACLs are not implemented. Other OpenStack operators reported that native Swift deployments work well with ACLs and their inheritance by the objects. However, when it comes to ACLs and large objects, Swift is not bug-free either.
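On a native Swift deployment, the container-level ACL we would like to use would be a one-liner; the container and project names here are hypothetical:

swift post dataset-x --read-acl "dataset-x-readonly:*"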

Access the dataset

To process the data without wasting time copying it from the object store to an HDFS deployment, the best approach is to attach Hadoop directly to the object store, in what is called streaming mode. The result of the computation can be stored on the object store as well, or on a smaller HDFS cluster. To get started we published this tutorial.
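As a rough example, a MapReduce job can read its input directly from the object store through Hadoop's s3a connector by pointing it at the rados gateway endpoint; the endpoint, the credentials, the bucket paths and the examples jar name below are placeholders that depend on your deployment and Hadoop version:

hadoop jar hadoop-mapreduce-examples.jar wordcount -D fs.s3a.endpoint=os.example.switch.ch -D fs.s3a.access.key=ACCESS_KEY -D fs.s3a.secret.key=SECRET_KEY s3a://mylargedatabucket/dataset/ s3a://results/wordcount/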

Unfortunately the latest versions of Hadoop require the S3 backend to support the AWS4 signature, which will not work out of the box with your SWITCHengines credentials because of a bug triggered when using Keystone together with the rados gateway. As a workaround, we have to manually add the user on our rados gateway deployment to make the AWS4 signature work correctly.
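The manual workaround is essentially a sketch like the following, creating the rados gateway user explicitly instead of relying on it being created lazily on the first Keystone-authenticated request; the uid and display name are illustrative and must match the identity Keystone maps the requests to:

radosgw-admin user create --uid=<keystone-user-id> --display-name="dataset user"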

Conclusion

Making large scientific datasets available on a multi-tenant cloud is still challenging, but not impossible. Bringing the data close to the compute power is of great importance, and letting users compute on public scientific datasets makes a scientific cloud service more attractive.

Moreover, we envision users producing scientific datasets themselves and sharing them with other researchers for further analysis.