Practical persistent cloud storage for Docker in AWS using RexRay - pt 2

Posted by Eric Noriega

Jul 21, 2017 | Docker, Amazon Web Services, AWS, RexRay

RexRay is a plugin module for Docker that provides the ability to use shared storage as a Docker volume. It is quick to set up and provides near-seamless data sharing between containers. We review its basic design and detail tips for its use in the AWS environment.


In a previous installment we set up the RexRay plugin; we will now expand on its use in the AWS environment using S3. Let's start with some AWS IAM policy review.

RexRay can be used to interchange data with other processes through S3, but by design it is focused primarily on inter-container file exchange. Access to S3 is governed by the policy associated with the IAM account. The access keys you downloaded earlier are what associate each API request with that IAM account.

Previously we installed the plugin using:

docker plugin install rexray/s3fs:0.9.2 S3FS_ACCESSKEY=XXXXXXXXXXXXX S3FS_SECRETKEY=XXXXXXXXXXXXXXXXX


We can see how the plugin is configured by running the following command:

docker plugin inspect rexray/s3fs:0.9.2

Note that docker plugin inspect displays the credentials, so anyone with access to the Docker daemon can see the AWS access keys. Assuming this is an acceptable condition (it may or may not be), the practical considerations revolve around not just setting up the connection, but also limiting and controlling the scope of changes those keys allow.
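To see exactly what is exposed, you can pull just the environment settings out of the inspect output using Docker's standard --format flag. This is a quick sketch; the exact JSON path may vary slightly between plugin versions:

docker plugin inspect -f '{{.Settings.Env}}' rexray/s3fs:0.9.2

    (Prints the plugin's environment, including the S3FS_ACCESSKEY and S3FS_SECRETKEY values.)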

The RexRay plugin is fairly careful about removing buckets. Even so, we may want to limit the damage possible through the AWS credentials, which can always be used out-of-band to make API calls or through the AWS CLI. To do this, we are going to limit access to the storage using IAM policy.

Basic policy

We'll start with a simple set of policies providing full access to a single bucket, and then tighten that to read-only access to a given bucket. This first policy is the minimum set of permissions needed to use S3. These permissions allow the plugin to function without granting access to any particular content.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
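Assuming the AWS CLI is configured with the same access keys, a quick out-of-band check confirms the baseline permissions work:

aws s3 ls

    (Lists your buckets; this exercises s3:ListAllMyBuckets and should succeed.)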

Restrict access to a single bucket

Now we add the ability to read and write a specific bucket. This gives full access to that one bucket. You'll want to change the name of the bucket here to whatever you're using to test things out (so probably not inky-dink-rexray-volume).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inky-dink-rexray-volume",
        "arn:aws:s3:::inky-dink-rexray-volume/*"
      ]
    }
  ]
}


We'll attach this directly to the user as an inline policy using the AWS console.

In the console, select the IAM service, then select Users.

[Screenshot: IAM dashboard]


Choose the user you are using for plugin storage access.  Remove any policies that you might have associated with the user (1), and then select "Add inline policy" (2).

[Screenshot: removing an existing policy from the user]

Select "Custom Policy"; you can then paste in the policy. Give the policy a name, and select "Apply Policy".

[Screenshot: adding the inline policy]
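If you prefer to script this, the same attachment can be done with the AWS CLI. This is a sketch: the user name rexray-user and policy name rexray-s3-access are placeholders for whatever you chose, and policy.json is the policy document above saved to a file:

aws iam put-user-policy \
    --user-name rexray-user \
    --policy-name rexray-s3-access \
    --policy-document file://policy.json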

Testing the setup

Since we have already set up the plugin, we don't need to install it or associate the keys again; our new policy is now in force. You should see that you can no longer create buckets:

docker volume create --driver rexray/s3fs:0.9.2 newvolume-for-test

    (This fails, because we don't have rights to create a bucket.)

We can use the specific volume without issue:

docker run -it -v inky-dink-rexray-volume:/myvol ubuntu bash

    (Now inside the container:)

# cd /myvol
# date >mydate
# ls -l
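Because the volume is backed by S3, the file we just wrote should be visible as an object in the bucket. Assuming the AWS CLI is configured with the same keys, you can confirm this from outside the container:

aws s3 ls s3://inky-dink-rexray-volume/

    (The mydate object should appear in the listing.)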

Read-only access to a bucket

By tweaking the set of permissions granted, we can provide guaranteed read-only access. Docker can control read-write access to each individual mount using the :ro flag added to the run command (for example, docker container run -it -v myvolume:/home/root:ro), but enforcing read-only through policy covers bucket creation and deletion as well as all I/O requests, regardless of how the volume is mounted.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::inky-dink-rexray-volume",
        "arn:aws:s3:::inky-dink-rexray-volume/*"
      ]
    }
  ]
}

Note that by removing s3:PutObject and s3:DeleteObject from the allowed actions, the same statement still permits get operations without the ability to put or delete objects. We'll leave it to you to try this policy out on your own.
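As a quick way to verify, assuming the read-only policy is now attached and the mydate file from the earlier test still exists in the bucket:

docker run -it -v inky-dink-rexray-volume:/myvol ubuntu bash
# cat /myvol/mydate

    (Reads still succeed.)

# date >/myvol/mydate2

    (The write should fail once s3fs tries to upload the object, since the policy no longer grants s3:PutObject.)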

In the next installment we will look at giving Docker users control over their own sets of buckets...
