A bigger-picture view of RexRay.
RexRay wraps around libStorage, a generalized storage API that fronts various "providers". These providers are what actually implement the storage. libStorage is essentially an abstraction layer that allows systems to connect to these providers without needing to install provider-specific drivers.
libStorage and the big picture
If we look at the logical view of libStorage, it maps out as follows:
The client and server portions can run on separate machines, with one server providing access for many clients. In many deployments this is the preferred pattern, as it centralizes the storage gateway configuration. It is also possible to install both on a single machine, provided the storage provider can handle multiple concurrent requests; S3 has no issue with this, as it is designed to be highly distributed.
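As a sketch of what that split looks like in practice, a REX-Ray configuration file might resemble the following. The keys shown are illustrative of the general shape (a libstorage section naming the service, plus a driver section with credentials); consult the REX-Ray documentation for the exact schema, and the placeholder values are of course not real credentials.

```yaml
# Server side (the storage gateway) -- illustrative sketch
rexray:
  logLevel: warn
libstorage:
  host: tcp://0.0.0.0:7979
  embedded: true
  service: s3fs
s3fs:
  accessKey: <your-aws-access-key>
  secretKey: <your-aws-secret-key>

# A client host would instead point its libstorage section at the server:
# libstorage:
#   host: tcp://gateway.example.com:7979
#   service: s3fs
```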
Installing everything on a single host is closer to how the plugin functions, wrapping all of the components into a single entity. The following graphic shows how we currently have things configured in the examples:
Note how the client and server configurations are bound together. How is it that the plugin can affect the operation of the host machine? Remember that when you ran docker plugin install rexray/s3fs:0.9.2, the plugin asked for certain permissions. The permissions you granted allow the processes inside the plugin to integrate with the underlying host system configuration.
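For reference, the permission grant can also be pre-approved non-interactively. This sketch assumes Docker is installed; the S3FS_ACCESSKEY/S3FS_SECRETKEY settings shown are placeholders you would fill in with your own credentials.

```
# Install the plugin; Docker lists the privileges it requests
# and asks you to grant them before proceeding.
docker plugin install rexray/s3fs:0.9.2 \
  S3FS_ACCESSKEY=<access-key> S3FS_SECRETKEY=<secret-key>

# Or pre-approve the privilege grants (use with care):
docker plugin install --grant-all-permissions rexray/s3fs:0.9.2

# Inspect the plugin, including its settings, after the fact:
docker plugin inspect rexray/s3fs:0.9.2
```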
Once installed, Docker volumes can be created and managed via the plugin: requests are passed through by Docker and then orchestrated by the local server. One change with this model concerns deletion protection. Volumes are usually protected from deletion via a reference count. If you are using libStorage with a central server, this is still true, as the server maintains the count. With the plugin, however, the reference count is kept at the node level, so the plugin is only aware of the containers on a single node. This is not a problem with S3, especially if you manage volume creation and deletion centrally, but it is a consideration. Also note that the S3FS plugin, as of version 0.9.2, cannot delete an S3 bucket unless the bucket is empty and has never been used (just created) as a Docker volume. This has to do with the need to force the deletion, which does not happen by default in order to protect data. (It's a bit more complicated than that, but the takeaway is that you might just want to delete buckets from the AWS console or CLI.)
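To make the reference-count point concrete, here is a minimal Python sketch (entirely hypothetical, not REX-Ray's actual code) of why a node-local count can mislead when a volume is shared across nodes:

```python
class NodeVolumeTracker:
    """Hypothetical sketch of a per-node volume reference count.

    Each node counts only the mounts *it* performs, so a volume that
    looks unused locally may still be mounted on another node.
    """

    def __init__(self):
        self.refs = {}  # volume name -> local mount count

    def mount(self, volume):
        self.refs[volume] = self.refs.get(volume, 0) + 1

    def unmount(self, volume):
        if self.refs.get(volume, 0) > 0:
            self.refs[volume] -= 1

    def can_remove(self, volume):
        # "Safe to remove" only from this node's point of view!
        return self.refs.get(volume, 0) == 0


# Two nodes sharing one S3-backed volume:
node_a, node_b = NodeVolumeTracker(), NodeVolumeTracker()
node_a.mount("shared-bucket")

# Node B has no local references, so it would allow removal even
# though node A still has the volume mounted.
print(node_b.can_remove("shared-bucket"))  # True
print(node_a.can_remove("shared-bucket"))  # False
```

A central libStorage server avoids this by keeping a single count for all nodes; with the plugin model, coordinating volume deletion centrally (or via the AWS console/CLI, as noted above) fills the same role.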
What exactly is a Docker plugin?
Before Docker 1.13, Docker supported plugins through a simple API that allowed other executables to interact with the Docker engine. While easy to set up, this created an ecosystem of extensions that each had their own mechanisms for installation and setup. Starting with Docker 1.13, a new plugin system was introduced in which the plugin runs inside of a container. This is a far more elegant solution in that it provides a single mechanism for locating, downloading, and deploying plugins.
Even though the plugin is delivered as a container image, you cannot start it using docker image pull or docker container run; you need to use the docker plugin set of sub-commands. This is because the plugin uses the container as its implementation and delivery mechanism, but raises the abstraction up a level by requiring certain specific files to be in place for things such as requesting permissions and enforcing a contract. It can be a bit confusing because plugins show up in a search (for example, docker search rexray), so it can be hard to distinguish between "bare" container images and plugins. You will need to rely on the description in order to differentiate them.
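One practical way to tell them apart (assuming Docker is installed) is that only installed plugins appear under the plugin sub-commands, which is also where their lifecycle is managed:

```
# Search results mix ordinary images and plugins:
docker search rexray

# Installed plugins show up here, with their enabled/disabled
# state -- ordinary container images never will:
docker plugin ls

# The dedicated sub-commands manage the plugin lifecycle:
docker plugin disable rexray/s3fs:0.9.2
docker plugin enable rexray/s3fs:0.9.2
docker plugin rm rexray/s3fs:0.9.2
```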
Hopefully this series has provided a broad background for this set of technologies and given you enough to understand their potential. Here are a few links to follow as you explore further:
- RexRay plugin
- S3FS plugin volume deletion issue
- Docker HUB S3FS repo
- AWS IAM users guide
- AWS IAM policy variables
- Docker plugin command reference