Compare commits


34 Commits
0.7.0 ... 0.7.8

Author SHA1 Message Date
dc8667d36a Return not implemented for logs
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-23 20:50:24 +00:00
137d199cb5 Update to 0.7.7
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-23 20:19:30 +00:00
560c295eb0 Enable labeling containers
Enabling the faasd-provider to label containers

Signed-off-by: Martin Dekov <mvdekov@gmail.com>
2020-02-23 20:06:30 +00:00
93325b713e Enable containerd on reboot
Fixes #48

Signed-off-by: Gabriel Duke <gabeduke@gmail.com>
2020-02-23 20:05:11 +00:00
2307fc71c5 Add log shim and collect command
The collect command redirects function logs to the journal for
viewing with journalctl. faas-cli logs is not implemented yet.
View logs with journalctl -t openfaas-fn:FN_NAME_HERE.

Tested on Dell XPS with Ubuntu Linux. The approach takes
inspiration from the Stellar project.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-23 19:54:49 +00:00
853830c018 Add shim for collecting logs
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-23 19:54:49 +00:00
262770a0b7 Add note for credentials helper
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-22 18:56:16 +00:00
0efb6d492f Update developer docs
* Adds new step for installing containerd with systemd
* Adds warning up top that this is not for newbies :-)

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-17 17:47:11 +00:00
27cfe465ca Update context on DO
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-17 17:27:55 +00:00
d6c4ebaf96 Add help & support
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-17 17:26:18 +00:00
e9d1423315 Add terraform sample
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-17 17:18:24 +00:00
4bca5c36a5 Use 0.7.5 for auth support 2020-02-14 12:00:22 +00:00
10e7a2f07c Add tutorial for private registry
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-13 12:18:15 +00:00
4775a9a77c service: support /var/lib/faasd/.docker/config.json
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-11 12:47:13 +00:00
e07186ed5b deploy: use reference.ParseNormalizedNamed()
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-11 12:47:13 +00:00
2454c2a807 Update README.md 2020-02-08 09:16:14 +00:00
8bd2ba5334 Clarify faasd purpose 2020-02-08 09:14:45 +00:00
c379b0ebcc Lift out developer instructions
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-06 10:03:37 +00:00
226a20c362 Update README.md 2020-02-06 10:01:19 +00:00
02c9dcf74d Increase e2e sleep
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-06 09:53:45 +00:00
0b88fc232d Update to 0.7.4 of faasd
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-06 09:39:49 +00:00
fcd1c9ab54 Fix cloud-config script to use new openfaas org
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-06 09:38:19 +00:00
592f3d3cc0 Move to openfaas org
Closes: #36

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-06 09:36:18 +00:00
b06364c3f4 Update faasd release in cloud-config
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-04 20:27:24 +00:00
75fd07797c Bump travis to use Go 1.13
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-04 20:21:10 +00:00
65c2cb0732 Correct naming of faas-provider
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-04 20:00:08 +00:00
44df1cef98 Update missing reference to faas-containerd
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-04 20:00:08 +00:00
881f5171ee Remove two bad panic statements
Errors should be returned and handled in the caller.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-04 20:00:08 +00:00
970015ac85 Update order of code
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-04 20:00:08 +00:00
283e8ed2c1 Fix provider name to faasd-provider
Not sure how this got reverted or affected, but it was wrong.
The name "faas-containerd" is gone and no longer in use.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-02-04 20:00:08 +00:00
d49011702b Refactor faasd and faas-containerd merge
The use of containerd and CNI functions has been refactored to reuse
the same codebase.

Moved all network functionality into its own directory and package.
Removed the netlink and weave libraries in favor of using CNI plugin
result files.

Renamed the containers handler to functions to clarify its purpose.

Signed-off-by: Carlos de Paula <me@carlosedp.com>
2020-02-04 10:12:43 +00:00
eb369fbb16 Enable fix for secret support 2020-01-28 13:20:27 +00:00
040b426a19 Set all permissions to 0644 vs a mixture
This appeared to prevent the provider's secret code from
creating files in its working directory. The patch makes all
code use the same permission.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-01-28 12:48:00 +00:00
251cb2d08a Update faasd version
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-01-28 12:45:45 +00:00
216 changed files with 13943 additions and 2215 deletions


@ -2,7 +2,7 @@ sudo: required
language: go
go:
- '1.12'
- '1.13'
addons:
apt:
@ -26,5 +26,5 @@ deploy:
on:
tags: true
env:
- GO111MODULE=off
env: GO111MODULE=off

Gopkg.lock (generated)

@ -13,7 +13,7 @@
version = "v0.4.14"
[[projects]]
digest = "1:b28f788c0be42a6d26f07b282c5ff5f814ab7ad5833810ef0bc5f56fb9bedf11"
digest = "1:f06a14a8b60a7a9cdbf14ed52272faf4ff5de4ed7c784ff55b64995be98ac59f"
name = "github.com/Microsoft/hcsshim"
packages = [
".",
@ -33,6 +33,7 @@
"internal/timeout",
"internal/vmcompute",
"internal/wclayer",
"osversion",
]
pruneopts = "UT"
revision = "9e921883ac929bbe515b39793ece99ce3a9d7706"
@ -54,7 +55,7 @@
version = "0.7.1"
[[projects]]
digest = "1:386ca0ac781cc1b630b3ed21725759770174140164b3faf3810e6ed6366a970b"
digest = "1:cf83a14c8042951b0dcd74758fc32258111ecc7838cbdf5007717172cab9ca9b"
name = "github.com/containerd/containerd"
packages = [
".",
@ -102,6 +103,7 @@
"remotes/docker/schema1",
"rootfs",
"runtime/linux/runctypes",
"runtime/v2/logging",
"runtime/v2/runc/options",
"snapshots",
"snapshots/proxy",
@ -113,10 +115,11 @@
version = "v1.3.2"
[[projects]]
digest = "1:7e9da25c7a952c63e31ed367a88eede43224b0663b58eb452870787d8ddb6c70"
digest = "1:e4414857969cfbe45c7dab0a012aad4855bf7167c25d672a182cb18676424a0c"
name = "github.com/containerd/continuity"
packages = [
"fs",
"pathdriver",
"syscallx",
"sysx",
]
@ -167,6 +170,27 @@
revision = "4cfb7b568922a3c79a23e438dc52fe537fc9687e"
version = "v0.7.1"
[[projects]]
digest = "1:bcf36df8d43860bfde913d008301aef27c6e9a303582118a837c4a34c0d18167"
name = "github.com/coreos/go-systemd"
packages = ["journal"]
pruneopts = "UT"
revision = "2d78030078ef61b3cae27f42ad6d0e46db51b339"
version = "v22.0.0"
[[projects]]
digest = "1:92ebc9c068ab8e3fff03a58694ee33830964f6febd0130069aadce328802de14"
name = "github.com/docker/cli"
packages = [
"cli/config",
"cli/config/configfile",
"cli/config/credentials",
"cli/config/types",
]
pruneopts = "UT"
revision = "99c5edceb48d64c1aa5d09b8c9c499d431d98bb9"
version = "v19.03.5"
[[projects]]
digest = "1:e495f9f1fb2bae55daeb76e099292054fe1f734947274b3cfc403ccda595d55a"
name = "github.com/docker/distribution"
@ -178,6 +202,30 @@
pruneopts = "UT"
revision = "0d3efadf0154c2b8a4e7b6621fff9809655cc580"
[[projects]]
digest = "1:10f9c98f627e9697ec23b7973a683324f1d901dd9bace4a71405c0b2ec554303"
name = "github.com/docker/docker"
packages = [
"pkg/homedir",
"pkg/idtools",
"pkg/mount",
"pkg/system",
]
pruneopts = "UT"
revision = "ea84732a77251e0d7af278e2b7df1d6a59fca46b"
version = "v19.03.5"
[[projects]]
digest = "1:9f3f49b4e32d3da2dd6ed07cc568627b53cc80205c0dcf69f4091f027416cb60"
name = "github.com/docker/docker-credential-helpers"
packages = [
"client",
"credentials",
]
pruneopts = "UT"
revision = "54f0238b6bf101fc3ad3b34114cb5520beb562f5"
version = "v0.6.3"
[[projects]]
digest = "1:0938aba6e09d72d48db029d44dcfa304851f52e2d67cda920436794248e92793"
name = "github.com/docker/go-events"
@ -185,6 +233,14 @@
pruneopts = "UT"
revision = "9461782956ad83b30282bf90e31fa6a70c255ba9"
[[projects]]
digest = "1:e95ef557dc3120984bb66b385ae01b4bb8ff56bcde28e7b0d1beed0cccc4d69f"
name = "github.com/docker/go-units"
packages = ["."]
pruneopts = "UT"
revision = "519db1ee28dcc9fd2474ae59fca29a810482bfb1"
version = "v0.4.0"
[[projects]]
digest = "1:fa6faf4a2977dc7643de38ae599a95424d82f8ffc184045510737010a82c4ecd"
name = "github.com/gogo/googleapis"
@ -227,14 +283,6 @@
revision = "6c65a5562fc06764971b7c5d05c76c75e84bdbf7"
version = "v1.3.2"
[[projects]]
digest = "1:582b704bebaa06b48c29b0cec224a6058a09c86883aaddabde889cd1a5f73e1b"
name = "github.com/google/uuid"
packages = ["."]
pruneopts = "UT"
revision = "0cd6bf5da1e1c83f8b45653022c74f71af0538a4"
version = "v1.1.1"
[[projects]]
digest = "1:cbec35fe4d5a4fba369a656a8cd65e244ea2c743007d8f6c1ccb132acf9d1296"
name = "github.com/gorilla/mux"
@ -375,15 +423,15 @@
revision = "d98352740cb2c55f81556b63d4a1ec64c5a319c2"
[[projects]]
digest = "1:2d9d06cb9d46dacfdbb45f8575b39fc0126d083841a29d4fbf8d97708f43107e"
digest = "1:1314b5ef1c0b25257ea02e454291bf042478a48407cfe3ffea7e20323bbf5fdf"
name = "github.com/vishvananda/netlink"
packages = [
".",
"nl",
]
pruneopts = "UT"
revision = "a2ad57a690f3caf3015351d2d6e1c0b95c349752"
version = "v1.0.0"
revision = "f049be6f391489d3f374498fe0c8df8449258372"
version = "v1.1.0"
[[projects]]
branch = "master"
@ -533,8 +581,14 @@
"github.com/containerd/containerd/errdefs",
"github.com/containerd/containerd/namespaces",
"github.com/containerd/containerd/oci",
"github.com/containerd/containerd/remotes",
"github.com/containerd/containerd/remotes/docker",
"github.com/containerd/containerd/runtime/v2/logging",
"github.com/containerd/go-cni",
"github.com/google/uuid",
"github.com/coreos/go-systemd/journal",
"github.com/docker/cli/cli/config",
"github.com/docker/cli/cli/config/configfile",
"github.com/docker/distribution/reference",
"github.com/gorilla/mux",
"github.com/morikuni/aec",
"github.com/opencontainers/runtime-spec/specs-go",


@ -43,5 +43,5 @@
version = "0.14.0"
[[constraint]]
name = "github.com/google/uuid"
version = "1.1.1"
name = "github.com/docker/cli"
version = "19.3.5"


@ -25,8 +25,8 @@ prepare-test:
sudo /sbin/sysctl -w net.ipv4.conf.all.forwarding=1
sudo mkdir -p /opt/cni/bin
curl -sSL https://github.com/containernetworking/plugins/releases/download/$(CNI_VERSION)/cni-plugins-linux-$(ARCH)-$(CNI_VERSION).tgz | sudo tar -xz -C /opt/cni/bin
sudo cp $(GOPATH)/src/github.com/alexellis/faasd/bin/faasd /usr/local/bin/
cd $(GOPATH)/src/github.com/alexellis/faasd/ && sudo /usr/local/bin/faasd install
sudo cp $(GOPATH)/src/github.com/openfaas/faasd/bin/faasd /usr/local/bin/
cd $(GOPATH)/src/github.com/openfaas/faasd/ && sudo /usr/local/bin/faasd install
sudo systemctl status -l containerd --no-pager
sudo journalctl -u faasd-provider --no-pager
sudo systemctl status -l faasd-provider --no-pager
@ -38,7 +38,7 @@ prepare-test:
test-e2e:
sudo cat /var/lib/faasd/secrets/basic-auth-password | /usr/local/bin/faas-cli login --password-stdin
/usr/local/bin/faas-cli store deploy figlet --env write_timeout=1s --env read_timeout=1s
sleep 2
sleep 5
/usr/local/bin/faas-cli list -v
uname | /usr/local/bin/faas-cli invoke figlet
uname | /usr/local/bin/faas-cli invoke figlet --async

README.md

@ -1,16 +1,17 @@
# faasd - serverless with containerd
# faasd - serverless with containerd and CNI 🐳
[![Build Status](https://travis-ci.com/alexellis/faasd.svg?branch=master)](https://travis-ci.com/alexellis/faasd)
[![Build Status](https://travis-ci.com/openfaas/faasd.svg?branch=master)](https://travis-ci.com/openfaas/faasd)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![OpenFaaS](https://img.shields.io/badge/openfaas-serverless-blue.svg)](https://www.openfaas.com)
faasd is a Golang supervisor that bundles OpenFaaS for use with containerd instead of a container orchestrator like Kubernetes or Docker Swarm.
faasd is the same OpenFaaS experience and ecosystem, but without Kubernetes. Functions and microservices can be deployed anywhere with reduced overheads whilst retaining the portability of containers and cloud-native tooling.
## About faasd:
## About faasd
* faasd is a single Golang binary
* faasd is multi-arch, so works on `x86_64`, armhf and arm64
* faasd downloads, starts and supervises the core components to run OpenFaaS
* is a single Golang binary
* can be set-up and left alone to run your applications
* is multi-arch, so works on Intel `x86_64` and ARM out of the box
* uses the same core components and ecosystem of OpenFaaS
![demo](https://pbs.twimg.com/media/EPNQz00W4AEwDxM?format=jpg&name=small)
@ -18,51 +19,126 @@ faasd is a Golang supervisor that bundles OpenFaaS for use with containerd inste
## What does faasd deploy?
* faasd - itself, and its [faas-provider](https://github.com/openfaas/faas-provider)
* [Prometheus](https://github.com/prometheus/prometheus)
* [the OpenFaaS gateway](https://github.com/openfaas/faas/tree/master/gateway)
* faasd - itself, and its [faas-provider](https://github.com/openfaas/faas-provider) for containerd - CRUD for functions and services, implements the OpenFaaS REST API
* [Prometheus](https://github.com/prometheus/prometheus) - for monitoring of services, metrics, scaling and dashboards
* [OpenFaaS Gateway](https://github.com/openfaas/faas/tree/master/gateway) - the UI portal, CLI, and other OpenFaaS tooling can talk to this.
* [OpenFaaS queue-worker for NATS](https://github.com/openfaas/nats-queue-worker) - run your invocations in the background without adding any code. See also: [asynchronous invocations](https://docs.openfaas.com/reference/triggers/#async-nats-streaming)
* [NATS](https://nats.io) for asynchronous processing and queues
You can use the standard [faas-cli](https://github.com/openfaas/faas-cli) with faasd along with pre-packaged functions in the Function Store, or build your own with the template store.
You'll also need:
### faasd supports:
* [CNI](https://github.com/containernetworking/plugins)
* [containerd](https://github.com/containerd/containerd)
* [runc](https://github.com/opencontainers/runc)
You can use the standard [faas-cli](https://github.com/openfaas/faas-cli) along with pre-packaged functions from *the Function Store*, or build your own using any OpenFaaS template.
## Tutorials
### Get started on DigitalOcean, or any other IaaS
If your IaaS supports `user_data` aka "cloud-init", then this guide is for you. If not, then check out the approach and feel free to run each step manually.
* [Build a Serverless appliance with faasd](https://blog.alexellis.io/deploy-serverless-faasd-with-cloud-init/)
### Run locally on MacOS, Linux, or Windows with Multipass.run
* [Get up and running with your own faasd installation on your Mac/Ubuntu or Windows with cloud-config](https://gist.github.com/alexellis/6d297e678c9243d326c151028a3ad7b9)
### Get started on armhf / Raspberry Pi
You can run this tutorial on your Raspberry Pi, or adapt the steps for a regular Linux VM/VPS host.
* [faasd - lightweight Serverless for your Raspberry Pi](https://blog.alexellis.io/faasd-for-lightweight-serverless/)
### Terraform for DigitalOcean
Automate everything within < 60 seconds and get a public URL and IP address back. Customise as required, or adapt to your preferred cloud such as AWS EC2.
* [Provision faasd 0.7.5 on DigitalOcean with Terraform 0.12.0](https://gist.github.com/alexellis/fd618bd2f957eb08c44d086ef2fc3906)
### A note on private repos / registries
To use private image repos, `~/.docker/config.json` needs to be copied to `/var/lib/faasd/.docker/config.json`.
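For example, a minimal sketch of copying your local Docker credentials onto the faasd host (this assumes you have already run `docker login` and that the file contains plain-text `auths` rather than only a credential-helper reference):
```bash
# Make the directory faasd's provider checks at pull time, then copy the credentials
sudo mkdir -p /var/lib/faasd/.docker/
sudo cp ~/.docker/config.json /var/lib/faasd/.docker/config.json
```
The provider looks for this file on each image pull (see the `PrepareImage` change further down), so you should not need to restart faasd afterwards.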
If you'd like to set up your own private registry, [see this tutorial](https://blog.alexellis.io/get-a-tls-enabled-docker-registry-in-5-minutes/).
Beware that running `docker login` on MacOS and Windows may create an empty file, with your credentials stored in the system's credential helper instead.
Alternatively, you can use the `registry-login` command from the OpenFaaS Cloud bootstrap tool (ofc-bootstrap):
```bash
curl -sLSf https://raw.githubusercontent.com/openfaas-incubator/ofc-bootstrap/master/get.sh | sudo sh
ofc-bootstrap registry-login --username <your-registry-username> --password-stdin
# (then enter your password and hit return)
```
The file will be created in `./credentials/`
### Logs for functions
You can view the logs of functions using `journalctl`:
```bash
journalctl -t openfaas-fn:FUNCTION_NAME
faas-cli store deploy figlet
journalctl -t openfaas-fn:figlet -f &
echo logs | faas-cli invoke figlet
```
### Manual / developer instructions
See [here for manual / developer instructions](docs/DEV.md)
## Getting help
### Docs
The [OpenFaaS docs](https://docs.openfaas.com/) provide a wealth of information and are kept up to date with new features.
### Function and template store
For community functions, see `faas-cli store --help`
For templates built by the community, see `faas-cli template store list`. You can also use the `dockerfile` template if you just want to migrate an existing service without the benefits of using a template.
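As a quick illustration (the function and service names here are only examples):
```bash
# Browse and deploy a community function from the store
faas-cli store list
faas-cli store deploy figlet

# List community templates, or scaffold a service with the dockerfile template
faas-cli template store list
faas-cli new my-service --lang dockerfile
```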
### Workshop
[The OpenFaaS workshop](https://github.com/openfaas/workshop/) is a set of 12 self-paced labs and provides a great starting point
### Community support
An active community of almost 3000 users awaits you on Slack. Over 250 of those users are also contributors and help maintain the code.
* [Join Slack](https://slack.openfaas.io/)
## Backlog
### Supported operations
* `faas login`
* `faas up`
* `faas list`
* `faas describe`
* `faas deploy --update=true --replace=false`
* `faas invoke --async`
* `faas invoke`
* `faas rm`
* `faas login`
* `faas store list/deploy/inspect`
* `faas up`
* `faas version`
* `faas invoke --async`
* `faas namespace`
* `faas secret`
Scaling to and from zero is also supported. On a Dell XPS with a small, pre-pulled image, unpausing an existing task took 0.19s and starting a task for a killed function took 0.39s. There may be further optimizations to be gained.
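For example, a typical session exercising several of the supported operations (a sketch which assumes faas-cli is installed and you are already logged in to the gateway):
```bash
faas-cli store deploy figlet
faas-cli list -v
faas-cli describe figlet
uname | faas-cli invoke figlet
uname | faas-cli invoke figlet --async
faas-cli remove figlet
```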
Other operations are pending development in the provider such as:
* `faas logs`
* `faas secret`
* `faas auth`
* `faas logs` - to stream logs on-demand for a known function. For the time being, you can find logs via `journalctl -u faasd-provider`
* `faas auth` - supported for Basic Authentication, but OAuth2 & OIDC require a patch
### Pre-reqs
* Linux
PC / Cloud - any Linux that containerd works on should be fair game, but faasd is tested with Ubuntu 18.04
For Raspberry Pi, Raspbian Stretch or newer also works fine
For MacOS users try [multipass.run](https://multipass.run) or [Vagrant](https://www.vagrantup.com/)
For Windows users, install [Git Bash](https://git-scm.com/downloads) along with multipass or vagrant. You can also use WSL1 or WSL2 which provides a Linux environment.
You will also need [containerd v1.3.2](https://github.com/containerd/containerd) and the [CNI plugins v0.8.5](https://github.com/containernetworking/plugins)
[faas-cli](https://github.com/openfaas/faas-cli) is optional, but recommended.
## Backlog
## Todo
Pending:
@ -75,6 +151,7 @@ Pending:
Done:
* [x] Provide a cloud-config.txt file for automated deployments of `faasd`
* [x] Inject / manage IPs between core components for service to service communication - i.e. so Prometheus can scrape the OpenFaaS gateway - done via `/etc/hosts` mount
* [x] Add queue-worker and NATS
* [x] Create faasd.service and faasd-provider.service
@ -86,217 +163,3 @@ Done:
* [x] Setup custom working directory for faasd `/var/lib/faasd/`
* [x] Use CNI to create network namespaces and adapters
## Tutorial: Get started on armhf / Raspberry Pi
You can run this tutorial on your Raspberry Pi, or adapt the steps for a regular Linux VM/VPS host.
* [faasd - lightweight Serverless for your Raspberry Pi](https://blog.alexellis.io/faasd-for-lightweight-serverless/)
## Tutorial: Multipass & KVM for MacOS/Linux, or Windows (with cloud-config)
* [Get up and running with your own faasd installation on your Mac/Ubuntu or Windows with cloud-config](https://gist.github.com/alexellis/6d297e678c9243d326c151028a3ad7b9)
## Tutorial: Manual installation
### Get containerd
You have three options - binaries for PC, binaries for armhf, or build from source.
* Install containerd `x86_64` only
```sh
export VER=1.3.2
curl -sLSf https://github.com/containerd/containerd/releases/download/v$VER/containerd-$VER.linux-amd64.tar.gz > /tmp/containerd.tar.gz \
&& sudo tar -xvf /tmp/containerd.tar.gz -C /usr/local/bin/ --strip-components=1
containerd -version
```
* Or get my containerd binaries for armhf
Building containerd on armhf is extremely slow.
```sh
curl -sSL https://github.com/alexellis/containerd-armhf/releases/download/v1.3.2/containerd.tgz | sudo tar -xvz --strip-components=2 -C /usr/local/bin/
```
* Or clone / build / install [containerd](https://github.com/containerd/containerd) from source:
```sh
export GOPATH=$HOME/go/
mkdir -p $GOPATH/src/github.com/containerd
cd $GOPATH/src/github.com/containerd
git clone https://github.com/containerd/containerd
cd containerd
git fetch origin --tags
git checkout v1.3.2
make
sudo make install
containerd --version
```
Kill any old containerd version:
```sh
# Kill any old version
sudo killall containerd
sudo systemctl disable containerd
```
Start containerd in a new terminal:
```sh
sudo containerd &
```
#### Enable forwarding
> This is required to allow containers in containerd to access the Internet via your computer's primary network interface.
```sh
sudo /sbin/sysctl -w net.ipv4.conf.all.forwarding=1
```
Make the setting permanent:
```sh
echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
```
### Hacking (build from source)
#### Get build packages
```sh
sudo apt update \
&& sudo apt install -qy \
runc \
bridge-utils
```
You may find alternatives for CentOS and other distributions.
#### Install Go 1.13 (x86_64)
```sh
curl -sSLf https://dl.google.com/go/go1.13.6.linux-amd64.tar.gz > go.tgz
sudo rm -rf /usr/local/go/
sudo mkdir -p /usr/local/go/
sudo tar -xvf go.tgz -C /usr/local/go/ --strip-components=1
export GOPATH=$HOME/go/
export PATH=$PATH:/usr/local/go/bin/
go version
```
#### Or on Raspberry Pi (armhf)
```sh
curl -SLsf https://dl.google.com/go/go1.13.6.linux-armv6l.tar.gz > go.tgz
sudo rm -rf /usr/local/go/
sudo mkdir -p /usr/local/go/
sudo tar -xvf go.tgz -C /usr/local/go/ --strip-components=1
export GOPATH=$HOME/go/
export PATH=$PATH:/usr/local/go/bin/
go version
```
#### Install the CNI plugins:
* For PC run `export ARCH=amd64`
* For RPi/armhf run `export ARCH=arm`
* For arm64 run `export ARCH=arm64`
Then run:
```sh
export ARCH=amd64
export CNI_VERSION=v0.8.5
sudo mkdir -p /opt/cni/bin
curl -sSL https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz | sudo tar -xz -C /opt/cni/bin
```
Run or install faasd, which brings up the gateway and Prometheus as containers
```sh
cd $GOPATH/src/github.com/alexellis/faasd
go build
# Install with systemd
# sudo ./faasd install
# Or run interactively
# sudo ./faasd up
```
#### Build and run `faasd` (binaries)
```sh
# For x86_64
sudo curl -fSLs "https://github.com/alexellis/faasd/releases/download/0.6.2/faasd" \
-o "/usr/local/bin/faasd" \
&& sudo chmod a+x "/usr/local/bin/faasd"
# armhf
sudo curl -fSLs "https://github.com/alexellis/faasd/releases/download/0.6.2/faasd-armhf" \
-o "/usr/local/bin/faasd" \
&& sudo chmod a+x "/usr/local/bin/faasd"
# arm64
sudo curl -fSLs "https://github.com/alexellis/faasd/releases/download/0.6.2/faasd-arm64" \
-o "/usr/local/bin/faasd" \
&& sudo chmod a+x "/usr/local/bin/faasd"
```
#### At run-time
Look in `hosts` in the current working folder or in `/var/lib/faasd/` to get the IP for the gateway or Prometheus
```sh
127.0.0.1 localhost
10.62.0.1 faasd-provider
10.62.0.2 prometheus
10.62.0.3 gateway
10.62.0.4 nats
10.62.0.5 queue-worker
```
The IP addresses are dynamic and may change on every launch.
Since faasd-provider uses containerd heavily, it is not run as a container, but as a stand-alone process. Its port is available via the bridge interface, i.e. `openfaas0`.
* Prometheus will run on the Prometheus IP plus port 9090, i.e. http://[prometheus_ip]:9090/targets
* faasd-provider runs on 10.62.0.1:8081, i.e. directly on the host, and is accessible via the bridge interface from CNI.
* Now go to the gateway's IP address as shown above on port 8080, i.e. http://[gateway_ip]:8080 - you can also use this address to deploy OpenFaaS Functions via the `faas-cli`.
* basic-auth
You will then need to get the basic-auth password; it is written to `/var/lib/faasd/secrets/basic-auth-password` if you followed the above instructions.
The default Basic Auth username is `admin`, which is written to `/var/lib/faasd/secrets/basic-auth-user`. If you wish to use a non-standard user, create this file and add your username (no newlines or other characters).
#### Installation with systemd
* `faasd install` - install faasd and containerd with systemd, this must be run from `$GOPATH/src/github.com/alexellis/faasd`
* `journalctl -u faasd -f` - faasd service logs
* `journalctl -u faasd-provider -f` - faasd-provider service logs
### Appendix
#### Links
https://github.com/renatofq/ctrofb/blob/31968e4b4893f3603e9998f21933c4131523bb5d/cmd/network.go
https://github.com/renatofq/catraia/blob/c4f62c86bddbfadbead38cd2bfe6d920fba26dce/catraia-net/network.go
https://github.com/containernetworking/plugins
https://github.com/containerd/go-cni


@ -11,13 +11,14 @@ runcmd:
- curl -sLSf https://github.com/containerd/containerd/releases/download/v1.3.2/containerd-1.3.2.linux-amd64.tar.gz > /tmp/containerd.tar.gz && tar -xvf /tmp/containerd.tar.gz -C /usr/local/bin/ --strip-components=1
- curl -SLfs https://raw.githubusercontent.com/containerd/containerd/v1.3.2/containerd.service | tee /etc/systemd/system/containerd.service
- systemctl daemon-reload && systemctl start containerd
- systemctl enable containerd
- /sbin/sysctl -w net.ipv4.conf.all.forwarding=1
- mkdir -p /opt/cni/bin
- curl -sSL https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz | tar -xz -C /opt/cni/bin
- mkdir -p /go/src/github.com/alexellis/
- cd /go/src/github.com/alexellis/ && git clone https://github.com/alexellis/faasd
- curl -fSLs "https://github.com/alexellis/faasd/releases/download/0.6.2/faasd" --output "/usr/local/bin/faasd" && chmod a+x "/usr/local/bin/faasd"
- cd /go/src/github.com/alexellis/faasd/ && /usr/local/bin/faasd install
- mkdir -p /go/src/github.com/openfaas/
- cd /go/src/github.com/openfaas/ && git clone https://github.com/openfaas/faasd
- curl -fSLs "https://github.com/openfaas/faasd/releases/download/0.7.7/faasd" --output "/usr/local/bin/faasd" && chmod a+x "/usr/local/bin/faasd"
- cd /go/src/github.com/openfaas/faasd/ && /usr/local/bin/faasd install
- systemctl status -l containerd --no-pager
- journalctl -u faasd-provider --no-pager
- systemctl status -l faasd-provider --no-pager

cmd/collect.go (new file)

@ -0,0 +1,60 @@
package cmd
import (
"bufio"
"context"
"fmt"
"io"
"sync"
"github.com/containerd/containerd/runtime/v2/logging"
"github.com/coreos/go-systemd/journal"
"github.com/spf13/cobra"
)
func CollectCommand() *cobra.Command {
return collectCmd
}
var collectCmd = &cobra.Command{
Use: "collect",
Short: "Collect logs to the journal",
RunE: runCollect,
}
func runCollect(_ *cobra.Command, _ []string) error {
logging.Run(logStdio)
return nil
}
// logStdio copied from
// https://github.com/containerd/containerd/pull/3085
// https://github.com/stellarproject/orbit
func logStdio(ctx context.Context, config *logging.Config, ready func() error) error {
// construct any log metadata for the container
vars := map[string]string{
"SYSLOG_IDENTIFIER": fmt.Sprintf("%s:%s", config.Namespace, config.ID),
}
var wg sync.WaitGroup
wg.Add(2)
// forward both stdout and stderr to the journal
go copy(&wg, config.Stdout, journal.PriInfo, vars)
go copy(&wg, config.Stderr, journal.PriErr, vars)
// signal that we are ready and setup for the container to be started
if err := ready(); err != nil {
return err
}
wg.Wait()
return nil
}
func copy(wg *sync.WaitGroup, r io.Reader, pri journal.Priority, vars map[string]string) {
defer wg.Done()
s := bufio.NewScanner(r)
for s.Scan() {
if s.Err() != nil {
return
}
journal.Send(s.Text(), pri, vars)
}
}


@ -6,7 +6,7 @@ import (
"os"
"path"
systemd "github.com/alexellis/faasd/pkg/systemd"
systemd "github.com/openfaas/faasd/pkg/systemd"
"github.com/pkg/errors"
"github.com/spf13/cobra"
@ -18,7 +18,10 @@ var installCmd = &cobra.Command{
RunE: runInstall,
}
const workingDirectoryPermission = 0644
const faasdwd = "/var/lib/faasd"
const faasdProviderWd = "/var/lib/faasd-provider"
func runInstall(_ *cobra.Command, _ []string) error {
@ -102,7 +105,7 @@ func binExists(folder, name string) error {
func ensureWorkingDir(folder string) error {
if _, err := os.Stat(folder); err != nil {
err = os.MkdirAll(folder, 0600)
err = os.MkdirAll(folder, workingDirectoryPermission)
if err != nil {
return err
}


@ -9,18 +9,19 @@ import (
"os"
"path"
"github.com/alexellis/faasd/pkg/provider/config"
"github.com/alexellis/faasd/pkg/provider/handlers"
"github.com/containerd/containerd"
bootstrap "github.com/openfaas/faas-provider"
"github.com/openfaas/faas-provider/proxy"
"github.com/openfaas/faas-provider/types"
"github.com/openfaas/faasd/pkg/cninetwork"
"github.com/openfaas/faasd/pkg/provider/config"
"github.com/openfaas/faasd/pkg/provider/handlers"
"github.com/spf13/cobra"
)
var providerCmd = &cobra.Command{
Use: "provider",
Short: "Run the faasd faas-provider",
Short: "Run the faasd-provider",
RunE: runProvider,
}
@ -31,7 +32,7 @@ func runProvider(_ *cobra.Command, _ []string) error {
return err
}
log.Printf("faas-containerd starting..\tService Timeout: %s\n", config.WriteTimeout.String())
log.Printf("faasd-provider starting..\tService Timeout: %s\n", config.WriteTimeout.String())
wd, err := os.Getwd()
if err != nil {
@ -39,20 +40,20 @@ func runProvider(_ *cobra.Command, _ []string) error {
}
writeHostsErr := ioutil.WriteFile(path.Join(wd, "hosts"),
[]byte(`127.0.0.1 localhost`), 0644)
[]byte(`127.0.0.1 localhost`), workingDirectoryPermission)
if writeHostsErr != nil {
return fmt.Errorf("cannot write hosts file: %s", writeHostsErr)
}
writeResolvErr := ioutil.WriteFile(path.Join(wd, "resolv.conf"),
[]byte(`nameserver 8.8.8.8`), 0644)
[]byte(`nameserver 8.8.8.8`), workingDirectoryPermission)
if writeResolvErr != nil {
return fmt.Errorf("cannot write resolv.conf file: %s", writeResolvErr)
}
cni, err := handlers.InitNetwork()
cni, err := cninetwork.InitNetwork()
if err != nil {
return err
}
@ -80,6 +81,14 @@ func runProvider(_ *cobra.Command, _ []string) error {
InfoHandler: handlers.MakeInfoHandler(Version, GitCommit),
ListNamespaceHandler: listNamespaces(),
SecretHandler: handlers.MakeSecretHandler(client, userSecretPath),
LogHandler: func(w http.ResponseWriter, r *http.Request) {
if r.Body != nil {
defer r.Body.Close()
}
w.WriteHeader(http.StatusNotImplemented)
w.Write([]byte(`Logs are not implemented for faasd`))
},
}
log.Printf("Listening on TCP port: %d\n", *config.TCPPort)


@ -15,6 +15,11 @@ func init() {
rootCommand.AddCommand(upCmd)
rootCommand.AddCommand(installCmd)
rootCommand.AddCommand(providerCmd)
rootCommand.AddCommand(collectCmd)
}
func RootCommand() *cobra.Command {
return rootCommand
}
var (


@ -2,7 +2,6 @@ package cmd
import (
"fmt"
"io"
"io/ioutil"
"log"
"os"
@ -15,7 +14,7 @@ import (
"github.com/pkg/errors"
"github.com/alexellis/faasd/pkg"
"github.com/openfaas/faasd/pkg"
"github.com/alexellis/k3sup/pkg/env"
"github.com/sethvargo/go-password/password"
"github.com/spf13/cobra"
@ -27,32 +26,6 @@ var upCmd = &cobra.Command{
RunE: runUp,
}
// defaultCNIConf is a CNI configuration that enables network access to containers (docker-bridge style)
var defaultCNIConf = fmt.Sprintf(`
{
"cniVersion": "0.4.0",
"name": "%s",
"plugins": [
{
"type": "bridge",
"bridge": "%s",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "%s",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
},
{
"type": "firewall"
}
]
}
`, pkg.DefaultNetworkName, pkg.DefaultBridgeName, pkg.DefaultSubnet)
const containerSecretMountDir = "/run/secrets"
func runUp(_ *cobra.Command, _ []string) error {
@ -80,10 +53,6 @@ func runUp(_ *cobra.Command, _ []string) error {
return errors.Wrap(basicAuthErr, "cannot create basic-auth-* files")
}
if makeNetworkErr := makeNetworkConfig(); makeNetworkErr != nil {
return errors.Wrap(makeNetworkErr, "error creating network config")
}
services := makeServiceDefinitions(clientSuffix)
start := time.Now()
@ -193,7 +162,7 @@ func makeFile(filePath, fileContents string) error {
return nil
} else if os.IsNotExist(err) {
log.Printf("Writing to: %q\n", filePath)
return ioutil.WriteFile(filePath, []byte(fileContents), 0644)
return ioutil.WriteFile(filePath, []byte(fileContents), workingDirectoryPermission)
} else {
return err
}
@ -248,7 +217,7 @@ func makeServiceDefinitions(archSuffix string) []pkg.Service {
Name: "gateway",
Env: []string{
"basic_auth=true",
"functions_provider_url=http://faas-containerd:8081/",
"functions_provider_url=http://faasd-provider:8081/",
"direct_functions=false",
"read_timeout=60s",
"write_timeout=60s",
@ -301,56 +270,3 @@ func makeServiceDefinitions(archSuffix string) []pkg.Service {
},
}
}
func makeNetworkConfig() error {
netConfig := path.Join(pkg.CNIConfDir, pkg.DefaultCNIConfFilename)
log.Printf("Writing network config...\n")
if !dirExists(pkg.CNIConfDir) {
if err := os.MkdirAll(pkg.CNIConfDir, 0755); err != nil {
return fmt.Errorf("cannot create directory: %s", pkg.CNIConfDir)
}
}
if err := ioutil.WriteFile(netConfig, []byte(defaultCNIConf), 644); err != nil {
return fmt.Errorf("cannot write network config: %s", pkg.DefaultCNIConfFilename)
}
return nil
}
func dirEmpty(dirname string) (b bool) {
if !dirExists(dirname) {
return
}
f, err := os.Open(dirname)
if err != nil {
return
}
defer func() { _ = f.Close() }()
// If the first file is EOF, the directory is empty
if _, err = f.Readdir(1); err == io.EOF {
b = true
}
return
}
func dirExists(dirname string) bool {
exists, info := pathExists(dirname)
if !exists {
return false
}
return info.IsDir()
}
func pathExists(path string) (bool, os.FileInfo) {
info, err := os.Stat(path)
if os.IsNotExist(err) {
return false, nil
}
return true, info
}

docs/DEV.md (new file)

@ -0,0 +1,261 @@
## Manual installation of faasd for development
> Note: if you just want to try out faasd, then it's likely that you're on the wrong page. This is a detailed set of instructions for those wanting to contribute or customise faasd. Feel free to go back to the homepage and pick a tutorial instead.
### Pre-reqs
* Linux
PC / Cloud - any Linux that containerd works on should be fair game, but faasd is tested with Ubuntu 18.04
For Raspberry Pi, Raspbian Stretch or newer also works fine
For MacOS users try [multipass.run](https://multipass.run) or [Vagrant](https://www.vagrantup.com/)
For Windows users, install [Git Bash](https://git-scm.com/downloads) along with multipass or vagrant. You can also use WSL1 or WSL2 which provides a Linux environment.
You will also need [containerd v1.3.2](https://github.com/containerd/containerd) and the [CNI plugins v0.8.5](https://github.com/containernetworking/plugins)
[faas-cli](https://github.com/openfaas/faas-cli) is optional, but recommended.
### Get containerd
You have three options - binaries for PC, binaries for armhf, or build from source.
* Install containerd `x86_64` only
```sh
export VER=1.3.2
curl -sLSf https://github.com/containerd/containerd/releases/download/v$VER/containerd-$VER.linux-amd64.tar.gz > /tmp/containerd.tar.gz \
&& sudo tar -xvf /tmp/containerd.tar.gz -C /usr/local/bin/ --strip-components=1
containerd -version
```
* Or get my containerd binaries for Raspberry Pi (armhf)
Building `containerd` on armhf is extremely slow, so I've provided binaries for you.
```sh
curl -sSL https://github.com/alexellis/containerd-armhf/releases/download/v1.3.2/containerd.tgz | sudo tar -xvz --strip-components=2 -C /usr/local/bin/
```
* Or clone / build / install [containerd](https://github.com/containerd/containerd) from source:
```sh
export GOPATH=$HOME/go/
mkdir -p $GOPATH/src/github.com/containerd
cd $GOPATH/src/github.com/containerd
git clone https://github.com/containerd/containerd
cd containerd
git fetch origin --tags
git checkout v1.3.2
make
sudo make install
containerd --version
```
#### Ensure containerd is running
```sh
curl -sLS https://raw.githubusercontent.com/containerd/containerd/master/containerd.service > /tmp/containerd.service
sudo cp /tmp/containerd.service /lib/systemd/system/
sudo systemctl enable containerd
sudo systemctl daemon-reload
sudo systemctl restart containerd
```
Or run ad-hoc:
```sh
sudo containerd &
```
#### Enable forwarding
> This is required to allow containers in containerd to access the Internet via your computer's primary network interface.
```sh
sudo /sbin/sysctl -w net.ipv4.conf.all.forwarding=1
```
Make the setting permanent:
```sh
echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
```
### Hacking (build from source)
#### Get build packages
```sh
sudo apt update \
&& sudo apt install -qy \
runc \
bridge-utils \
make
```
You may find alternative package names for CentOS and other Linux distributions.
#### Install Go 1.13 (x86_64)
```sh
curl -sSLf https://dl.google.com/go/go1.13.6.linux-amd64.tar.gz > go.tgz
sudo rm -rf /usr/local/go/
sudo mkdir -p /usr/local/go/
sudo tar -xvf go.tgz -C /usr/local/go/ --strip-components=1
export GOPATH=$HOME/go/
export PATH=$PATH:/usr/local/go/bin/
go version
```
You should also add the following to `~/.bash_profile`:
```sh
export GOPATH=$HOME/go/
export PATH=$PATH:/usr/local/go/bin/
```
#### Or on Raspberry Pi (armhf)
```sh
curl -SLsf https://dl.google.com/go/go1.13.6.linux-armv6l.tar.gz > go.tgz
sudo rm -rf /usr/local/go/
sudo mkdir -p /usr/local/go/
sudo tar -xvf go.tgz -C /usr/local/go/ --strip-components=1
export GOPATH=$HOME/go/
export PATH=$PATH:/usr/local/go/bin/
go version
```
#### Install the CNI plugins:
* For PC run `export ARCH=amd64`
* For RPi/armhf run `export ARCH=arm`
* For arm64 run `export ARCH=arm64`
Then run:
```sh
export ARCH=amd64
export CNI_VERSION=v0.8.5
sudo mkdir -p /opt/cni/bin
curl -sSL https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz | sudo tar -xz -C /opt/cni/bin
```
#### Clone faasd and its systemd unit files
```sh
mkdir -p $GOPATH/src/github.com/openfaas/
cd $GOPATH/src/github.com/openfaas/
git clone https://github.com/openfaas/faasd
```
#### Build `faasd` from source (optional)
```sh
cd $GOPATH/src/github.com/openfaas/faasd
make local
```
#### Build and run `faasd` (binaries)
```sh
# For x86_64
sudo curl -fSLs "https://github.com/openfaas/faasd/releases/download/0.7.4/faasd" \
-o "/usr/local/bin/faasd" \
&& sudo chmod a+x "/usr/local/bin/faasd"
# armhf
sudo curl -fSLs "https://github.com/openfaas/faasd/releases/download/0.7.4/faasd-armhf" \
-o "/usr/local/bin/faasd" \
&& sudo chmod a+x "/usr/local/bin/faasd"
# arm64
sudo curl -fSLs "https://github.com/openfaas/faasd/releases/download/0.7.4/faasd-arm64" \
-o "/usr/local/bin/faasd" \
&& sudo chmod a+x "/usr/local/bin/faasd"
```
#### Install `faasd`
```sh
# Install with systemd
sudo cp bin/faasd /usr/local/bin
sudo faasd install
2020/02/17 17:38:06 Writing to: "/var/lib/faasd/secrets/basic-auth-password"
2020/02/17 17:38:06 Writing to: "/var/lib/faasd/secrets/basic-auth-user"
Login with:
sudo cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login -s
```
You can now log in either from this machine or a remote machine using the OpenFaaS UI, or CLI.
Check that faasd is ready:
```
sudo journalctl -u faasd
```
You should see output like:
```
Feb 17 17:46:35 gold-survive faasd[4140]: 2020/02/17 17:46:35 Starting faasd proxy on 8080
Feb 17 17:46:35 gold-survive faasd[4140]: Gateway: 10.62.0.5:8080
Feb 17 17:46:35 gold-survive faasd[4140]: 2020/02/17 17:46:35 [proxy] Wait for done
Feb 17 17:46:35 gold-survive faasd[4140]: 2020/02/17 17:46:35 [proxy] Begin listen on 8080
```
To get the CLI for the command above run:
```sh
curl -sSLf https://cli.openfaas.com | sudo sh
```
#### At run-time
Look in `hosts` in the current working folder or in `/var/lib/faasd/` to get the IP for the gateway or Prometheus
```sh
127.0.0.1 localhost
10.62.0.1 faasd-provider
10.62.0.2 prometheus
10.62.0.3 gateway
10.62.0.4 nats
10.62.0.5 queue-worker
```
The IP addresses are dynamic and may change on every launch.
Since faasd-provider uses containerd heavily, it is not run as a container, but as a stand-alone process. Its port is available via the bridge interface, i.e. `openfaas0`.
* Prometheus will run on the Prometheus IP plus port 9090, i.e. http://[prometheus_ip]:9090/targets
* faasd-provider runs on 10.62.0.1:8081, i.e. directly on the host, and is accessible via the bridge interface from CNI.
* Now go to the gateway's IP address as shown above on port 8080, i.e. http://[gateway_ip]:8080 - you can also use this address to deploy OpenFaaS Functions via the `faas-cli`.
* basic-auth
You will then need to get the basic-auth password; it is written to `/var/lib/faasd/secrets/basic-auth-password` if you followed the above instructions.
The default Basic Auth username is `admin`, which is written to `/var/lib/faasd/secrets/basic-auth-user`. If you wish to use a non-standard user, create this file and add your username (no newlines or other characters).
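Putting it together, a quick smoke test from the faasd host could look like this (a sketch: the gateway IP is taken from the example `hosts` file above and may differ on your machine):
```sh
# Point faas-cli at the gateway shown in the hosts file
export OPENFAAS_URL=http://10.62.0.3:8080

# Log in with the generated admin password, then deploy and invoke a store function
sudo cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login --username admin --password-stdin
faas-cli store deploy figlet
echo faasd | faas-cli invoke figlet
```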
#### Installation with systemd
* `faasd install` - install faasd and containerd with systemd, this must be run from `$GOPATH/src/github.com/openfaas/faasd`
* `journalctl -u faasd -f` - faasd service logs
* `journalctl -u faasd-provider -f` - faasd-provider service logs


@ -1,6 +1,6 @@
[Unit]
Description=faasd
After=faas-containerd.service
After=faasd-provider.service
[Service]
MemoryLimit=500M

main.go

@ -1,9 +1,10 @@
package main
import (
"fmt"
"os"
"github.com/alexellis/faasd/cmd"
"github.com/openfaas/faasd/cmd"
)
// These values will be injected into these variables at the build time.
@ -15,6 +16,21 @@ var (
)
func main() {
if _, ok := os.LookupEnv("CONTAINER_ID"); ok {
collect := cmd.RootCommand()
collect.SetArgs([]string{"collect"})
collect.SilenceUsage = true
collect.SilenceErrors = true
err := collect.Execute()
if err != nil {
fmt.Fprintf(os.Stderr, err.Error())
os.Exit(1)
}
os.Exit(0)
}
if err := cmd.Execute(Version, GitCommit); err != nil {
os.Exit(1)
}


@ -1,4 +1,4 @@
package handlers
package cninetwork
import (
"context"
@ -23,7 +23,8 @@ const (
CNIConfDir = "/etc/cni/net.d"
// NetNSPathFmt gives the path to the a process network namespace, given the pid
NetNSPathFmt = "/proc/%d/ns/net"
// CNIResultsDir is the directory CNI stores allocated IP for containers
CNIResultsDir = "/var/lib/cni/results"
// defaultCNIConfFilename is the vanity filename of default CNI configuration file
defaultCNIConfFilename = "10-openfaas.conflist"
// defaultNetworkName names the "docker-bridge"-like CNI plugin-chain installed when no other CNI configuration is present.
@ -94,8 +95,8 @@ func InitNetwork() (gocni.CNI, error) {
// CreateCNINetwork creates a CNI network interface and attaches it to the context
func CreateCNINetwork(ctx context.Context, cni gocni.CNI, task containerd.Task, labels map[string]string) (*gocni.CNIResult, error) {
id := NetID(task)
netns := NetNamespace(task)
id := netID(task)
netns := netNamespace(task)
result, err := cni.Setup(ctx, id, netns, gocni.WithLabels(labels))
if err != nil {
return nil, errors.Wrapf(err, "Failed to setup network for task %q: %v", id, err)
@ -116,8 +117,8 @@ func DeleteCNINetwork(ctx context.Context, cni gocni.CNI, client *containerd.Cli
log.Printf("[Delete] removing CNI network for: %s\n", task.ID())
id := NetID(task)
netns := NetNamespace(task)
id := netID(task)
netns := netNamespace(task)
if err := cni.Remove(ctx, id, netns); err != nil {
return errors.Wrapf(err, "Failed to remove network for task: %q, %v", id, err)
@ -135,7 +136,7 @@ func GetIPAddress(result *gocni.CNIResult, task containerd.Task) (net.IP, error)
// Get the IP of the created interface
var ip net.IP
for ifName, config := range result.Interfaces {
if config.Sandbox == NetNamespace(task) {
if config.Sandbox == netNamespace(task) {
for _, ipConfig := range config.IPConfigs {
if ifName != "lo" && ipConfig.IP.To4() != nil {
ip = ipConfig.IP
@ -169,13 +170,24 @@ func GetIPfromPID(pid int) (*net.IP, error) {
return nil, fmt.Errorf("no IP found for function")
}
// NetID generates the network IF based on task name and task PID
func NetID(task containerd.Task) string {
// CNIGateway returns the gateway for default subnet
func CNIGateway() (string, error) {
ip, _, err := net.ParseCIDR(defaultSubnet)
if err != nil {
return "", fmt.Errorf("error formatting gateway for network %s", defaultSubnet)
}
ip = ip.To4()
ip[3] = 1
return ip.String(), nil
}
// netID generates the network IF based on task name and task PID
func netID(task containerd.Task) string {
return fmt.Sprintf("%s-%d", task.ID(), task.Pid())
}
// NetNamespace generates the namespace path based on task PID.
func NetNamespace(task containerd.Task) string {
// netNamespace generates the namespace path based on task PID.
func netNamespace(task containerd.Task) string {
return fmt.Sprintf(NetNSPathFmt, task.Pid())
}


@ -3,7 +3,7 @@
// Copyright Weaveworks
// github.com/weaveworks/weave/net
package handlers
package cninetwork
import (
"errors"


@ -1,7 +1,7 @@
// Copyright Weaveworks
// github.com/weaveworks/weave/net
package weave
package cninetwork
import (
"fmt"


@ -8,7 +8,8 @@ import (
"log"
"net/http"
"github.com/alexellis/faasd/pkg/service"
cninetwork "github.com/openfaas/faasd/pkg/cninetwork"
"github.com/openfaas/faasd/pkg/service"
"github.com/containerd/containerd"
"github.com/containerd/containerd/namespaces"
gocni "github.com/containerd/go-cni"
@ -52,7 +53,7 @@ func MakeDeleteHandler(client *containerd.Client, cni gocni.CNI) func(w http.Res
// TODO: this needs to still happen if the task is paused
if function.replicas != 0 {
err = DeleteCNINetwork(ctx, cni, client, name)
err = cninetwork.DeleteCNINetwork(ctx, cni, client, name)
if err != nil {
log.Printf("[Delete] error removing CNI network for %s, %s\n", name, err)
}


@ -9,16 +9,17 @@ import (
"net/http"
"os"
"path"
"strings"
"github.com/alexellis/faasd/pkg/service"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cio"
"github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/oci"
gocni "github.com/containerd/go-cni"
"github.com/docker/distribution/reference"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/openfaas/faas-provider/types"
cninetwork "github.com/openfaas/faasd/pkg/cninetwork"
"github.com/openfaas/faasd/pkg/service"
"github.com/pkg/errors"
)
@ -63,11 +64,11 @@ func MakeDeployHandler(client *containerd.Client, cni gocni.CNI, secretMountPath
}
func deploy(ctx context.Context, req types.FunctionDeployment, client *containerd.Client, cni gocni.CNI, secretMountPath string) error {
imgRef := "docker.io/" + req.Image
if strings.Index(req.Image, ":") == -1 {
imgRef = imgRef + ":latest"
r, err := reference.ParseNormalizedNamed(req.Image)
if err != nil {
return err
}
imgRef := reference.TagNameOnly(r).String()
snapshotter := ""
if val, ok := os.LookupEnv("snapshotter"); ok {
@ -101,11 +102,12 @@ func deploy(ctx context.Context, req types.FunctionDeployment, client *container
name,
containerd.WithImage(image),
containerd.WithSnapshotter(snapshotter),
containerd.WithNewSnapshot(req.Service+"-snapshot", image),
containerd.WithNewSnapshot(name+"-snapshot", image),
containerd.WithNewSpec(oci.WithImageConfig(image),
oci.WithCapabilities([]string{"CAP_NET_RAW"}),
oci.WithMounts(mounts),
oci.WithEnv(envs)),
containerd.WithContainerLabels(*req.Labels),
)
if err != nil {
@ -119,7 +121,10 @@ func deploy(ctx context.Context, req types.FunctionDeployment, client *container
func createTask(ctx context.Context, client *containerd.Client, container containerd.Container, cni gocni.CNI) error {
name := container.ID()
task, taskErr := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
// task, taskErr := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
task, taskErr := container.NewTask(ctx, cio.BinaryIO("/usr/local/bin/faasd", nil))
if taskErr != nil {
return fmt.Errorf("unable to start task: %s, error: %s", name, taskErr)
}
@ -127,13 +132,13 @@ func createTask(ctx context.Context, client *containerd.Client, container contai
log.Printf("Container ID: %s\tTask ID %s:\tTask PID: %d\t\n", name, task.ID(), task.Pid())
labels := map[string]string{}
network, err := CreateCNINetwork(ctx, cni, task, labels)
network, err := cninetwork.CreateCNINetwork(ctx, cni, task, labels)
if err != nil {
return err
}
ip, err := GetIPAddress(network, task)
ip, err := cninetwork.GetIPAddress(network, task)
if err != nil {
return err
}


@ -3,9 +3,11 @@ package handlers
import (
"context"
"fmt"
"log"
"github.com/containerd/containerd"
"github.com/containerd/containerd/namespaces"
"github.com/openfaas/faasd/pkg/cninetwork"
)
type Function struct {
@ -15,6 +17,7 @@ type Function struct {
pid uint32
replicas int
IP string
labels map[string]string
}
const (
@ -43,10 +46,18 @@ func GetFunction(client *containerd.Client, name string) (Function, error) {
if err == nil {
image, _ := c.Image(ctx)
containerName := c.ID()
labels, labelErr := c.Labels(ctx)
if labelErr != nil {
log.Printf("cannot list container %s labels: %s", containerName, labelErr.Error())
}
f := Function{
name: c.ID(),
name: containerName,
namespace: FunctionNamespace,
image: image.Name(),
labels: labels,
}
replicas := 0
@ -62,11 +73,10 @@ func GetFunction(client *containerd.Client, name string) (Function, error) {
f.pid = task.Pid()
// Get container IP address
ip, getIPErr := GetIPfromPID(int(task.Pid()))
if getIPErr != nil {
return Function{}, getIPErr
ip, err := cninetwork.GetIPfromPID(int(task.Pid()))
if err != nil {
return Function{}, err
}
f.IP = ip.String()
}
} else {


@ -26,6 +26,7 @@ func MakeReadHandler(client *containerd.Client) func(w http.ResponseWriter, r *h
Image: function.image,
Replicas: uint64(function.replicas),
Namespace: function.namespace,
Labels: &function.labels,
})
}


@ -95,12 +95,6 @@ func deleteSecret(c *containerd.Client, w http.ResponseWriter, r *http.Request,
return
}
if err != nil {
log.Printf("[secret] error %s", err.Error())
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
err = os.Remove(path.Join(mountPath, secret.Name))
if err != nil {


@ -8,7 +8,8 @@ import (
"log"
"net/http"
"github.com/alexellis/faasd/pkg/service"
"github.com/openfaas/faasd/pkg/cninetwork"
"github.com/openfaas/faasd/pkg/service"
"github.com/containerd/containerd"
"github.com/containerd/containerd/namespaces"
gocni "github.com/containerd/go-cni"
@ -54,7 +55,7 @@ func MakeUpdateHandler(client *containerd.Client, cni gocni.CNI, secretMountPath
ctx := namespaces.WithNamespace(context.Background(), FunctionNamespace)
if function.replicas != 0 {
err = DeleteCNINetwork(ctx, cni, client, name)
err = cninetwork.DeleteCNINetwork(ctx, cni, client, name)
if err != nil {
log.Printf("[Update] error removing CNI network for %s, %s\n", name, err)
}


@ -1,131 +0,0 @@
// Copyright Weaveworks
// github.com/weaveworks/weave/net
package handlers
import (
"fmt"
"net"
"os"
"github.com/vishvananda/netlink"
"github.com/vishvananda/netns"
)
type Dev struct {
Name string `json:"Name,omitempty"`
MAC net.HardwareAddr `json:"MAC,omitempty"`
CIDRs []*net.IPNet `json:"CIDRs,omitempty"`
}
func linkToNetDev(link netlink.Link) (Dev, error) {
addrs, err := netlink.AddrList(link, netlink.FAMILY_V4)
if err != nil {
return Dev{}, err
}
netDev := Dev{Name: link.Attrs().Name, MAC: link.Attrs().HardwareAddr}
for _, addr := range addrs {
netDev.CIDRs = append(netDev.CIDRs, addr.IPNet)
}
return netDev, nil
}
// ConnectedToBridgeVethPeerIds returns peer indexes of veth links connected to
// the given bridge. The peer index is used to query from a container netns
// whether the container is connected to the bridge.
func ConnectedToBridgeVethPeerIds(bridgeName string) ([]int, error) {
var ids []int
br, err := netlink.LinkByName(bridgeName)
if err != nil {
return nil, err
}
links, err := netlink.LinkList()
if err != nil {
return nil, err
}
for _, link := range links {
if _, isveth := link.(*netlink.Veth); isveth && link.Attrs().MasterIndex == br.Attrs().Index {
peerID := link.Attrs().ParentIndex
if peerID == 0 {
// perhaps running on an older kernel where ParentIndex doesn't work.
// as fall-back, assume the peers are consecutive
peerID = link.Attrs().Index - 1
}
ids = append(ids, peerID)
}
}
return ids, nil
}
// Lookup the weave interface of a container
func GetWeaveNetDevs(processID int) ([]Dev, error) {
peerIDs, err := ConnectedToBridgeVethPeerIds("weave")
if err != nil {
return nil, err
}
return GetNetDevsByVethPeerIds(processID, peerIDs)
}
func GetNetDevsByVethPeerIds(processID int, peerIDs []int) ([]Dev, error) {
// Bail out if this container is running in the root namespace
netnsRoot, err := netns.GetFromPid(1)
if err != nil {
return nil, fmt.Errorf("unable to open root namespace: %s", err)
}
defer netnsRoot.Close()
netnsContainer, err := netns.GetFromPid(processID)
if err != nil {
// Unable to find a namespace for this process - just return nothing
if os.IsNotExist(err) {
return nil, nil
}
return nil, fmt.Errorf("unable to open process %d namespace: %s", processID, err)
}
defer netnsContainer.Close()
if netnsRoot.Equal(netnsContainer) {
return nil, nil
}
// convert list of peerIDs into a map for faster lookup
indexes := make(map[int]struct{})
for _, id := range peerIDs {
indexes[id] = struct{}{}
}
var netdevs []Dev
err = WithNetNS(netnsContainer, func() error {
links, err := netlink.LinkList()
if err != nil {
return err
}
for _, link := range links {
if _, found := indexes[link.Attrs().Index]; found {
netdev, err := linkToNetDev(link)
if err != nil {
return err
}
netdevs = append(netdevs, netdev)
}
}
return nil
})
return netdevs, err
}
// Get the weave bridge interface.
// NB: Should be called from the root network namespace.
func GetBridgeNetDev(bridgeName string) (Dev, error) {
link, err := netlink.LinkByName(bridgeName)
if err != nil {
return Dev{}, err
}
return linkToNetDev(link)
}


@ -4,14 +4,23 @@ import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"sync"
"time"
"github.com/containerd/containerd"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/remotes"
"github.com/containerd/containerd/remotes/docker"
"github.com/docker/cli/cli/config"
"github.com/docker/cli/cli/config/configfile"
"golang.org/x/sys/unix"
)
// dockerConfigDir contains "config.json"
const dockerConfigDir = "/var/lib/faasd/.docker/"
// Remove removes a container
func Remove(ctx context.Context, client *containerd.Client, name string) error {
@ -90,16 +99,59 @@ func killTask(ctx context.Context, task containerd.Task) error {
return err
}
func PrepareImage(ctx context.Context, client *containerd.Client, imageName, snapshotter string) (containerd.Image, error) {
func getResolver(ctx context.Context, configFile *configfile.ConfigFile) (remotes.Resolver, error) {
// credsFunc is based on https://github.com/moby/buildkit/blob/0b130cca040246d2ddf55117eeff34f546417e40/session/auth/authprovider/authprovider.go#L35
credFunc := func(host string) (string, string, error) {
if host == "registry-1.docker.io" {
host = "https://index.docker.io/v1/"
}
ac, err := configFile.GetAuthConfig(host)
if err != nil {
return "", "", err
}
if ac.IdentityToken != "" {
return "", ac.IdentityToken, nil
}
return ac.Username, ac.Password, nil
}
authOpts := []docker.AuthorizerOpt{docker.WithAuthCreds(credFunc)}
authorizer := docker.NewDockerAuthorizer(authOpts...)
opts := docker.ResolverOptions{
Hosts: docker.ConfigureDefaultRegistries(docker.WithAuthorizer(authorizer)),
}
return docker.NewResolver(opts), nil
}
func PrepareImage(ctx context.Context, client *containerd.Client, imageName, snapshotter string) (containerd.Image, error) {
var (
empty containerd.Image
resolver remotes.Resolver
)
if _, stErr := os.Stat(filepath.Join(dockerConfigDir, config.ConfigFileName)); stErr == nil {
configFile, err := config.Load(dockerConfigDir)
if err != nil {
return nil, err
}
resolver, err = getResolver(ctx, configFile)
if err != nil {
return empty, err
}
} else if !os.IsNotExist(stErr) {
return empty, stErr
}
var empty containerd.Image
image, err := client.GetImage(ctx, imageName)
if err != nil {
if !errdefs.IsNotFound(err) {
return empty, err
}
img, err := client.Pull(ctx, imageName, containerd.WithPullUnpack)
rOpts := []containerd.RemoteOpt{
containerd.WithPullUnpack,
}
if resolver != nil {
rOpts = append(rOpts, containerd.WithResolver(resolver))
}
img, err := client.Pull(ctx, imageName, rOpts...)
if err != nil {
return empty, fmt.Errorf("cannot pull: %s", err)
}
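
With this change the authenticated resolver is only wired in when /var/lib/faasd/.docker/config.json exists, so pulls of public images keep working without any registry configuration. A hedged usage sketch — the socket path, namespace and image name are illustrative:

client, err := containerd.New("/run/containerd/containerd.sock")
if err != nil {
    return err
}
defer client.Close()

ctx := namespaces.WithNamespace(context.Background(), "openfaas-fn")

// The pull goes through the authenticated resolver when a config.json is present.
img, err := PrepareImage(ctx, client, "docker.io/functions/figlet:latest", "overlayfs")
if err != nil {
    return err
}
log.Printf("prepared image %s", img.Name())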

View File

@ -5,97 +5,83 @@ import (
"fmt"
"io/ioutil"
"log"
"net"
"os"
"path"
"path/filepath"
"github.com/alexellis/faasd/pkg/service"
"github.com/alexellis/faasd/pkg/weave"
"github.com/openfaas/faasd/pkg/cninetwork"
"github.com/openfaas/faasd/pkg/service"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cio"
"github.com/containerd/containerd/containers"
"github.com/containerd/containerd/oci"
gocni "github.com/containerd/go-cni"
"github.com/google/uuid"
"github.com/pkg/errors"
"github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/oci"
"github.com/opencontainers/runtime-spec/specs-go"
)
const defaultSnapshotter = "overlayfs"
const (
// TODO: CNIBinDir and CNIConfDir should maybe be globally configurable?
// CNIBinDir describes the directory where the CNI binaries are stored
CNIBinDir = "/opt/cni/bin"
// CNIConfDir describes the directory where the CNI plugin's configuration is stored
CNIConfDir = "/etc/cni/net.d"
// NetNSPathFmt gives the path to a process's network namespace, given the pid
NetNSPathFmt = "/proc/%d/ns/net"
// DefaultCNIConfFilename is the vanity filename of the default CNI configuration file
DefaultCNIConfFilename = "10-openfaas.conflist"
// defaultNetworkName names the "docker-bridge"-like CNI plugin-chain installed when no other CNI configuration is present.
// This value appears in iptables comments created by CNI.
DefaultNetworkName = "openfaas-cni-bridge"
// defaultBridgeName is the default bridge device name used in the defaultCNIConf
DefaultBridgeName = "openfaas0"
// defaultSubnet is the default subnet used in the defaultCNIConf -- this value is set to not collide with common container networking subnets:
DefaultSubnet = "10.62.0.0/16"
defaultSnapshotter = "overlayfs"
workingDirectoryPermission = 0644
// faasdNamespace is the containerd namespace in which services are created
faasdNamespace = "default"
)
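
These CNI defaults describe a docker0-style bridge kept on 10.62.0.0/16 to avoid common container subnets; with this change the supervisor appears to delegate network setup to cninetwork.InitNetwork rather than holding the constants itself. As a hedged sketch, initialising go-cni from these paths mirrors the calls removed further down in this diff:

cni, err := gocni.New(
    gocni.WithPluginConfDir(CNIConfDir),
    gocni.WithPluginDir([]string{CNIBinDir}))
if err != nil {
    return nil, err
}

// Load the loopback plugin plus the bridge described by 10-openfaas.conflist.
err = cni.Load(gocni.WithLoNetwork,
    gocni.WithConfListFile(filepath.Join(CNIConfDir, DefaultCNIConfFilename)))
if err != nil {
    return nil, err
}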
type Service struct {
Image string
Env []string
Name string
Mounts []Mount
Caps []string
Args []string
}
type Mount struct {
Src string
Dest string
}
type Supervisor struct {
client *containerd.Client
cni gocni.CNI
}
func NewSupervisor(sock string) (*Supervisor, error) {
client, err := containerd.New(sock)
if err != nil {
panic(err)
return nil, err
}
cni, err := cninetwork.InitNetwork()
if err != nil {
return nil, err
}
return &Supervisor{
client: client,
cni: cni,
}, nil
}
func (s *Supervisor) Close() {
defer s.client.Close()
}
func (s *Supervisor) Remove(svcs []Service) error {
ctx := namespaces.WithNamespace(context.Background(), "default")
for _, svc := range svcs {
err := service.Remove(ctx, s.client, svc.Name)
if err != nil {
return err
}
}
return nil
}
func (s *Supervisor) Start(svcs []Service) error {
ctx := namespaces.WithNamespace(context.Background(), "default")
ctx := namespaces.WithNamespace(context.Background(), faasdNamespace)
wd, _ := os.Getwd()
ip, _, _ := net.ParseCIDR(DefaultSubnet)
ip = ip.To4()
ip[3] = 1
ip.String()
gw, err := cninetwork.CNIGateway()
if err != nil {
return err
}
hosts := fmt.Sprintf(`
127.0.0.1 localhost
%s faas-containerd`, ip)
%s faasd-provider`, gw)
writeHostsErr := ioutil.WriteFile(path.Join(wd, "hosts"),
[]byte(hosts), 0644)
[]byte(hosts), workingDirectoryPermission)
if writeHostsErr != nil {
return fmt.Errorf("cannot write hosts file: %s", writeHostsErr)
}
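// For illustration: with the default 10.62.0.0/16 subnet, CNIGateway is
// expected to return 10.62.0.1 (the removed code above set the last octet
// to 1), so the generated hosts file starts out roughly as:
//
//   127.0.0.1 localhost
//   10.62.0.1 faasd-provider
//
// Each service started below then appends its own "<ip> <name>" line.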
// os.Chown("hosts", 101, 101)
images := map[string]containerd.Image{}
@ -173,40 +159,25 @@ func (s *Supervisor) Start(svcs []Service) error {
return err
}
id := uuid.New().String()
netns := fmt.Sprintf(NetNSPathFmt, task.Pid())
cni, err := gocni.New(gocni.WithPluginConfDir(CNIConfDir),
gocni.WithPluginDir([]string{CNIBinDir}))
if err != nil {
return errors.Wrapf(err, "error creating CNI instance")
}
// Load the cni configuration
if err := cni.Load(gocni.WithLoNetwork, gocni.WithConfListFile(filepath.Join(CNIConfDir, DefaultCNIConfFilename))); err != nil {
return errors.Wrapf(err, "failed to load cni configuration: %v", err)
}
labels := map[string]string{}
network, err := cninetwork.CreateCNINetwork(ctx, s.cni, task, labels)
_, err = cni.Setup(ctx, id, netns, gocni.WithLabels(labels))
if err != nil {
return errors.Wrapf(err, "failed to setup network for namespace %q: %v", id, err)
return err
}
// Get the IP of the default interface.
// defaultInterface := gocni.DefaultPrefix + "0"
// ip := &result.Interfaces[defaultInterface].IPConfigs[0].IP
ip := getIP(newContainer.ID(), task.Pid())
log.Printf("%s has IP: %s\n", newContainer.ID(), ip)
ip, err := cninetwork.GetIPAddress(network, task)
if err != nil {
return err
}
log.Printf("%s has IP: %s\n", newContainer.ID(), ip.String())
hosts, _ := ioutil.ReadFile("hosts")
hosts = []byte(string(hosts) + fmt.Sprintf(`
%s %s
`, ip, svc.Name))
writeErr := ioutil.WriteFile("hosts", hosts, 0644)
writeErr := ioutil.WriteFile("hosts", hosts, workingDirectoryPermission)
if writeErr != nil {
log.Printf("Error writing file %s %s\n", "hosts", writeErr)
@ -231,37 +202,26 @@ func (s *Supervisor) Start(svcs []Service) error {
return nil
}
func getIP(containerID string, taskPID uint32) string {
// https://github.com/weaveworks/weave/blob/master/net/netdev.go
peerIDs, err := weave.ConnectedToBridgeVethPeerIds(DefaultBridgeName)
if err != nil {
log.Fatal(err)
}
addrs, addrsErr := weave.GetNetDevsByVethPeerIds(int(taskPID), peerIDs)
if addrsErr != nil {
log.Fatal(addrsErr)
}
if len(addrs) > 0 {
return addrs[0].CIDRs[0].IP.String()
}
return ""
func (s *Supervisor) Close() {
defer s.client.Close()
}
type Service struct {
Image string
Env []string
Name string
Mounts []Mount
Caps []string
Args []string
}
func (s *Supervisor) Remove(svcs []Service) error {
ctx := namespaces.WithNamespace(context.Background(), faasdNamespace)
type Mount struct {
Src string
Dest string
for _, svc := range svcs {
err := cninetwork.DeleteCNINetwork(ctx, s.cni, s.client, svc.Name)
if err != nil {
log.Printf("[Delete] error removing CNI network for %s, %s\n", svc.Name, err)
return err
}
err = service.Remove(ctx, s.client, svc.Name)
if err != nil {
return err
}
}
return nil
}
func withOCIArgs(args []string) oci.SpecOpts {
@ -270,8 +230,6 @@ func withOCIArgs(args []string) oci.SpecOpts {
}
return func(_ context.Context, _ oci.Client, _ *containers.Container, s *oci.Spec) error {
return nil
}
}

View File

@ -1,66 +0,0 @@
// +build go1.10
// Copyright Weaveworks
// github.com/weaveworks/weave/net
package weave
import (
"errors"
"fmt"
"path/filepath"
"runtime"
"github.com/vishvananda/netlink"
"github.com/vishvananda/netns"
)
var ErrLinkNotFound = errors.New("Link not found")
func WithNetNS(ns netns.NsHandle, work func() error) error {
runtime.LockOSThread()
defer runtime.UnlockOSThread()
oldNs, err := netns.Get()
if err == nil {
defer oldNs.Close()
err = netns.Set(ns)
if err == nil {
defer netns.Set(oldNs)
err = work()
}
}
return err
}
func WithNetNSLink(ns netns.NsHandle, ifName string, work func(link netlink.Link) error) error {
return WithNetNS(ns, func() error {
link, err := netlink.LinkByName(ifName)
if err != nil {
if err.Error() == errors.New("Link not found").Error() {
return ErrLinkNotFound
}
return err
}
return work(link)
})
}
func WithNetNSByPath(path string, work func() error) error {
ns, err := netns.GetFromPath(path)
if err != nil {
return err
}
return WithNetNS(ns, work)
}
func NSPathByPid(pid int) string {
return NSPathByPidWithRoot("/", pid)
}
func NSPathByPidWithRoot(root string, pid int) string {
return filepath.Join(root, fmt.Sprintf("/proc/%d/ns/net", pid))
}

View File

@ -0,0 +1,57 @@
package osversion
import (
"fmt"
"golang.org/x/sys/windows"
)
// OSVersion is a wrapper for Windows version information
// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724439(v=vs.85).aspx
type OSVersion struct {
Version uint32
MajorVersion uint8
MinorVersion uint8
Build uint16
}
// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724833(v=vs.85).aspx
type osVersionInfoEx struct {
OSVersionInfoSize uint32
MajorVersion uint32
MinorVersion uint32
BuildNumber uint32
PlatformID uint32
CSDVersion [128]uint16
ServicePackMajor uint16
ServicePackMinor uint16
SuiteMask uint16
ProductType byte
Reserve byte
}
// Get gets the operating system version on Windows.
// The calling application must be manifested to get the correct version information.
func Get() OSVersion {
var err error
osv := OSVersion{}
osv.Version, err = windows.GetVersion()
if err != nil {
// GetVersion never fails.
panic(err)
}
osv.MajorVersion = uint8(osv.Version & 0xFF)
osv.MinorVersion = uint8(osv.Version >> 8 & 0xFF)
osv.Build = uint16(osv.Version >> 16)
return osv
}
// Build gets the build-number on Windows
// The calling application must be manifested to get the correct version information.
func Build() uint16 {
return Get().Build
}
func (osv OSVersion) ToString() string {
return fmt.Sprintf("%d.%d.%d", osv.MajorVersion, osv.MinorVersion, osv.Build)
}

View File

@ -0,0 +1,23 @@
package osversion
const (
// RS1 (version 1607, codename "Redstone 1") corresponds to Windows Server
// 2016 (ltsc2016) and Windows 10 (Anniversary Update).
RS1 = 14393
// RS2 (version 1703, codename "Redstone 2") was a client-only update, and
// corresponds to Windows 10 (Creators Update).
RS2 = 15063
// RS3 (version 1709, codename "Redstone 3") corresponds to Windows Server
// 1709 (Semi-Annual Channel (SAC)), and Windows 10 (Fall Creators Update).
RS3 = 16299
// RS4 (version 1803, codename "Redstone 4") corresponds to Windows Server
// 1803 (Semi-Annual Channel (SAC)), and Windows 10 (April 2018 Update).
RS4 = 17134
// RS5 (version 1809, codename "Redstone 5") corresponds to Windows Server
// 2019 (ltsc2019), and Windows 10 (October 2018 Update).
RS5 = 17763
)
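
GetVersion packs the Windows version into a single uint32 (major in the low byte, minor in the next byte, build in the high 16 bits), and the RS* constants above give the build numbers to compare against. A small hedged usage sketch:

osv := osversion.Get()
fmt.Printf("Windows %s (build %d)\n", osv.ToString(), osv.Build)

// Gate behaviour on Windows Server 2019 / Windows 10 1809 or newer.
if osversion.Build() >= osversion.RS5 {
    // RS5-specific code path
}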

View File

@ -0,0 +1,77 @@
// +build !windows
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package logging
import (
"context"
"fmt"
"io"
"os"
"os/signal"
"golang.org/x/sys/unix"
)
// Config of the container logs
type Config struct {
ID string
Namespace string
Stdout io.Reader
Stderr io.Reader
}
// LoggerFunc is implemented by custom v2 logging binaries
type LoggerFunc func(context.Context, *Config, func() error) error
// Run the logging driver
func Run(fn LoggerFunc) {
ctx, cancel := context.WithCancel(context.Background())
config := &Config{
ID: os.Getenv("CONTAINER_ID"),
Namespace: os.Getenv("CONTAINER_NAMESPACE"),
Stdout: os.NewFile(3, "CONTAINER_STDOUT"),
Stderr: os.NewFile(4, "CONTAINER_STDERR"),
}
var (
s = make(chan os.Signal, 32)
errCh = make(chan error, 1)
wait = os.NewFile(5, "CONTAINER_WAIT")
)
signal.Notify(s, unix.SIGTERM)
go func() {
if err := fn(ctx, config, wait.Close); err != nil {
errCh <- err
}
errCh <- nil
}()
for {
select {
case <-s:
cancel()
case err := <-errCh:
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
os.Exit(0)
}
}
}
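
This is the containerd v2 logging-binary contract that the faasd log shim builds on: containerd passes the container identity in environment variables, the stdout/stderr pipes on file descriptors 3 and 4, and a wait pipe on descriptor 5 that must be closed once the driver is ready. A hedged sketch of a driver that forwards lines to journald using the vendored packages in this change — the tag format and error handling are assumptions, not necessarily what faasd's collect command does:

package main

import (
    "bufio"
    "context"
    "io"

    "github.com/containerd/containerd/runtime/v2/logging"
    "github.com/coreos/go-systemd/journal"
)

func main() {
    logging.Run(logJournal)
}

func logJournal(ctx context.Context, config *logging.Config, ready func() error) error {
    // Closing the wait pipe tells containerd the driver is ready for output.
    if err := ready(); err != nil {
        return err
    }

    tag := config.Namespace + ":" + config.ID

    copyTo := func(r io.Reader, priority journal.Priority) {
        scanner := bufio.NewScanner(r)
        for scanner.Scan() {
            // Send errors are dropped in this sketch.
            journal.Send(scanner.Text(), priority, map[string]string{
                "SYSLOG_IDENTIFIER": tag,
            })
        }
    }

    go copyTo(config.Stdout, journal.PriInfo)
    copyTo(config.Stderr, journal.PriErr)
    return nil
}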

View File

@ -0,0 +1,101 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package pathdriver
import (
"path/filepath"
)
// PathDriver provides all of the path manipulation functions in a common
// interface. The context should call these and never use the `filepath`
// package or any other package to manipulate paths.
type PathDriver interface {
Join(paths ...string) string
IsAbs(path string) bool
Rel(base, target string) (string, error)
Base(path string) string
Dir(path string) string
Clean(path string) string
Split(path string) (dir, file string)
Separator() byte
Abs(path string) (string, error)
Walk(string, filepath.WalkFunc) error
FromSlash(path string) string
ToSlash(path string) string
Match(pattern, name string) (matched bool, err error)
}
// pathDriver is a simple default implementation that calls the filepath package.
type pathDriver struct{}
// LocalPathDriver is the exported pathDriver struct for convenience.
var LocalPathDriver PathDriver = &pathDriver{}
func (*pathDriver) Join(paths ...string) string {
return filepath.Join(paths...)
}
func (*pathDriver) IsAbs(path string) bool {
return filepath.IsAbs(path)
}
func (*pathDriver) Rel(base, target string) (string, error) {
return filepath.Rel(base, target)
}
func (*pathDriver) Base(path string) string {
return filepath.Base(path)
}
func (*pathDriver) Dir(path string) string {
return filepath.Dir(path)
}
func (*pathDriver) Clean(path string) string {
return filepath.Clean(path)
}
func (*pathDriver) Split(path string) (dir, file string) {
return filepath.Split(path)
}
func (*pathDriver) Separator() byte {
return filepath.Separator
}
func (*pathDriver) Abs(path string) (string, error) {
return filepath.Abs(path)
}
// Note that filepath.Walk calls os.Stat, so if the context wants to
// call Driver.Stat() for Walk, they need to create a new struct that
// overrides this method.
func (*pathDriver) Walk(root string, walkFn filepath.WalkFunc) error {
return filepath.Walk(root, walkFn)
}
func (*pathDriver) FromSlash(path string) string {
return filepath.FromSlash(path)
}
func (*pathDriver) ToSlash(path string) string {
return filepath.ToSlash(path)
}
func (*pathDriver) Match(pattern, name string) (bool, error) {
return filepath.Match(pattern, name)
}
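
The interface lets callers substitute platform-specific path semantics instead of calling filepath directly; for local use the package-level default is enough. A minimal hedged sketch, with illustrative paths:

var driver pathdriver.PathDriver = pathdriver.LocalPathDriver

secretPath := driver.Join("/var/lib/faasd", "secrets", "basic-auth-password")
if driver.IsAbs(secretPath) {
    // LocalPathDriver simply delegates to the filepath package.
    fmt.Println(driver.Clean(secretPath))
}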

191
vendor/github.com/coreos/go-systemd/LICENSE generated vendored Normal file
View File

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification within
third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

5
vendor/github.com/coreos/go-systemd/NOTICE generated vendored Normal file
View File

@ -0,0 +1,5 @@
CoreOS Project
Copyright 2018 CoreOS, Inc
This product includes software developed at CoreOS, Inc.
(http://www.coreos.com/).

225
vendor/github.com/coreos/go-systemd/journal/journal.go generated vendored Normal file
View File

@ -0,0 +1,225 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package journal provides write bindings to the local systemd journal.
// It is implemented in pure Go and connects to the journal directly over its
// unix socket.
//
// To read from the journal, see the "sdjournal" package, which wraps the
// sd-journal C API.
//
// http://www.freedesktop.org/software/systemd/man/systemd-journald.service.html
package journal
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"io"
"io/ioutil"
"net"
"os"
"strconv"
"strings"
"sync"
"sync/atomic"
"syscall"
"unsafe"
)
// Priority of a journal message
type Priority int
const (
PriEmerg Priority = iota
PriAlert
PriCrit
PriErr
PriWarning
PriNotice
PriInfo
PriDebug
)
var (
// This can be overridden at build-time:
// https://github.com/golang/go/wiki/GcToolchainTricks#including-build-information-in-the-executable
journalSocket = "/run/systemd/journal/socket"
// unixConnPtr atomically holds the local unconnected Unix-domain socket.
// Concrete safe pointer type: *net.UnixConn
unixConnPtr unsafe.Pointer
// onceConn ensures that unixConnPtr is initialized exactly once.
onceConn sync.Once
)
func init() {
onceConn.Do(initConn)
}
// Enabled checks whether the local systemd journal is available for logging.
func Enabled() bool {
onceConn.Do(initConn)
if (*net.UnixConn)(atomic.LoadPointer(&unixConnPtr)) == nil {
return false
}
if _, err := net.Dial("unixgram", journalSocket); err != nil {
return false
}
return true
}
// Send a message to the local systemd journal. vars is a map of journald
// fields to values. Fields must be composed of uppercase letters, numbers,
// and underscores, but must not start with an underscore. Within these
// restrictions, any arbitrary field name may be used. Some names have special
// significance: see the journalctl documentation
// (http://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html)
// for more details. vars may be nil.
func Send(message string, priority Priority, vars map[string]string) error {
conn := (*net.UnixConn)(atomic.LoadPointer(&unixConnPtr))
if conn == nil {
return errors.New("could not initialize socket to journald")
}
socketAddr := &net.UnixAddr{
Name: journalSocket,
Net: "unixgram",
}
data := new(bytes.Buffer)
appendVariable(data, "PRIORITY", strconv.Itoa(int(priority)))
appendVariable(data, "MESSAGE", message)
for k, v := range vars {
appendVariable(data, k, v)
}
_, _, err := conn.WriteMsgUnix(data.Bytes(), nil, socketAddr)
if err == nil {
return nil
}
if !isSocketSpaceError(err) {
return err
}
// Large log entry, send it via tempfile and ancillary-fd.
file, err := tempFd()
if err != nil {
return err
}
defer file.Close()
_, err = io.Copy(file, data)
if err != nil {
return err
}
rights := syscall.UnixRights(int(file.Fd()))
_, _, err = conn.WriteMsgUnix([]byte{}, rights, socketAddr)
if err != nil {
return err
}
return nil
}
// Print prints a message to the local systemd journal using Send().
func Print(priority Priority, format string, a ...interface{}) error {
return Send(fmt.Sprintf(format, a...), priority, nil)
}
func appendVariable(w io.Writer, name, value string) {
if err := validVarName(name); err != nil {
fmt.Fprintf(os.Stderr, "variable name %s contains invalid character, ignoring\n", name)
}
if strings.ContainsRune(value, '\n') {
/* When the value contains a newline, we write:
* - the variable name, followed by a newline
* - the size (in 64bit little endian format)
* - the data, followed by a newline
*/
fmt.Fprintln(w, name)
binary.Write(w, binary.LittleEndian, uint64(len(value)))
fmt.Fprintln(w, value)
} else {
/* just write the variable and value all on one line */
fmt.Fprintf(w, "%s=%s\n", name, value)
}
}
// validVarName validates a variable name to make sure journald will accept it.
// The variable name must be in uppercase and consist only of characters,
// numbers and underscores, and may not begin with an underscore:
// https://www.freedesktop.org/software/systemd/man/sd_journal_print.html
func validVarName(name string) error {
if name == "" {
return errors.New("Empty variable name")
} else if name[0] == '_' {
return errors.New("Variable name begins with an underscore")
}
for _, c := range name {
if !(('A' <= c && c <= 'Z') || ('0' <= c && c <= '9') || c == '_') {
return errors.New("Variable name contains invalid characters")
}
}
return nil
}
// isSocketSpaceError checks whether the error is signaling
// an "overlarge message" condition.
func isSocketSpaceError(err error) bool {
opErr, ok := err.(*net.OpError)
if !ok || opErr == nil {
return false
}
sysErr, ok := opErr.Err.(*os.SyscallError)
if !ok || sysErr == nil {
return false
}
return sysErr.Err == syscall.EMSGSIZE || sysErr.Err == syscall.ENOBUFS
}
// tempFd creates a temporary, unlinked file under `/dev/shm`.
func tempFd() (*os.File, error) {
file, err := ioutil.TempFile("/dev/shm/", "journal.XXXXX")
if err != nil {
return nil, err
}
err = syscall.Unlink(file.Name())
if err != nil {
return nil, err
}
return file, nil
}
// initConn initializes the global `unixConnPtr` socket.
// It is meant to be called exactly once, at program startup.
func initConn() {
autobind, err := net.ResolveUnixAddr("unixgram", "")
if err != nil {
return
}
sock, err := net.ListenUnixgram("unixgram", autobind)
if err != nil {
return
}
atomic.StorePointer(&unixConnPtr, unsafe.Pointer(sock))
}
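
Enabled reports whether the journald socket could be reached, Send writes a message with arbitrary upper-case fields, and oversized entries are retried through an unlinked /dev/shm file passed over the socket as ancillary data. A short hedged usage sketch, with illustrative field values:

if !journal.Enabled() {
    log.Println("journald not available, skipping")
    return
}

// Field names must be upper-case and must not begin with an underscore.
if err := journal.Send("function deployed", journal.PriInfo, map[string]string{
    "SYSLOG_IDENTIFIER": "faasd",
    "FUNCTION_NAME":     "figlet",
}); err != nil {
    log.Printf("journal send failed: %s", err)
}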

716
vendor/github.com/docker/cli/AUTHORS generated vendored Normal file
View File

@ -0,0 +1,716 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `scripts/docs/generate-authors.sh`.
Aanand Prasad <aanand.prasad@gmail.com>
Aaron L. Xu <liker.xu@foxmail.com>
Aaron Lehmann <aaron.lehmann@docker.com>
Aaron.L.Xu <likexu@harmonycloud.cn>
Abdur Rehman <abdur_rehman@mentor.com>
Abhinandan Prativadi <abhi@docker.com>
Abin Shahab <ashahab@altiscale.com>
Ace Tang <aceapril@126.com>
Addam Hardy <addam.hardy@gmail.com>
Adolfo Ochagavía <aochagavia92@gmail.com>
Adrien Duermael <adrien@duermael.com>
Adrien Folie <folie.adrien@gmail.com>
Ahmet Alp Balkan <ahmetb@microsoft.com>
Aidan Feldman <aidan.feldman@gmail.com>
Aidan Hobson Sayers <aidanhs@cantab.net>
AJ Bowen <aj@gandi.net>
Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Akim Demaille <akim.demaille@docker.com>
Alan Thompson <cloojure@gmail.com>
Albert Callarisa <shark234@gmail.com>
Aleksa Sarai <asarai@suse.de>
Alessandro Boch <aboch@tetrationanalytics.com>
Alex Mavrogiannis <alex.mavrogiannis@docker.com>
Alex Mayer <amayer5125@gmail.com>
Alexander Boyd <alex@opengroove.org>
Alexander Larsson <alexl@redhat.com>
Alexander Morozov <lk4d4@docker.com>
Alexander Ryabov <i@sepa.spb.ru>
Alexandre González <agonzalezro@gmail.com>
Alfred Landrum <alfred.landrum@docker.com>
Alicia Lauerman <alicia@eta.im>
Allen Sun <allensun.shl@alibaba-inc.com>
Alvin Deng <alvin.q.deng@utexas.edu>
Amen Belayneh <amenbelayneh@gmail.com>
Amir Goldstein <amir73il@aquasec.com>
Amit Krishnan <amit.krishnan@oracle.com>
Amit Shukla <amit.shukla@docker.com>
Amy Lindburg <amy.lindburg@docker.com>
Anda Xu <anda.xu@docker.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andreas Köhler <andi5.py@gmx.net>
Andrew France <andrew@avito.co.uk>
Andrew Hsu <andrewhsu@docker.com>
Andrew Macpherson <hopscotch23@gmail.com>
Andrew McDonnell <bugs@andrewmcdonnell.net>
Andrew Po <absourd.noise@gmail.com>
Andrey Petrov <andrey.petrov@shazow.net>
André Martins <aanm90@gmail.com>
Andy Goldstein <agoldste@redhat.com>
Andy Rothfusz <github@developersupport.net>
Anil Madhavapeddy <anil@recoil.org>
Ankush Agarwal <ankushagarwal11@gmail.com>
Anne Henmi <anne.henmi@docker.com>
Anton Polonskiy <anton.polonskiy@gmail.com>
Antonio Murdaca <antonio.murdaca@gmail.com>
Antonis Kalipetis <akalipetis@gmail.com>
Anusha Ragunathan <anusha.ragunathan@docker.com>
Ao Li <la9249@163.com>
Arash Deshmeh <adeshmeh@ca.ibm.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Ashwini Oruganti <ashwini.oruganti@gmail.com>
Azat Khuyiyakhmetov <shadow_uz@mail.ru>
Bardia Keyoumarsi <bkeyouma@ucsc.edu>
Barnaby Gray <barnaby@pickle.me.uk>
Bastiaan Bakker <bbakker@xebia.com>
BastianHofmann <bastianhofmann@me.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Creasy <ben@bencreasy.com>
Ben Firshman <ben@firshman.co.uk>
Benjamin Boudreau <boudreau.benjamin@gmail.com>
Benoit Sigoure <tsunanet@gmail.com>
Bhumika Bayani <bhumikabayani@gmail.com>
Bill Wang <ozbillwang@gmail.com>
Bin Liu <liubin0329@gmail.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Boaz Shuster <ripcurld.github@gmail.com>
Bogdan Anton <contact@bogdananton.ro>
Boris Pruessmann <boris@pruessmann.org>
Bradley Cicenas <bradley.cicenas@gmail.com>
Brandon Mitchell <git@bmitch.net>
Brandon Philips <brandon.philips@coreos.com>
Brent Salisbury <brent.salisbury@docker.com>
Bret Fisher <bret@bretfisher.com>
Brian (bex) Exelbierd <bexelbie@redhat.com>
Brian Goff <cpuguy83@gmail.com>
Bryan Bess <squarejaw@bsbess.com>
Bryan Boreham <bjboreham@gmail.com>
Bryan Murphy <bmurphy1976@gmail.com>
bryfry <bryon.fryer@gmail.com>
Cameron Spear <cameronspear@gmail.com>
Cao Weiwei <cao.weiwei30@zte.com.cn>
Carlo Mion <mion00@gmail.com>
Carlos Alexandro Becker <caarlos0@gmail.com>
Ce Gao <ce.gao@outlook.com>
Cedric Davies <cedricda@microsoft.com>
Cezar Sa Espinola <cezarsa@gmail.com>
Chad Faragher <wyckster@hotmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com>
Charles Chan <charleswhchan@users.noreply.github.com>
Charles Law <claw@conduce.com>
Charles Smith <charles.smith@docker.com>
Charlie Drage <charlie@charliedrage.com>
ChaYoung You <yousbe@gmail.com>
Chen Chuanliang <chen.chuanliang@zte.com.cn>
Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Chen Mingjie <chenmingjie0828@163.com>
Chen Qiu <cheney-90@hotmail.com>
Chris Gavin <chris@chrisgavin.me>
Chris Gibson <chris@chrisg.io>
Chris McKinnel <chrismckinnel@gmail.com>
Chris Snow <chsnow123@gmail.com>
Chris Weyl <cweyl@alumni.drew.edu>
Christian Persson <saser@live.se>
Christian Stefanescu <st.chris@gmail.com>
Christophe Robin <crobin@nekoo.com>
Christophe Vidal <kriss@krizalys.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Crone <christopher.crone@docker.com>
Christopher Jones <tophj@linux.vnet.ibm.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com>
Clinton Kitson <clintonskitson@gmail.com>
Coenraad Loubser <coenraad@wish.org.za>
Colin Hebert <hebert.colin@gmail.com>
Collin Guarino <collin.guarino@gmail.com>
Colm Hally <colmhally@gmail.com>
Corey Farrell <git@cfware.com>
Corey Quon <corey.quon@docker.com>
Craig Wilhite <crwilhit@microsoft.com>
Cristian Staretu <cristian.staretu@gmail.com>
Daehyeok Mun <daehyeok@gmail.com>
Dafydd Crosby <dtcrsby@gmail.com>
dalanlan <dalanlan925@gmail.com>
Damien Nadé <github@livna.org>
Dan Cotora <dan@bluevision.ro>
Daniel Dao <dqminh@cloudflare.com>
Daniel Farrell <dfarrell@redhat.com>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Goosen <daniel.goosen@surveysampling.com>
Daniel Hiltgen <daniel.hiltgen@docker.com>
Daniel J Walsh <dwalsh@redhat.com>
Daniel Nephin <dnephin@docker.com>
Daniel Norberg <dano@spotify.com>
Daniel Watkins <daniel@daniel-watkins.co.uk>
Daniel Zhang <jmzwcn@gmail.com>
Danny Berger <dpb587@gmail.com>
Darren Shepherd <darren.s.shepherd@gmail.com>
Darren Stahl <darst@microsoft.com>
Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
Dave Goodchild <buddhamagnet@gmail.com>
Dave Henderson <dhenderson@gmail.com>
Dave Tucker <dt@docker.com>
David Beitey <david@davidjb.com>
David Calavera <david.calavera@gmail.com>
David Cramer <davcrame@cisco.com>
David Dooling <dooling@gmail.com>
David Gageot <david@gageot.net>
David Lechner <david@lechnology.com>
David Scott <dave@recoil.org>
David Sheets <dsheets@docker.com>
David Williamson <david.williamson@docker.com>
David Xia <dxia@spotify.com>
David Young <yangboh@cn.ibm.com>
Deng Guangxing <dengguangxing@huawei.com>
Denis Defreyne <denis@soundcloud.com>
Denis Gladkikh <denis@gladkikh.email>
Denis Ollier <larchunix@users.noreply.github.com>
Dennis Docter <dennis@d23.nl>
Derek McGowan <derek@mcgstyle.net>
Deshi Xiao <dxiao@redhat.com>
Dharmit Shah <shahdharmit@gmail.com>
Dhawal Yogesh Bhanushali <dbhanushali@vmware.com>
Dieter Reuter <dieter.reuter@me.com>
Dima Stopel <dima@twistlock.com>
Dimitry Andric <d.andric@activevideo.com>
Ding Fei <dingfei@stars.org.cn>
Diogo Monica <diogo@docker.com>
Dmitry Gusev <dmitry.gusev@gmail.com>
Dmitry Smirnov <onlyjob@member.fsf.org>
Dmitry V. Krivenok <krivenok.dmitry@gmail.com>
Don Kjer <don.kjer@gmail.com>
Dong Chen <dongluo.chen@docker.com>
Doug Davis <dug@us.ibm.com>
Drew Erny <drew.erny@docker.com>
Ed Costello <epc@epcostello.com>
Elango Sivanandam <elango.siva@docker.com>
Eli Uriegas <eli.uriegas@docker.com>
Eli Uriegas <seemethere101@gmail.com>
Elias Faxö <elias.faxo@tre.se>
Elliot Luo <956941328@qq.com>
Eric Curtin <ericcurtin17@gmail.com>
Eric G. Noriega <enoriega@vizuri.com>
Eric Rosenberg <ehaydenr@gmail.com>
Eric Sage <eric.david.sage@gmail.com>
Eric-Olivier Lamey <eo@lamey.me>
Erica Windisch <erica@windisch.us>
Erik Hollensbe <github@hollensbe.org>
Erik St. Martin <alakriti@gmail.com>
Essam A. Hassan <es.hassan187@gmail.com>
Ethan Haynes <ethanhaynes@alumni.harvard.edu>
Euan Kemp <euank@euank.com>
Eugene Yakubovich <eugene.yakubovich@coreos.com>
Evan Allrich <evan@unguku.com>
Evan Hazlett <ejhazlett@gmail.com>
Evan Krall <krall@yelp.com>
Evelyn Xu <evelynhsu21@gmail.com>
Everett Toews <everett.toews@rackspace.com>
Fabio Falci <fabiofalci@gmail.com>
Fabrizio Soppelsa <fsoppelsa@mirantis.com>
Felix Hupfeld <felix@quobyte.com>
Felix Rabe <felix@rabe.io>
Filip Jareš <filipjares@gmail.com>
Flavio Crisciani <flavio.crisciani@docker.com>
Florian Klein <florian.klein@free.fr>
Foysal Iqbal <foysal.iqbal.fb@gmail.com>
François Scala <francois.scala@swiss-as.com>
Fred Lifton <fred.lifton@docker.com>
Frederic Hemberger <mail@frederic-hemberger.de>
Frederick F. Kautz IV <fkautz@redhat.com>
Frederik Nordahl Jul Sabroe <frederikns@gmail.com>
Frieder Bluemle <frieder.bluemle@gmail.com>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com>
Gary Schaetz <gary@schaetzkc.com>
Genki Takiuchi <genki@s21g.com>
George MacRorie <gmacr31@gmail.com>
George Xie <georgexsh@gmail.com>
Gianluca Borello <g.borello@gmail.com>
Gildas Cuisinier <gildas.cuisinier@gcuisinier.net>
Gou Rao <gou@portworx.com>
Grant Reaber <grant.reaber@gmail.com>
Greg Pflaum <gpflaum@users.noreply.github.com>
Guilhem Lettron <guilhem+github@lettron.fr>
Guillaume J. Charmes <guillaume.charmes@docker.com>
Guillaume Le Floch <glfloch@gmail.com>
gwx296173 <gaojing3@huawei.com>
Günther Jungbluth <gunther@gameslabs.net>
Hakan Özler <hakan.ozler@kodcu.com>
Hao Zhang <21521210@zju.edu.cn>
Harald Albers <github@albersweb.de>
Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh>
He Simei <hesimei@zju.edu.cn>
Helen Xie <chenjg@harmonycloud.cn>
Henning Sprang <henning.sprang@gmail.com>
Henry N <henrynmail-github@yahoo.de>
Hernan Garcia <hernandanielg@gmail.com>
Hongbin Lu <hongbin034@gmail.com>
Hu Keping <hukeping@huawei.com>
Huayi Zhang <irachex@gmail.com>
huqun <huqun@zju.edu.cn>
Huu Nguyen <huu@prismskylabs.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Ian Campbell <ian.campbell@docker.com>
Ian Philpot <ian.philpot@microsoft.com>
Ignacio Capurro <icapurrofagian@gmail.com>
Ilya Dmitrichenko <errordeveloper@gmail.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
Ilya Sotkov <ilya@sotkov.com>
Ioan Eugen Stan <eu@ieugen.ro>
Isabel Jimenez <contact.isabeljimenez@gmail.com>
Ivan Grcic <igrcic@gmail.com>
Ivan Markin <sw@nogoegst.net>
Jacob Atzen <jacob@jacobatzen.dk>
Jacob Tomlinson <jacob@tom.linson.uk>
Jaivish Kothari <janonymous.codevulture@gmail.com>
Jake Lambert <jake.lambert@volusion.com>
Jake Sanders <jsand@google.com>
James Nesbitt <james.nesbitt@wunderkraut.com>
James Turnbull <james@lovedthanlost.net>
Jamie Hannaford <jamie@limetree.org>
Jan Koprowski <jan.koprowski@gmail.com>
Jan Pazdziora <jpazdziora@redhat.com>
Jan-Jaap Driessen <janjaapdriessen@gmail.com>
Jana Radhakrishnan <mrjana@docker.com>
Jared Hocutt <jaredh@netapp.com>
Jasmine Hegman <jasmine@jhegman.com>
Jason Heiss <jheiss@aput.net>
Jason Plum <jplum@devonit.com>
Jay Kamat <github@jgkamat.33mail.com>
Jean Rouge <rougej+github@gmail.com>
Jean-Christophe Sirot <jean-christophe.sirot@docker.com>
Jean-Pierre Huynh <jean-pierre.huynh@ounet.fr>
Jeff Lindsay <progrium@gmail.com>
Jeff Nickoloff <jeff.nickoloff@gmail.com>
Jeff Silberman <jsilberm@gmail.com>
Jeremy Chambers <jeremy@thehipbot.com>
Jeremy Unruh <jeremybunruh@gmail.com>
Jeremy Yallop <yallop@docker.com>
Jeroen Franse <jeroenfranse@gmail.com>
Jesse Adametz <jesseadametz@gmail.com>
Jessica Frazelle <jessfraz@google.com>
Jezeniel Zapanta <jpzapanta22@gmail.com>
Jian Zhang <zhangjian.fnst@cn.fujitsu.com>
Jie Luo <luo612@zju.edu.cn>
Jilles Oldenbeuving <ojilles@gmail.com>
Jim Galasyn <jim.galasyn@docker.com>
Jimmy Leger <jimmy.leger@gmail.com>
Jimmy Song <rootsongjc@gmail.com>
jimmyxian <jimmyxian2004@yahoo.com.cn>
Jintao Zhang <zhangjintao9020@gmail.com>
Joao Fernandes <joao.fernandes@docker.com>
Joe Doliner <jdoliner@pachyderm.io>
Joe Gordon <joe.gordon0@gmail.com>
Joel Handwell <joelhandwell@gmail.com>
Joey Geiger <jgeiger@gmail.com>
Joffrey F <joffrey@docker.com>
Johan Euphrosine <proppy@google.com>
Johannes 'fish' Ziemke <github@freigeist.org>
John Feminella <jxf@jxf.me>
John Harris <john@johnharris.io>
John Howard (VM) <John.Howard@microsoft.com>
John Laswell <john.n.laswell@gmail.com>
John Maguire <jmaguire@duosecurity.com>
John Mulhausen <john@docker.com>
John Starks <jostarks@microsoft.com>
John Stephens <johnstep@docker.com>
John Tims <john.k.tims@gmail.com>
John V. Martinez <jvmatl@gmail.com>
John Willis <john.willis@docker.com>
Jonathan Boulle <jonathanboulle@gmail.com>
Jonathan Lee <jonjohn1232009@gmail.com>
Jonathan Lomas <jonathan@floatinglomas.ca>
Jonathan McCrohan <jmccrohan@gmail.com>
Jonh Wendell <jonh.wendell@redhat.com>
Jordan Jennings <jjn2009@gmail.com>
Joseph Kern <jkern@semafour.net>
Josh Bodah <jb3689@yahoo.com>
Josh Chorlton <jchorlton@gmail.com>
Josh Hawn <josh.hawn@docker.com>
Josh Horwitz <horwitz@addthis.com>
Josh Soref <jsoref@gmail.com>
Julien Barbier <write0@gmail.com>
Julien Kassar <github@kassisol.com>
Julien Maitrehenry <julien.maitrehenry@me.com>
Justas Brazauskas <brazauskasjustas@gmail.com>
Justin Cormack <justin.cormack@docker.com>
Justin Simonelis <justin.p.simonelis@gmail.com>
Justyn Temme <justyntemme@gmail.com>
Jyrki Puttonen <jyrkiput@gmail.com>
Jérémie Drouet <jeremie.drouet@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com>
Jörg Thalheim <joerg@higgsboson.tk>
Kai Blin <kai@samba.org>
Kai Qiang Wu (Kennan) <wkq5325@gmail.com>
Kara Alexandra <kalexandra@us.ibm.com>
Kareem Khazem <karkhaz@karkhaz.com>
Karthik Nayak <Karthik.188@gmail.com>
Kat Samperi <kat.samperi@gmail.com>
Katie McLaughlin <katie@glasnt.com>
Ke Xu <leonhartx.k@gmail.com>
Kei Ohmura <ohmura.kei@gmail.com>
Keith Hudgins <greenman@greenman.org>
Ken Cochrane <kencochrane@gmail.com>
Ken ICHIKAWA <ichikawa.ken@jp.fujitsu.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Burke <kev@inburke.com>
Kevin Feyrer <kevin.feyrer@btinternet.com>
Kevin Kern <kaiwentan@harmonycloud.cn>
Kevin Kirsche <Kev.Kirsche+GitHub@gmail.com>
Kevin Meredith <kevin.m.meredith@gmail.com>
Kevin Richardson <kevin@kevinrichardson.co>
khaled souf <khaled.souf@gmail.com>
Kim Eik <kim@heldig.org>
Kir Kolyshkin <kolyshkin@gmail.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Krasi Georgiev <krasi@vip-consult.solutions>
Kris-Mikael Krister <krismikael@protonmail.com>
Kun Zhang <zkazure@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Kyle Spiers <kyle@spiers.me>
Lachlan Cooper <lachlancooper@gmail.com>
Lai Jiangshan <jiangshanlai@gmail.com>
Lars Kellogg-Stedman <lars@redhat.com>
Laura Frank <ljfrank@gmail.com>
Laurent Erignoux <lerignoux@gmail.com>
Lee Gaines <eightlimbed@gmail.com>
Lei Jitang <leijitang@huawei.com>
Lennie <github@consolejunkie.net>
Leo Gallucci <elgalu3@gmail.com>
Lewis Daly <lewisdaly@me.com>
Li Yi <denverdino@gmail.com>
Li Yi <weiyuan.yl@alibaba-inc.com>
Liang-Chi Hsieh <viirya@gmail.com>
Lifubang <lifubang@acmcoder.com>
Lihua Tang <lhtang@alauda.io>
Lily Guo <lily.guo@docker.com>
Lin Lu <doraalin@163.com>
Linus Heckemann <lheckemann@twig-world.com>
Liping Xue <lipingxue@gmail.com>
Liron Levin <liron@twistlock.com>
liwenqi <vikilwq@zju.edu.cn>
lixiaobing10051267 <li.xiaobing1@zte.com.cn>
Lloyd Dewolf <foolswisdom@gmail.com>
Lorenzo Fontana <lo@linux.com>
Louis Opter <kalessin@kalessin.fr>
Luca Favatella <luca.favatella@erlang-solutions.com>
Luca Marturana <lucamarturana@gmail.com>
Lucas Chan <lucas-github@lucaschan.com>
Luka Hartwig <mail@lukahartwig.de>
Lukasz Zajaczkowski <Lukasz.Zajaczkowski@ts.fujitsu.com>
Lydell Manganti <LydellManganti@users.noreply.github.com>
Lénaïc Huard <lhuard@amadeus.com>
Ma Shimiao <mashimiao.fnst@cn.fujitsu.com>
Mabin <bin.ma@huawei.com>
Madhav Puri <madhav.puri@gmail.com>
Madhu Venugopal <madhu@socketplane.io>
Malte Janduda <mail@janduda.net>
Manjunath A Kumatagi <mkumatag@in.ibm.com>
Mansi Nahar <mmn4185@rit.edu>
mapk0y <mapk0y@gmail.com>
Marc Bihlmaier <marc.bihlmaier@reddoxx.com>
Marco Mariani <marco.mariani@alterway.fr>
Marco Vedovati <mvedovati@suse.com>
Marcus Martins <marcus@docker.com>
Marianna Tessel <mtesselh@gmail.com>
Marius Sturm <marius@graylog.com>
Mark Oates <fl0yd@me.com>
Marsh Macy <marsma@microsoft.com>
Martin Mosegaard Amdisen <martin.amdisen@praqma.com>
Mary Anthony <mary.anthony@docker.com>
Mason Fish <mason.fish@docker.com>
Mason Malone <mason.malone@gmail.com>
Mateusz Major <apkd@users.noreply.github.com>
Mathieu Champlon <mathieu.champlon@docker.com>
Matt Gucci <matt9ucci@gmail.com>
Matt Robenolt <matt@ydekproductions.com>
Matteo Orefice <matteo.orefice@bites4bits.software>
Matthew Heon <mheon@redhat.com>
Matthieu Hauglustaine <matt.hauglustaine@gmail.com>
Mauro Porras P <mauroporrasp@gmail.com>
Max Shytikov <mshytikov@gmail.com>
Maxime Petazzoni <max@signalfuse.com>
Mei ChunTao <mei.chuntao@zte.com.cn>
Micah Zoltu <micah@newrelic.com>
Michael A. Smith <michael@smith-li.com>
Michael Bridgen <mikeb@squaremobius.net>
Michael Crosby <michael@docker.com>
Michael Friis <friism@gmail.com>
Michael Irwin <mikesir87@gmail.com>
Michael Käufl <docker@c.michael-kaeufl.de>
Michael Prokop <github@michael-prokop.at>
Michael Scharf <github@scharf.gr>
Michael Spetsiotis <michael_spets@hotmail.com>
Michael Steinert <mike.steinert@gmail.com>
Michael West <mwest@mdsol.com>
Michal Minář <miminar@redhat.com>
Michał Czeraszkiewicz <czerasz@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com>
Mihuleacc Sergiu <mihuleac.sergiu@gmail.com>
Mike Brown <brownwm@us.ibm.com>
Mike Casas <mkcsas0@gmail.com>
Mike Danese <mikedanese@google.com>
Mike Dillon <mike@embody.org>
Mike Goelzer <mike.goelzer@docker.com>
Mike MacCana <mike.maccana@gmail.com>
mikelinjie <294893458@qq.com>
Mikhail Vasin <vasin@cloud-tv.ru>
Milind Chawre <milindchawre@gmail.com>
Mindaugas Rukas <momomg@gmail.com>
Misty Stanley-Jones <misty@docker.com>
Mohammad Banikazemi <mb@us.ibm.com>
Mohammed Aaqib Ansari <maaquib@gmail.com>
Mohini Anne Dsouza <mohini3917@gmail.com>
Moorthy RS <rsmoorthy@gmail.com>
Morgan Bauer <mbauer@us.ibm.com>
Moysés Borges <moysesb@gmail.com>
Mrunal Patel <mrunalp@gmail.com>
muicoder <muicoder@gmail.com>
Muthukumar R <muthur@gmail.com>
Máximo Cuadros <mcuadros@gmail.com>
Mårten Cassel <marten.cassel@gmail.com>
Nace Oroz <orkica@gmail.com>
Nahum Shalman <nshalman@omniti.com>
Nalin Dahyabhai <nalin@redhat.com>
Nao YONASHIRO <owan.orisano@gmail.com>
Nassim 'Nass' Eddequiouaq <eddequiouaq.nassim@gmail.com>
Natalie Parker <nparker@omnifone.com>
Nate Brennand <nate.brennand@clever.com>
Nathan Hsieh <hsieh.nathan@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com>
Nathan McCauley <nathan.mccauley@docker.com>
Neil Peterson <neilpeterson@outlook.com>
Nick Adcock <nick.adcock@docker.com>
Nico Stapelbroek <nstapelbroek@gmail.com>
Nicola Kabar <nicolaka@gmail.com>
Nicolas Borboën <ponsfrilus@gmail.com>
Nicolas De Loof <nicolas.deloof@gmail.com>
Nikhil Chawla <chawlanikhil24@gmail.com>
Nikolas Garofil <nikolas.garofil@uantwerpen.be>
Nikolay Milovanov <nmil@itransformers.net>
Nir Soffer <nsoffer@redhat.com>
Nishant Totla <nishanttotla@gmail.com>
NIWA Hideyuki <niwa.niwa@nifty.ne.jp>
Noah Treuhaft <noah.treuhaft@docker.com>
O.S. Tezer <ostezer@gmail.com>
ohmystack <jun.jiang02@ele.me>
Olle Jonsson <olle.jonsson@gmail.com>
Olli Janatuinen <olli.janatuinen@gmail.com>
Otto Kekäläinen <otto@seravo.fi>
Ovidio Mallo <ovidio.mallo@gmail.com>
Pascal Borreli <pascal@borreli.com>
Patrick Böänziger <patrick.baenziger@bsi-software.com>
Patrick Hemmer <patrick.hemmer@gmail.com>
Patrick Lang <plang@microsoft.com>
Paul <paul9869@gmail.com>
Paul Kehrer <paul.l.kehrer@gmail.com>
Paul Lietar <paul@lietar.net>
Paul Weaver <pauweave@cisco.com>
Pavel Pospisil <pospispa@gmail.com>
Paweł Szczekutowicz <pszczekutowicz@gmail.com>
Peeyush Gupta <gpeeyush@linux.vnet.ibm.com>
Per Lundberg <per.lundberg@ecraft.com>
Peter Edge <peter.edge@gmail.com>
Peter Hsu <shhsu@microsoft.com>
Peter Jaffe <pjaffe@nevo.com>
Peter Kehl <peter.kehl@gmail.com>
Peter Nagy <xificurC@gmail.com>
Peter Salvatore <peter@psftw.com>
Peter Waller <p@pwaller.net>
Phil Estes <estesp@linux.vnet.ibm.com>
Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com>
Philipp Schmied <pschmied@schutzwerk.com>
pidster <pid@pidster.com>
pixelistik <pixelistik@users.noreply.github.com>
Pratik Karki <prertik@outlook.com>
Prayag Verma <prayag.verma@gmail.com>
Preston Cowley <preston.cowley@sony.com>
Pure White <daniel48@126.com>
Qiang Huang <h.huangqiang@huawei.com>
Qinglan Peng <qinglanpeng@zju.edu.cn>
qudongfang <qudongfang@gmail.com>
Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Ray Tsang <rayt@google.com>
Reficul <xuzhenglun@gmail.com>
Remy Suen <remy.suen@gmail.com>
Renaud Gaubert <rgaubert@nvidia.com>
Ricardo N Feliciano <FelicianoTech@gmail.com>
Rich Moyse <rich@moyse.us>
Richard Mathie <richard.mathie@amey.co.uk>
Richard Scothern <richard.scothern@gmail.com>
Rick Wieman <git@rickw.nl>
Ritesh H Shukla <sritesh@vmware.com>
Riyaz Faizullabhoy <riyaz.faizullabhoy@docker.com>
Robert Wallis <smilingrob@gmail.com>
Robin Naundorf <r.naundorf@fh-muenster.de>
Robin Speekenbrink <robin@kingsquare.nl>
Rodolfo Ortiz <rodolfo.ortiz@definityfirst.com>
Rogelio Canedo <rcanedo@mappy.priv>
Roland Kammerer <roland.kammerer@linbit.com>
Roman Dudin <katrmr@gmail.com>
Rory Hunter <roryhunter2@gmail.com>
Ross Boucher <rboucher@gmail.com>
Rubens Figueiredo <r.figueiredo.52@gmail.com>
Rui Cao <ruicao@alauda.io>
Ryan Belgrave <rmb1993@gmail.com>
Ryan Detzel <ryan.detzel@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Ryan Wilson-Perkin <ryanwilsonperkin@gmail.com>
Ryan Zhang <ryan.zhang@docker.com>
Sainath Grandhi <sainath.grandhi@intel.com>
Sakeven Jiang <jc5930@sina.cn>
Sally O'Malley <somalley@redhat.com>
Sam Neirinck <sam@samneirinck.com>
Sambuddha Basu <sambuddhabasu1@gmail.com>
Sami Tabet <salph.tabet@gmail.com>
Samuel Karp <skarp@amazon.com>
Santhosh Manohar <santhosh@docker.com>
Scott Brenner <scott@scottbrenner.me>
Scott Collier <emailscottcollier@gmail.com>
Sean Christopherson <sean.j.christopherson@intel.com>
Sean Rodman <srodman7689@gmail.com>
Sebastiaan van Stijn <github@gone.nl>
Sergey Tryuber <Sergeant007@users.noreply.github.com>
Serhat Gülçiçek <serhat25@gmail.com>
Sevki Hasirci <s@sevki.org>
Shaun Kaasten <shaunk@gmail.com>
Sheng Yang <sheng@yasker.org>
Shijiang Wei <mountkin@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com>
Shoubhik Bose <sbose78@gmail.com>
Shukui Yang <yangshukui@huawei.com>
Sian Lerk Lau <kiawin@gmail.com>
Sidhartha Mani <sidharthamn@gmail.com>
sidharthamani <sid@rancher.com>
Silvin Lubecki <silvin.lubecki@docker.com>
Simei He <hesimei@zju.edu.cn>
Simon Ferquel <simon.ferquel@docker.com>
Sindhu S <sindhus@live.in>
Slava Semushin <semushin@redhat.com>
Solomon Hykes <solomon@docker.com>
Song Gao <song@gao.io>
Spencer Brown <spencer@spencerbrown.org>
squeegels <1674195+squeegels@users.noreply.github.com>
Srini Brahmaroutu <srbrahma@us.ibm.com>
Stefan S. <tronicum@user.github.com>
Stefan Scherer <stefan.scherer@docker.com>
Stefan Weil <sw@weilnetz.de>
Stephane Jeandeaux <stephane.jeandeaux@gmail.com>
Stephen Day <stevvooe@gmail.com>
Stephen Rust <srust@blockbridge.com>
Steve Durrheimer <s.durrheimer@gmail.com>
Steve Richards <steve.richards@docker.com>
Steven Burgess <steven.a.burgess@hotmail.com>
Subhajit Ghosh <isubuz.g@gmail.com>
Sun Jianbo <wonderflow.sun@gmail.com>
Sune Keller <absukl@almbrand.dk>
Sungwon Han <sungwon.han@navercorp.com>
Sunny Gogoi <indiasuny000@gmail.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sylvain Baubeau <sbaubeau@redhat.com>
Sébastien HOUZÉ <cto@verylastroom.com>
T K Sourabh <sourabhtk37@gmail.com>
TAGOMORI Satoshi <tagomoris@gmail.com>
taiji-tech <csuhqg@foxmail.com>
Taylor Jones <monitorjbl@gmail.com>
Tejaswini Duggaraju <naduggar@microsoft.com>
Thatcher Peskens <thatcher@docker.com>
Thomas Gazagnaire <thomas@gazagnaire.org>
Thomas Krzero <thomas.kovatchitch@gmail.com>
Thomas Leonard <thomas.leonard@docker.com>
Thomas Léveil <thomasleveil@gmail.com>
Thomas Riccardi <thomas@deepomatic.com>
Thomas Swift <tgs242@gmail.com>
Tianon Gravi <admwiggin@gmail.com>
Tianyi Wang <capkurmagati@gmail.com>
Tibor Vass <teabee89@gmail.com>
Tim Dettrick <t.dettrick@uq.edu.au>
Tim Hockin <thockin@google.com>
Tim Smith <timbot@google.com>
Tim Waugh <twaugh@redhat.com>
Tim Wraight <tim.wraight@tangentlabs.co.uk>
timfeirg <kkcocogogo@gmail.com>
Timothy Hobbs <timothyhobbs@seznam.cz>
Tobias Bradtke <webwurst@gmail.com>
Tobias Gesellchen <tobias@gesellix.de>
Todd Whiteman <todd.whiteman@joyent.com>
Tom Denham <tom@tomdee.co.uk>
Tom Fotherby <tom+github@peopleperhour.com>
Tom Klingenberg <tklingenberg@lastflood.net>
Tom Milligan <code@tommilligan.net>
Tom X. Tobin <tomxtobin@tomxtobin.com>
Tomas Tomecek <ttomecek@redhat.com>
Tomasz Kopczynski <tomek@kopczynski.net.pl>
Tomáš Hrčka <thrcka@redhat.com>
Tony Abboud <tdabboud@hotmail.com>
Tõnis Tiigi <tonistiigi@gmail.com>
Trapier Marshall <trapier.marshall@docker.com>
Travis Cline <travis.cline@gmail.com>
Tristan Carel <tristan@cogniteev.com>
Tycho Andersen <tycho@docker.com>
Tycho Andersen <tycho@tycho.ws>
uhayate <uhayate.gong@daocloud.io>
Ulysses Souza <ulysses.souza@docker.com>
Umesh Yadav <umesh4257@gmail.com>
Valentin Lorentz <progval+git@progval.net>
Veres Lajos <vlajos@gmail.com>
Victor Vieux <victor.vieux@docker.com>
Victoria Bialas <victoria.bialas@docker.com>
Viktor Stanchev <me@viktorstanchev.com>
Vimal Raghubir <vraghubir0418@gmail.com>
Vincent Batts <vbatts@redhat.com>
Vincent Bernat <Vincent.Bernat@exoscale.ch>
Vincent Demeester <vincent.demeester@docker.com>
Vincent Woo <me@vincentwoo.com>
Vishnu Kannan <vishnuk@google.com>
Vivek Goyal <vgoyal@redhat.com>
Wang Jie <wangjie5@chinaskycloud.com>
Wang Lei <wanglei@tenxcloud.com>
Wang Long <long.wanglong@huawei.com>
Wang Ping <present.wp@icloud.com>
Wang Xing <hzwangxing@corp.netease.com>
Wang Yuexiao <wang.yuexiao@zte.com.cn>
Wataru Ishida <ishida.wataru@lab.ntt.co.jp>
Wayne Song <wsong@docker.com>
Wen Cheng Ma <wenchma@cn.ibm.com>
Wenzhi Liang <wenzhi.liang@gmail.com>
Wes Morgan <cap10morgan@gmail.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
William Henry <whenry@redhat.com>
Xianglin Gao <xlgao@zju.edu.cn>
Xiaodong Zhang <a4012017@sina.com>
Xiaoxi He <xxhe@alauda.io>
Xinbo Weng <xihuanbo_0521@zju.edu.cn>
Xuecong Liao <satorulogic@gmail.com>
Yan Feng <yanfeng2@huawei.com>
Yanqiang Miao <miao.yanqiang@zte.com.cn>
Yassine Tijani <yasstij11@gmail.com>
Yi EungJun <eungjun.yi@navercorp.com>
Ying Li <ying.li@docker.com>
Yong Tang <yong.tang.github@outlook.com>
Yosef Fertel <yfertel@gmail.com>
Yu Peng <yu.peng36@zte.com.cn>
Yuan Sun <sunyuan3@huawei.com>
Yue Zhang <zy675793960@yeah.net>
Yunxiang Huang <hyxqshk@vip.qq.com>
Zachary Romero <zacromero3@gmail.com>
zebrilee <zebrilee@gmail.com>
Zhang Kun <zkazure@gmail.com>
Zhang Wei <zhangwei555@huawei.com>
Zhang Wentao <zhangwentao234@huawei.com>
ZhangHang <stevezhang2014@gmail.com>
zhenghenghuo <zhenghenghuo@zju.edu.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhoulin Xie <zhoulin.xie@daocloud.io>
Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
Álex González <agonzalezro@gmail.com>
Álvaro Lázaro <alvaro.lazaro.g@gmail.com>
Átila Camurça Alves <camurca.home@gmail.com>
徐俊杰 <paco.xu@daocloud.io>

191
vendor/github.com/docker/cli/LICENSE generated vendored Normal file

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2013-2017 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

19
vendor/github.com/docker/cli/NOTICE generated vendored Normal file

@ -0,0 +1,19 @@
Docker
Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:
Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
For more information, please see https://www.bis.doc.gov
See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.

136
vendor/github.com/docker/cli/cli/config/config.go generated vendored Normal file

@ -0,0 +1,136 @@
package config
import (
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/docker/cli/cli/config/configfile"
"github.com/docker/cli/cli/config/credentials"
"github.com/docker/cli/cli/config/types"
"github.com/docker/docker/pkg/homedir"
"github.com/pkg/errors"
)
const (
// ConfigFileName is the name of config file
ConfigFileName = "config.json"
configFileDir = ".docker"
oldConfigfile = ".dockercfg"
contextsDir = "contexts"
)
var (
configDir = os.Getenv("DOCKER_CONFIG")
)
func init() {
if configDir == "" {
configDir = filepath.Join(homedir.Get(), configFileDir)
}
}
// Dir returns the directory the configuration file is stored in
func Dir() string {
return configDir
}
// ContextStoreDir returns the directory the docker contexts are stored in
func ContextStoreDir() string {
return filepath.Join(Dir(), contextsDir)
}
// SetDir sets the directory the configuration file is stored in
func SetDir(dir string) {
configDir = filepath.Clean(dir)
}
// Path returns the path to a file relative to the config dir
func Path(p ...string) (string, error) {
path := filepath.Join(append([]string{Dir()}, p...)...)
if !strings.HasPrefix(path, Dir()+string(filepath.Separator)) {
return "", errors.Errorf("path %q is outside of root config directory %q", path, Dir())
}
return path, nil
}
// LegacyLoadFromReader is a convenience function that creates a ConfigFile object from
// a non-nested reader
func LegacyLoadFromReader(configData io.Reader) (*configfile.ConfigFile, error) {
configFile := configfile.ConfigFile{
AuthConfigs: make(map[string]types.AuthConfig),
}
err := configFile.LegacyLoadFromReader(configData)
return &configFile, err
}
// LoadFromReader is a convenience function that creates a ConfigFile object from
// a reader
func LoadFromReader(configData io.Reader) (*configfile.ConfigFile, error) {
configFile := configfile.ConfigFile{
AuthConfigs: make(map[string]types.AuthConfig),
}
err := configFile.LoadFromReader(configData)
return &configFile, err
}
// Load reads the configuration files in the given directory, sets up the
// auth config information, and returns the resulting ConfigFile.
// FIXME: use the internal golang config parser
func Load(configDir string) (*configfile.ConfigFile, error) {
if configDir == "" {
configDir = Dir()
}
filename := filepath.Join(configDir, ConfigFileName)
configFile := configfile.New(filename)
// Try happy path first - latest config file
if _, err := os.Stat(filename); err == nil {
file, err := os.Open(filename)
if err != nil {
return configFile, errors.Wrap(err, filename)
}
defer file.Close()
err = configFile.LoadFromReader(file)
if err != nil {
err = errors.Wrap(err, filename)
}
return configFile, err
} else if !os.IsNotExist(err) {
// if file is there but we can't stat it for any reason other
// than it doesn't exist then stop
return configFile, errors.Wrap(err, filename)
}
// Can't find latest config file so check for the old one
confFile := filepath.Join(homedir.Get(), oldConfigfile)
if _, err := os.Stat(confFile); err != nil {
return configFile, nil // missing file is not an error
}
file, err := os.Open(confFile)
if err != nil {
return configFile, errors.Wrap(err, filename)
}
defer file.Close()
err = configFile.LegacyLoadFromReader(file)
if err != nil {
return configFile, errors.Wrap(err, filename)
}
return configFile, nil
}
// LoadDefaultConfigFile attempts to load the default config file and returns
// an initialized ConfigFile struct even if none is found.
func LoadDefaultConfigFile(stderr io.Writer) *configfile.ConfigFile {
configFile, err := Load(Dir())
if err != nil {
fmt.Fprintf(stderr, "WARNING: Error loading config file: %v\n", err)
}
if !configFile.ContainsAuth() {
configFile.CredentialsStore = credentials.DetectDefaultStore(configFile.CredentialsStore)
}
return configFile
}
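
A minimal usage sketch for the config package above, assuming a caller that wants to read credentials from an alternative directory such as /var/lib/faasd/.docker (the path is illustrative, not part of the vendored code):

package main

import (
	"fmt"
	"os"

	"github.com/docker/cli/cli/config"
)

func main() {
	// Point the package at an alternative config directory before loading;
	// otherwise it defaults to $DOCKER_CONFIG or ~/.docker (see init above).
	config.SetDir("/var/lib/faasd/.docker") // illustrative path

	// LoadDefaultConfigFile never fails hard: on error it prints a warning to
	// the given writer and still returns an initialized ConfigFile.
	cfg := config.LoadDefaultConfigFile(os.Stderr)
	fmt.Println("config loaded from:", cfg.GetFilename())
}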


@ -0,0 +1,385 @@
package configfile
import (
"encoding/base64"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/docker/cli/cli/config/credentials"
"github.com/docker/cli/cli/config/types"
"github.com/pkg/errors"
)
const (
// This constant is only used for really old config files when the
// URL wasn't saved as part of the config file and it was just
// assumed to be this value.
defaultIndexServer = "https://index.docker.io/v1/"
)
// ConfigFile ~/.docker/config.json file info
type ConfigFile struct {
AuthConfigs map[string]types.AuthConfig `json:"auths"`
HTTPHeaders map[string]string `json:"HttpHeaders,omitempty"`
PsFormat string `json:"psFormat,omitempty"`
ImagesFormat string `json:"imagesFormat,omitempty"`
NetworksFormat string `json:"networksFormat,omitempty"`
PluginsFormat string `json:"pluginsFormat,omitempty"`
VolumesFormat string `json:"volumesFormat,omitempty"`
StatsFormat string `json:"statsFormat,omitempty"`
DetachKeys string `json:"detachKeys,omitempty"`
CredentialsStore string `json:"credsStore,omitempty"`
CredentialHelpers map[string]string `json:"credHelpers,omitempty"`
Filename string `json:"-"` // Note: for internal use only
ServiceInspectFormat string `json:"serviceInspectFormat,omitempty"`
ServicesFormat string `json:"servicesFormat,omitempty"`
TasksFormat string `json:"tasksFormat,omitempty"`
SecretFormat string `json:"secretFormat,omitempty"`
ConfigFormat string `json:"configFormat,omitempty"`
NodesFormat string `json:"nodesFormat,omitempty"`
PruneFilters []string `json:"pruneFilters,omitempty"`
Proxies map[string]ProxyConfig `json:"proxies,omitempty"`
Experimental string `json:"experimental,omitempty"`
StackOrchestrator string `json:"stackOrchestrator,omitempty"`
Kubernetes *KubernetesConfig `json:"kubernetes,omitempty"`
CurrentContext string `json:"currentContext,omitempty"`
CLIPluginsExtraDirs []string `json:"cliPluginsExtraDirs,omitempty"`
Plugins map[string]map[string]string `json:"plugins,omitempty"`
Aliases map[string]string `json:"aliases,omitempty"`
}
// ProxyConfig contains proxy configuration settings
type ProxyConfig struct {
HTTPProxy string `json:"httpProxy,omitempty"`
HTTPSProxy string `json:"httpsProxy,omitempty"`
NoProxy string `json:"noProxy,omitempty"`
FTPProxy string `json:"ftpProxy,omitempty"`
}
// KubernetesConfig contains Kubernetes orchestrator settings
type KubernetesConfig struct {
AllNamespaces string `json:"allNamespaces,omitempty"`
}
// New initializes an empty configuration file for the given filename 'fn'
func New(fn string) *ConfigFile {
return &ConfigFile{
AuthConfigs: make(map[string]types.AuthConfig),
HTTPHeaders: make(map[string]string),
Filename: fn,
Plugins: make(map[string]map[string]string),
Aliases: make(map[string]string),
}
}
// LegacyLoadFromReader reads the non-nested configuration data given and sets up the
// auth config information with given directory and populates the receiver object
func (configFile *ConfigFile) LegacyLoadFromReader(configData io.Reader) error {
b, err := ioutil.ReadAll(configData)
if err != nil {
return err
}
if err := json.Unmarshal(b, &configFile.AuthConfigs); err != nil {
arr := strings.Split(string(b), "\n")
if len(arr) < 2 {
return errors.Errorf("The Auth config file is empty")
}
authConfig := types.AuthConfig{}
origAuth := strings.Split(arr[0], " = ")
if len(origAuth) != 2 {
return errors.Errorf("Invalid Auth config file")
}
authConfig.Username, authConfig.Password, err = decodeAuth(origAuth[1])
if err != nil {
return err
}
authConfig.ServerAddress = defaultIndexServer
configFile.AuthConfigs[defaultIndexServer] = authConfig
} else {
for k, authConfig := range configFile.AuthConfigs {
authConfig.Username, authConfig.Password, err = decodeAuth(authConfig.Auth)
if err != nil {
return err
}
authConfig.Auth = ""
authConfig.ServerAddress = k
configFile.AuthConfigs[k] = authConfig
}
}
return nil
}
// LoadFromReader reads the configuration data given and sets up the auth config
// information with given directory and populates the receiver object
func (configFile *ConfigFile) LoadFromReader(configData io.Reader) error {
if err := json.NewDecoder(configData).Decode(&configFile); err != nil {
return err
}
var err error
for addr, ac := range configFile.AuthConfigs {
ac.Username, ac.Password, err = decodeAuth(ac.Auth)
if err != nil {
return err
}
ac.Auth = ""
ac.ServerAddress = addr
configFile.AuthConfigs[addr] = ac
}
return checkKubernetesConfiguration(configFile.Kubernetes)
}
// ContainsAuth returns whether there is authentication configured
// in this file or not.
func (configFile *ConfigFile) ContainsAuth() bool {
return configFile.CredentialsStore != "" ||
len(configFile.CredentialHelpers) > 0 ||
len(configFile.AuthConfigs) > 0
}
// GetAuthConfigs returns the mapping of repo to auth configuration
func (configFile *ConfigFile) GetAuthConfigs() map[string]types.AuthConfig {
return configFile.AuthConfigs
}
// SaveToWriter encodes and writes out all the authorization information to
// the given writer
func (configFile *ConfigFile) SaveToWriter(writer io.Writer) error {
// Encode sensitive data into a new/temp struct
tmpAuthConfigs := make(map[string]types.AuthConfig, len(configFile.AuthConfigs))
for k, authConfig := range configFile.AuthConfigs {
authCopy := authConfig
// encode and save the authstring, while blanking out the original fields
authCopy.Auth = encodeAuth(&authCopy)
authCopy.Username = ""
authCopy.Password = ""
authCopy.ServerAddress = ""
tmpAuthConfigs[k] = authCopy
}
saveAuthConfigs := configFile.AuthConfigs
configFile.AuthConfigs = tmpAuthConfigs
defer func() { configFile.AuthConfigs = saveAuthConfigs }()
data, err := json.MarshalIndent(configFile, "", "\t")
if err != nil {
return err
}
_, err = writer.Write(data)
return err
}
// Save encodes and writes out all the authorization information
func (configFile *ConfigFile) Save() error {
if configFile.Filename == "" {
return errors.Errorf("Can't save config with empty filename")
}
dir := filepath.Dir(configFile.Filename)
if err := os.MkdirAll(dir, 0700); err != nil {
return err
}
temp, err := ioutil.TempFile(dir, filepath.Base(configFile.Filename))
if err != nil {
return err
}
err = configFile.SaveToWriter(temp)
temp.Close()
if err != nil {
os.Remove(temp.Name())
return err
}
return os.Rename(temp.Name(), configFile.Filename)
}
// ParseProxyConfig computes proxy configuration by retrieving the config for the provided host and
// then checking this against any environment variables provided to the container
func (configFile *ConfigFile) ParseProxyConfig(host string, runOpts map[string]*string) map[string]*string {
var cfgKey string
if _, ok := configFile.Proxies[host]; !ok {
cfgKey = "default"
} else {
cfgKey = host
}
config := configFile.Proxies[cfgKey]
permitted := map[string]*string{
"HTTP_PROXY": &config.HTTPProxy,
"HTTPS_PROXY": &config.HTTPSProxy,
"NO_PROXY": &config.NoProxy,
"FTP_PROXY": &config.FTPProxy,
}
m := runOpts
if m == nil {
m = make(map[string]*string)
}
for k := range permitted {
if *permitted[k] == "" {
continue
}
if _, ok := m[k]; !ok {
m[k] = permitted[k]
}
if _, ok := m[strings.ToLower(k)]; !ok {
m[strings.ToLower(k)] = permitted[k]
}
}
return m
}
// encodeAuth creates a base64 encoded string containing the authorization information
func encodeAuth(authConfig *types.AuthConfig) string {
if authConfig.Username == "" && authConfig.Password == "" {
return ""
}
authStr := authConfig.Username + ":" + authConfig.Password
msg := []byte(authStr)
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(msg)))
base64.StdEncoding.Encode(encoded, msg)
return string(encoded)
}
// decodeAuth decodes a base64 encoded string and returns username and password
func decodeAuth(authStr string) (string, string, error) {
if authStr == "" {
return "", "", nil
}
decLen := base64.StdEncoding.DecodedLen(len(authStr))
decoded := make([]byte, decLen)
authByte := []byte(authStr)
n, err := base64.StdEncoding.Decode(decoded, authByte)
if err != nil {
return "", "", err
}
if n > decLen {
return "", "", errors.Errorf("Something went wrong decoding auth config")
}
arr := strings.SplitN(string(decoded), ":", 2)
if len(arr) != 2 {
return "", "", errors.Errorf("Invalid auth configuration file")
}
password := strings.Trim(arr[1], "\x00")
return arr[0], password, nil
}
// GetCredentialsStore returns a new credentials store from the settings in the
// configuration file
func (configFile *ConfigFile) GetCredentialsStore(registryHostname string) credentials.Store {
if helper := getConfiguredCredentialStore(configFile, registryHostname); helper != "" {
return newNativeStore(configFile, helper)
}
return credentials.NewFileStore(configFile)
}
// var for unit testing.
var newNativeStore = func(configFile *ConfigFile, helperSuffix string) credentials.Store {
return credentials.NewNativeStore(configFile, helperSuffix)
}
// GetAuthConfig for a repository from the credential store
func (configFile *ConfigFile) GetAuthConfig(registryHostname string) (types.AuthConfig, error) {
return configFile.GetCredentialsStore(registryHostname).Get(registryHostname)
}
// getConfiguredCredentialStore returns the credential helper configured for the
// given registry, the default credsStore, or the empty string if neither are
// configured.
func getConfiguredCredentialStore(c *ConfigFile, registryHostname string) string {
if c.CredentialHelpers != nil && registryHostname != "" {
if helper, exists := c.CredentialHelpers[registryHostname]; exists {
return helper
}
}
return c.CredentialsStore
}
// GetAllCredentials returns all of the credentials stored in all of the
// configured credential stores.
func (configFile *ConfigFile) GetAllCredentials() (map[string]types.AuthConfig, error) {
auths := make(map[string]types.AuthConfig)
addAll := func(from map[string]types.AuthConfig) {
for reg, ac := range from {
auths[reg] = ac
}
}
defaultStore := configFile.GetCredentialsStore("")
newAuths, err := defaultStore.GetAll()
if err != nil {
return nil, err
}
addAll(newAuths)
// Auth configs from a registry-specific helper should override those from the default store.
for registryHostname := range configFile.CredentialHelpers {
newAuth, err := configFile.GetAuthConfig(registryHostname)
if err != nil {
return nil, err
}
auths[registryHostname] = newAuth
}
return auths, nil
}
// GetFilename returns the file name that this config file is based on.
func (configFile *ConfigFile) GetFilename() string {
return configFile.Filename
}
// PluginConfig retrieves the requested option for the given plugin.
func (configFile *ConfigFile) PluginConfig(pluginname, option string) (string, bool) {
if configFile.Plugins == nil {
return "", false
}
pluginConfig, ok := configFile.Plugins[pluginname]
if !ok {
return "", false
}
value, ok := pluginConfig[option]
return value, ok
}
// SetPluginConfig sets the option to the given value for the given
// plugin. Passing a value of "" will remove the option. If removing
// the final config item for a given plugin then also cleans up the
// overall plugin entry.
func (configFile *ConfigFile) SetPluginConfig(pluginname, option, value string) {
if configFile.Plugins == nil {
configFile.Plugins = make(map[string]map[string]string)
}
pluginConfig, ok := configFile.Plugins[pluginname]
if !ok {
pluginConfig = make(map[string]string)
configFile.Plugins[pluginname] = pluginConfig
}
if value != "" {
pluginConfig[option] = value
} else {
delete(pluginConfig, option)
}
if len(pluginConfig) == 0 {
delete(configFile.Plugins, pluginname)
}
}
func checkKubernetesConfiguration(kubeConfig *KubernetesConfig) error {
if kubeConfig == nil {
return nil
}
switch kubeConfig.AllNamespaces {
case "":
case "enabled":
case "disabled":
default:
return fmt.Errorf("invalid 'kubernetes.allNamespaces' value, should be 'enabled' or 'disabled': %s", kubeConfig.AllNamespaces)
}
return nil
}
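
A sketch of how a loaded ConfigFile is typically queried for registry credentials; the hostname is illustrative, and the lookup goes through GetCredentialsStore as defined above:

package main

import (
	"fmt"
	"os"

	"github.com/docker/cli/cli/config"
)

func main() {
	cfg := config.LoadDefaultConfigFile(os.Stderr)

	// GetAuthConfig consults a registry-specific credHelper if configured,
	// then the default credsStore, and finally the plain-text file store.
	auth, err := cfg.GetAuthConfig("registry.example.com") // illustrative hostname
	if err != nil {
		fmt.Fprintln(os.Stderr, "credential lookup failed:", err)
		return
	}
	fmt.Println("username for registry:", auth.Username)
}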


@ -0,0 +1,17 @@
package credentials
import (
"github.com/docker/cli/cli/config/types"
)
// Store is the interface that any credentials store must implement.
type Store interface {
// Erase removes credentials from the store for a given server.
Erase(serverAddress string) error
// Get retrieves credentials from the store for a given server.
Get(serverAddress string) (types.AuthConfig, error)
// GetAll retrieves all the credentials from the store.
GetAll() (map[string]types.AuthConfig, error)
// Store saves credentials in the store.
Store(authConfig types.AuthConfig) error
}
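
The interface is small enough that a test double can implement it directly; the in-memory store below is a hypothetical sketch, not something shipped with the package:

package main

import (
	"fmt"

	"github.com/docker/cli/cli/config/credentials"
	"github.com/docker/cli/cli/config/types"
)

// memoryStore is a hypothetical in-memory credentials.Store for tests.
type memoryStore struct {
	auths map[string]types.AuthConfig
}

func (m *memoryStore) Erase(serverAddress string) error {
	delete(m.auths, serverAddress)
	return nil
}

func (m *memoryStore) Get(serverAddress string) (types.AuthConfig, error) {
	return m.auths[serverAddress], nil // zero value when absent, like the file store
}

func (m *memoryStore) GetAll() (map[string]types.AuthConfig, error) {
	return m.auths, nil
}

func (m *memoryStore) Store(authConfig types.AuthConfig) error {
	m.auths[authConfig.ServerAddress] = authConfig
	return nil
}

var _ credentials.Store = (*memoryStore)(nil) // compile-time interface check

func main() {
	s := &memoryStore{auths: make(map[string]types.AuthConfig)}
	_ = s.Store(types.AuthConfig{ServerAddress: "registry.example.com", Username: "user"})
	ac, _ := s.Get("registry.example.com")
	fmt.Println(ac.Username)
}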


@ -0,0 +1,21 @@
package credentials
import (
"os/exec"
)
// DetectDefaultStore return the default credentials store for the platform if
// the store executable is available.
func DetectDefaultStore(store string) string {
platformDefault := defaultCredentialsStore()
// user defined or no default for platform
if store != "" || platformDefault == "" {
return store
}
if _, err := exec.LookPath(remoteCredentialsPrefix + platformDefault); err == nil {
return platformDefault
}
return ""
}
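
A short sketch of how DetectDefaultStore behaves; the "ecr-login" value is only an illustration of a user-configured helper name:

package main

import (
	"fmt"

	"github.com/docker/cli/cli/config/credentials"
)

func main() {
	// With no user setting, the platform default ("osxkeychain", "pass"/"secretservice"
	// or "wincred") is returned only if docker-credential-<default> is on PATH.
	fmt.Println(credentials.DetectDefaultStore(""))

	// A non-empty user setting is always returned unchanged.
	fmt.Println(credentials.DetectDefaultStore("ecr-login")) // illustrative helper name
}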


@ -0,0 +1,5 @@
package credentials
func defaultCredentialsStore() string {
return "osxkeychain"
}


@ -0,0 +1,13 @@
package credentials
import (
"os/exec"
)
func defaultCredentialsStore() string {
if _, err := exec.LookPath("pass"); err == nil {
return "pass"
}
return "secretservice"
}


@ -0,0 +1,7 @@
// +build !windows,!darwin,!linux
package credentials
func defaultCredentialsStore() string {
return ""
}


@ -0,0 +1,5 @@
package credentials
func defaultCredentialsStore() string {
return "wincred"
}


@ -0,0 +1,81 @@
package credentials
import (
"strings"
"github.com/docker/cli/cli/config/types"
)
type store interface {
Save() error
GetAuthConfigs() map[string]types.AuthConfig
GetFilename() string
}
// fileStore implements a credentials store using
// the docker configuration file to keep the credentials in plain text.
type fileStore struct {
file store
}
// NewFileStore creates a new file credentials store.
func NewFileStore(file store) Store {
return &fileStore{file: file}
}
// Erase removes the given credentials from the file store.
func (c *fileStore) Erase(serverAddress string) error {
delete(c.file.GetAuthConfigs(), serverAddress)
return c.file.Save()
}
// Get retrieves credentials for a specific server from the file store.
func (c *fileStore) Get(serverAddress string) (types.AuthConfig, error) {
authConfig, ok := c.file.GetAuthConfigs()[serverAddress]
if !ok {
// Maybe they have a legacy config file, we will iterate the keys converting
// them to the new format and testing
for r, ac := range c.file.GetAuthConfigs() {
if serverAddress == ConvertToHostname(r) {
return ac, nil
}
}
authConfig = types.AuthConfig{}
}
return authConfig, nil
}
func (c *fileStore) GetAll() (map[string]types.AuthConfig, error) {
return c.file.GetAuthConfigs(), nil
}
// Store saves the given credentials in the file store.
func (c *fileStore) Store(authConfig types.AuthConfig) error {
c.file.GetAuthConfigs()[authConfig.ServerAddress] = authConfig
return c.file.Save()
}
func (c *fileStore) GetFilename() string {
return c.file.GetFilename()
}
func (c *fileStore) IsFileStore() bool {
return true
}
// ConvertToHostname converts a registry url which has http|https prepended
// to just a hostname.
// Copied from github.com/docker/docker/registry.ConvertToHostname to reduce dependencies.
func ConvertToHostname(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.TrimPrefix(url, "http://")
} else if strings.HasPrefix(url, "https://") {
stripped = strings.TrimPrefix(url, "https://")
}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
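
A usage sketch for the plain-text file store; note that Store persists to the backing config.json via Save, and the credentials shown are illustrative:

package main

import (
	"fmt"
	"os"

	"github.com/docker/cli/cli/config"
	"github.com/docker/cli/cli/config/credentials"
	"github.com/docker/cli/cli/config/types"
)

func main() {
	// *configfile.ConfigFile provides Save, GetAuthConfigs and GetFilename,
	// so it can back a file store directly.
	cfg := config.LoadDefaultConfigFile(os.Stderr)
	store := credentials.NewFileStore(cfg)

	_ = store.Store(types.AuthConfig{ // illustrative credentials; written to disk
		ServerAddress: "registry.example.com",
		Username:      "user",
		Password:      "secret",
	})

	ac, _ := store.Get("registry.example.com")
	fmt.Println("stored username:", ac.Username)

	// ConvertToHostname strips a leading http:// or https:// and any path.
	fmt.Println(credentials.ConvertToHostname("https://registry.example.com/v2/"))
}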


@ -0,0 +1,143 @@
package credentials
import (
"github.com/docker/cli/cli/config/types"
"github.com/docker/docker-credential-helpers/client"
"github.com/docker/docker-credential-helpers/credentials"
)
const (
remoteCredentialsPrefix = "docker-credential-"
tokenUsername = "<token>"
)
// nativeStore implements a credentials store
// using native keychain to keep credentials secure.
// It piggybacks into a file store to keep users' emails.
type nativeStore struct {
programFunc client.ProgramFunc
fileStore Store
}
// NewNativeStore creates a new native store that
// uses a remote helper program to manage credentials.
func NewNativeStore(file store, helperSuffix string) Store {
name := remoteCredentialsPrefix + helperSuffix
return &nativeStore{
programFunc: client.NewShellProgramFunc(name),
fileStore: NewFileStore(file),
}
}
// Erase removes the given credentials from the native store.
func (c *nativeStore) Erase(serverAddress string) error {
if err := client.Erase(c.programFunc, serverAddress); err != nil {
return err
}
// Fallback to plain text store to remove email
return c.fileStore.Erase(serverAddress)
}
// Get retrieves credentials for a specific server from the native store.
func (c *nativeStore) Get(serverAddress string) (types.AuthConfig, error) {
// load the user email if it exists, or fall back to an empty auth config.
auth, _ := c.fileStore.Get(serverAddress)
creds, err := c.getCredentialsFromStore(serverAddress)
if err != nil {
return auth, err
}
auth.Username = creds.Username
auth.IdentityToken = creds.IdentityToken
auth.Password = creds.Password
return auth, nil
}
// GetAll retrieves all the credentials from the native store.
func (c *nativeStore) GetAll() (map[string]types.AuthConfig, error) {
auths, err := c.listCredentialsInStore()
if err != nil {
return nil, err
}
// Emails are only stored in the file store.
// This call can be safely eliminated when emails are removed.
fileConfigs, _ := c.fileStore.GetAll()
authConfigs := make(map[string]types.AuthConfig)
for registry := range auths {
creds, err := c.getCredentialsFromStore(registry)
if err != nil {
return nil, err
}
ac := fileConfigs[registry] // might contain Email
ac.Username = creds.Username
ac.Password = creds.Password
ac.IdentityToken = creds.IdentityToken
authConfigs[registry] = ac
}
return authConfigs, nil
}
// Store saves the given credentials in the file store.
func (c *nativeStore) Store(authConfig types.AuthConfig) error {
if err := c.storeCredentialsInStore(authConfig); err != nil {
return err
}
authConfig.Username = ""
authConfig.Password = ""
authConfig.IdentityToken = ""
// Fallback to old credential in plain text to save only the email
return c.fileStore.Store(authConfig)
}
// storeCredentialsInStore executes the command to store the credentials in the native store.
func (c *nativeStore) storeCredentialsInStore(config types.AuthConfig) error {
creds := &credentials.Credentials{
ServerURL: config.ServerAddress,
Username: config.Username,
Secret: config.Password,
}
if config.IdentityToken != "" {
creds.Username = tokenUsername
creds.Secret = config.IdentityToken
}
return client.Store(c.programFunc, creds)
}
// getCredentialsFromStore executes the command to get the credentials from the native store.
func (c *nativeStore) getCredentialsFromStore(serverAddress string) (types.AuthConfig, error) {
var ret types.AuthConfig
creds, err := client.Get(c.programFunc, serverAddress)
if err != nil {
if credentials.IsErrCredentialsNotFound(err) {
// do not return an error if the credentials are not
// in the keychain. Let docker ask for new credentials.
return ret, nil
}
return ret, err
}
if creds.Username == tokenUsername {
ret.IdentityToken = creds.Secret
} else {
ret.Password = creds.Secret
ret.Username = creds.Username
}
ret.ServerAddress = serverAddress
return ret, nil
}
// listCredentialsInStore returns a listing of stored credentials as a map of
// URL -> username.
func (c *nativeStore) listCredentialsInStore() (map[string]string, error) {
return client.List(c.programFunc)
}
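
A sketch of using the native store; "pass" is an illustrative helper suffix and requires the docker-credential-pass binary on PATH:

package main

import (
	"fmt"
	"os"

	"github.com/docker/cli/cli/config"
	"github.com/docker/cli/cli/config/credentials"
)

func main() {
	cfg := config.LoadDefaultConfigFile(os.Stderr)

	// The native store shells out to docker-credential-pass for secrets and
	// keeps only non-secret fields (e.g. email) in the file store.
	store := credentials.NewNativeStore(cfg, "pass") // illustrative helper suffix

	ac, err := store.Get("https://index.docker.io/v1/")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("username:", ac.Username)
}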


@ -0,0 +1,22 @@
package types
// AuthConfig contains authorization information for connecting to a Registry
type AuthConfig struct {
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Auth string `json:"auth,omitempty"`
// Email is an optional value associated with the username.
// This field is deprecated and will be removed in a later
// version of docker.
Email string `json:"email,omitempty"`
ServerAddress string `json:"serveraddress,omitempty"`
// IdentityToken is used to authenticate the user and get
// an access token for the registry.
IdentityToken string `json:"identitytoken,omitempty"`
// RegistryToken is a bearer token to be sent to a registry
RegistryToken string `json:"registrytoken,omitempty"`
}
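
For reference, a sketch of the JSON shape this struct takes inside the "auths" section of config.json (the values are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/cli/cli/config/types"
)

func main() {
	auths := map[string]types.AuthConfig{
		"https://index.docker.io/v1/": {
			Username:      "user",   // illustrative
			Password:      "secret", // illustrative
			ServerAddress: "https://index.docker.io/v1/",
		},
	}
	// SaveToWriter above would blank these fields and emit a base64 "auth"
	// value instead; this sketch only shows the field-to-key mapping.
	out, _ := json.MarshalIndent(map[string]interface{}{"auths": auths}, "", "  ")
	fmt.Println(string(out))
}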


@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e
cd "$(dirname "$(readlink -f "${BASH_SOURCE[*]}")")/../.."
# see also ".mailmap" for how email addresses and names are deduplicated
{
cat <<-'EOH'
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `scripts/docs/generate-authors.sh`.
EOH
echo
git log --format='%aN <%aE>' | LC_ALL=C.UTF-8 sort -uf
} > AUTHORS


@ -0,0 +1,20 @@
Copyright (c) 2016 David Calavera
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@ -0,0 +1,121 @@
package client
import (
"bytes"
"encoding/json"
"fmt"
"strings"
"github.com/docker/docker-credential-helpers/credentials"
)
// isValidCredsMessage checks whether 'msg' contains an invalid-credentials error message.
// It returns nil if the message is free of invalid-credentials errors, or the matching error otherwise.
// Error values can be errCredentialsMissingServerURL or errCredentialsMissingUsername.
func isValidCredsMessage(msg string) error {
if credentials.IsCredentialsMissingServerURLMessage(msg) {
return credentials.NewErrCredentialsMissingServerURL()
}
if credentials.IsCredentialsMissingUsernameMessage(msg) {
return credentials.NewErrCredentialsMissingUsername()
}
return nil
}
// Store uses an external program to save credentials.
func Store(program ProgramFunc, creds *credentials.Credentials) error {
cmd := program("store")
buffer := new(bytes.Buffer)
if err := json.NewEncoder(buffer).Encode(creds); err != nil {
return err
}
cmd.Input(buffer)
out, err := cmd.Output()
if err != nil {
t := strings.TrimSpace(string(out))
if isValidErr := isValidCredsMessage(t); isValidErr != nil {
err = isValidErr
}
return fmt.Errorf("error storing credentials - err: %v, out: `%s`", err, t)
}
return nil
}
// Get executes an external program to get the credentials from a native store.
func Get(program ProgramFunc, serverURL string) (*credentials.Credentials, error) {
cmd := program("get")
cmd.Input(strings.NewReader(serverURL))
out, err := cmd.Output()
if err != nil {
t := strings.TrimSpace(string(out))
if credentials.IsErrCredentialsNotFoundMessage(t) {
return nil, credentials.NewErrCredentialsNotFound()
}
if isValidErr := isValidCredsMessage(t); isValidErr != nil {
err = isValidErr
}
return nil, fmt.Errorf("error getting credentials - err: %v, out: `%s`", err, t)
}
resp := &credentials.Credentials{
ServerURL: serverURL,
}
if err := json.NewDecoder(bytes.NewReader(out)).Decode(resp); err != nil {
return nil, err
}
return resp, nil
}
// Erase executes a program to remove the server credentials from the native store.
func Erase(program ProgramFunc, serverURL string) error {
cmd := program("erase")
cmd.Input(strings.NewReader(serverURL))
out, err := cmd.Output()
if err != nil {
t := strings.TrimSpace(string(out))
if isValidErr := isValidCredsMessage(t); isValidErr != nil {
err = isValidErr
}
return fmt.Errorf("error erasing credentials - err: %v, out: `%s`", err, t)
}
return nil
}
// List executes a program to list server credentials in the native store.
func List(program ProgramFunc) (map[string]string, error) {
cmd := program("list")
cmd.Input(strings.NewReader("unused"))
out, err := cmd.Output()
if err != nil {
t := strings.TrimSpace(string(out))
if isValidErr := isValidCredsMessage(t); isValidErr != nil {
err = isValidErr
}
return nil, fmt.Errorf("error listing credentials - err: %v, out: `%s`", err, t)
}
var resp map[string]string
if err = json.NewDecoder(bytes.NewReader(out)).Decode(&resp); err != nil {
return nil, err
}
return resp, nil
}
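
A sketch of driving an external helper through this client API; "docker-credential-pass" is an illustrative helper binary that must exist on PATH:

package main

import (
	"fmt"
	"os"

	"github.com/docker/docker-credential-helpers/client"
	"github.com/docker/docker-credential-helpers/credentials"
)

func main() {
	p := client.NewShellProgramFunc("docker-credential-pass") // illustrative helper

	if err := client.Store(p, &credentials.Credentials{
		ServerURL: "https://index.docker.io/v1/",
		Username:  "user",
		Secret:    "secret",
	}); err != nil {
		fmt.Fprintln(os.Stderr, "store failed:", err)
		return
	}

	creds, err := client.Get(p, "https://index.docker.io/v1/")
	if credentials.IsErrCredentialsNotFound(err) {
		fmt.Println("no credentials stored")
		return
	} else if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("username:", creds.Username)
}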


@ -0,0 +1,56 @@
package client
import (
"fmt"
"io"
"os"
"os/exec"
)
// Program is an interface to execute external programs.
type Program interface {
Output() ([]byte, error)
Input(in io.Reader)
}
// ProgramFunc is a type of function that initializes programs based on arguments.
type ProgramFunc func(args ...string) Program
// NewShellProgramFunc creates programs that are executed in a Shell.
func NewShellProgramFunc(name string) ProgramFunc {
return NewShellProgramFuncWithEnv(name, nil)
}
// NewShellProgramFuncWithEnv creates programs that are executed in a Shell with environment variables
func NewShellProgramFuncWithEnv(name string, env *map[string]string) ProgramFunc {
return func(args ...string) Program {
return &Shell{cmd: createProgramCmdRedirectErr(name, args, env)}
}
}
func createProgramCmdRedirectErr(commandName string, args []string, env *map[string]string) *exec.Cmd {
programCmd := exec.Command(commandName, args...)
programCmd.Env = os.Environ()
if env != nil {
for k, v := range *env {
programCmd.Env = append(programCmd.Env, fmt.Sprintf("%s=%s", k, v))
}
}
programCmd.Stderr = os.Stderr
return programCmd
}
// Shell invokes shell commands to talk with a remote credentials helper.
type Shell struct {
cmd *exec.Cmd
}
// Output returns responses from the remote credentials helper.
func (s *Shell) Output() ([]byte, error) {
return s.cmd.Output()
}
// Input sets the input to send to a remote credentials helper.
func (s *Shell) Input(in io.Reader) {
s.cmd.Stdin = in
}
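
Because Program and ProgramFunc are a plain interface and function type, a fake can replace the exec-based Shell in tests; the type below is hypothetical:

package main

import (
	"fmt"
	"io"
	"io/ioutil"

	"github.com/docker/docker-credential-helpers/client"
)

// fakeProgram is a hypothetical in-process Program that records its input and
// returns a canned response instead of exec'ing a docker-credential-* binary.
type fakeProgram struct {
	input  []byte
	output []byte
}

func (f *fakeProgram) Input(in io.Reader)      { f.input, _ = ioutil.ReadAll(in) }
func (f *fakeProgram) Output() ([]byte, error) { return f.output, nil }

func main() {
	canned := []byte(`{"ServerURL":"https://index.docker.io/v1/","Username":"user","Secret":"secret"}`)
	p := client.ProgramFunc(func(args ...string) client.Program {
		return &fakeProgram{output: canned}
	})

	creds, err := client.Get(p, "https://index.docker.io/v1/")
	if err != nil {
		panic(err)
	}
	fmt.Println(creds.Username) // "user"
}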


@ -0,0 +1,186 @@
package credentials
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"io"
"os"
"strings"
)
// Credentials holds the information shared between docker and the credentials store.
type Credentials struct {
ServerURL string
Username string
Secret string
}
// isValid checks the integrity of a Credentials object, ensuring that it lacks
// neither a server URL nor a username.
// It returns whether the credentials are valid and, if they are not, the matching error.
// Error values can be errCredentialsMissingServerURL or errCredentialsMissingUsername.
func (c *Credentials) isValid() (bool, error) {
if len(c.ServerURL) == 0 {
return false, NewErrCredentialsMissingServerURL()
}
if len(c.Username) == 0 {
return false, NewErrCredentialsMissingUsername()
}
return true, nil
}
// CredsLabel holds the way Docker credentials should be labeled in credentials stores that allow labelling.
// That label makes it possible to filter out non-Docker credentials at lookup/search time in the macOS keychain,
// the Windows credentials manager and Linux libsecret. The default value is "Docker Credentials".
var CredsLabel = "Docker Credentials"
// SetCredsLabel is a simple setter for CredsLabel
func SetCredsLabel(label string) {
CredsLabel = label
}
// Serve initializes the credentials helper and parses the action argument.
// This function is designed to be called from a command line interface.
// It uses os.Args[1] as the key for the action.
// It uses os.Stdin as input and os.Stdout as output.
// This function terminates the program with os.Exit(1) if there is an error.
func Serve(helper Helper) {
var err error
if len(os.Args) != 2 {
err = fmt.Errorf("Usage: %s <store|get|erase|list|version>", os.Args[0])
}
if err == nil {
err = HandleCommand(helper, os.Args[1], os.Stdin, os.Stdout)
}
if err != nil {
fmt.Fprintf(os.Stdout, "%v\n", err)
os.Exit(1)
}
}
// HandleCommand uses a helper and a key to run a credential action.
func HandleCommand(helper Helper, key string, in io.Reader, out io.Writer) error {
switch key {
case "store":
return Store(helper, in)
case "get":
return Get(helper, in, out)
case "erase":
return Erase(helper, in)
case "list":
return List(helper, out)
case "version":
return PrintVersion(out)
}
return fmt.Errorf("Unknown credential action `%s`", key)
}
// Store uses a helper and an input reader to save credentials.
// The reader must contain the JSON serialization of a Credentials struct.
func Store(helper Helper, reader io.Reader) error {
scanner := bufio.NewScanner(reader)
buffer := new(bytes.Buffer)
for scanner.Scan() {
buffer.Write(scanner.Bytes())
}
if err := scanner.Err(); err != nil && err != io.EOF {
return err
}
var creds Credentials
if err := json.NewDecoder(buffer).Decode(&creds); err != nil {
return err
}
if ok, err := creds.isValid(); !ok {
return err
}
return helper.Add(&creds)
}
// Get retrieves the credentials for a given server url.
// The reader must contain the server URL to search.
// The writer is used to write the JSON serialization of the credentials.
func Get(helper Helper, reader io.Reader, writer io.Writer) error {
scanner := bufio.NewScanner(reader)
buffer := new(bytes.Buffer)
for scanner.Scan() {
buffer.Write(scanner.Bytes())
}
if err := scanner.Err(); err != nil && err != io.EOF {
return err
}
serverURL := strings.TrimSpace(buffer.String())
if len(serverURL) == 0 {
return NewErrCredentialsMissingServerURL()
}
username, secret, err := helper.Get(serverURL)
if err != nil {
return err
}
resp := Credentials{
ServerURL: serverURL,
Username: username,
Secret: secret,
}
buffer.Reset()
if err := json.NewEncoder(buffer).Encode(resp); err != nil {
return err
}
fmt.Fprint(writer, buffer.String())
return nil
}
// Erase removes credentials from the store.
// The reader must contain the server URL to remove.
func Erase(helper Helper, reader io.Reader) error {
scanner := bufio.NewScanner(reader)
buffer := new(bytes.Buffer)
for scanner.Scan() {
buffer.Write(scanner.Bytes())
}
if err := scanner.Err(); err != nil && err != io.EOF {
return err
}
serverURL := strings.TrimSpace(buffer.String())
if len(serverURL) == 0 {
return NewErrCredentialsMissingServerURL()
}
return helper.Delete(serverURL)
}
// List returns all the serverURLs of keys in
// the OS store as a list of strings.
func List(helper Helper, writer io.Writer) error {
accts, err := helper.List()
if err != nil {
return err
}
return json.NewEncoder(writer).Encode(accts)
}
// PrintVersion outputs the current version.
func PrintVersion(writer io.Writer) error {
fmt.Fprintln(writer, Version)
return nil
}


@ -0,0 +1,102 @@
package credentials
const (
// ErrCredentialsNotFound standardizes the not found error, so every helper returns
// the same message and docker can handle it properly.
errCredentialsNotFoundMessage = "credentials not found in native keychain"
// ErrCredentialsMissingServerURL and ErrCredentialsMissingUsername standardize
// invalid credentials or credentials management operations
errCredentialsMissingServerURLMessage = "no credentials server URL"
errCredentialsMissingUsernameMessage = "no credentials username"
)
// errCredentialsNotFound represents an error
// raised when credentials are not in the store.
type errCredentialsNotFound struct{}
// Error returns the standard error message
// for when the credentials are not in the store.
func (errCredentialsNotFound) Error() string {
return errCredentialsNotFoundMessage
}
// NewErrCredentialsNotFound creates a new error
// for when the credentials are not in the store.
func NewErrCredentialsNotFound() error {
return errCredentialsNotFound{}
}
// IsErrCredentialsNotFound returns true if the error
// was caused by not having a set of credentials in a store.
func IsErrCredentialsNotFound(err error) bool {
_, ok := err.(errCredentialsNotFound)
return ok
}
// IsErrCredentialsNotFoundMessage returns true if the error
// was caused by not having a set of credentials in a store.
//
// This function helps to check messages returned by an
// external program via its standard output.
func IsErrCredentialsNotFoundMessage(err string) bool {
return err == errCredentialsNotFoundMessage
}
// errCredentialsMissingServerURL represents an error raised
// when the credentials object has no server URL or when no
// server URL is provided to a credentials operation requiring
// one.
type errCredentialsMissingServerURL struct{}
func (errCredentialsMissingServerURL) Error() string {
return errCredentialsMissingServerURLMessage
}
// errCredentialsMissingUsername represents an error raised
// when the credentials object has no username or when no
// username is provided to a credentials operation requiring
// one.
type errCredentialsMissingUsername struct{}
func (errCredentialsMissingUsername) Error() string {
return errCredentialsMissingUsernameMessage
}
// NewErrCredentialsMissingServerURL creates a new error for
// errCredentialsMissingServerURL.
func NewErrCredentialsMissingServerURL() error {
return errCredentialsMissingServerURL{}
}
// NewErrCredentialsMissingUsername creates a new error for
// errCredentialsMissingUsername.
func NewErrCredentialsMissingUsername() error {
return errCredentialsMissingUsername{}
}
// IsCredentialsMissingServerURL returns true if the error
// was an errCredentialsMissingServerURL.
func IsCredentialsMissingServerURL(err error) bool {
_, ok := err.(errCredentialsMissingServerURL)
return ok
}
// IsCredentialsMissingServerURLMessage checks for an
// errCredentialsMissingServerURL in the error message.
func IsCredentialsMissingServerURLMessage(err string) bool {
return err == errCredentialsMissingServerURLMessage
}
// IsCredentialsMissingUsername returns true if the error
// was an errCredentialsMissingUsername.
func IsCredentialsMissingUsername(err error) bool {
_, ok := err.(errCredentialsMissingUsername)
return ok
}
// IsCredentialsMissingUsernameMessage checks for an
// errCredentialsMissingUsername in the error message.
func IsCredentialsMissingUsernameMessage(err string) bool {
return err == errCredentialsMissingUsernameMessage
}
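
A short sketch of the two kinds of checks: typed sentinels for Go errors, and the *Message variants for raw helper output as used by the client package above:

package main

import (
	"fmt"
	"strings"

	"github.com/docker/docker-credential-helpers/credentials"
)

func main() {
	// Typed check, e.g. on an error returned by client.Get.
	err := credentials.NewErrCredentialsNotFound()
	fmt.Println(credentials.IsErrCredentialsNotFound(err)) // true

	// Message check, for text printed on stdout by an external helper binary.
	out := "credentials not found in native keychain\n"
	fmt.Println(credentials.IsErrCredentialsNotFoundMessage(strings.TrimSpace(out))) // true
}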


@ -0,0 +1,14 @@
package credentials
// Helper is the interface a credentials store helper must implement.
type Helper interface {
// Add appends credentials to the store.
Add(*Credentials) error
// Delete removes credentials from the store.
Delete(serverURL string) error
// Get retrieves credentials from the store.
// It returns username and secret as strings.
Get(serverURL string) (string, string, error)
// List returns the stored serverURLs and their associated usernames.
List() (map[string]string, error)
}
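
Putting the pieces together, a hypothetical helper binary only has to implement Helper and hand control to Serve; the in-memory backend here is purely illustrative (a real helper would talk to a keychain or vault):

package main

import (
	"github.com/docker/docker-credential-helpers/credentials"
)

// memHelper is a hypothetical Helper backed by a map; real helpers persist
// secrets in an OS keychain or an external secret store.
type memHelper struct {
	creds map[string][2]string // serverURL -> {username, secret}
}

func (h *memHelper) Add(c *credentials.Credentials) error {
	h.creds[c.ServerURL] = [2]string{c.Username, c.Secret}
	return nil
}

func (h *memHelper) Delete(serverURL string) error {
	delete(h.creds, serverURL)
	return nil
}

func (h *memHelper) Get(serverURL string) (string, string, error) {
	c, ok := h.creds[serverURL]
	if !ok {
		return "", "", credentials.NewErrCredentialsNotFound()
	}
	return c[0], c[1], nil
}

func (h *memHelper) List() (map[string]string, error) {
	out := make(map[string]string, len(h.creds))
	for url, c := range h.creds {
		out[url] = c[0]
	}
	return out, nil
}

func main() {
	// Built as docker-credential-<name>, this binary is then driven over
	// stdin/stdout by the client package via store|get|erase|list|version.
	credentials.Serve(&memHelper{creds: make(map[string][2]string)})
}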


@ -0,0 +1,4 @@
package credentials
// Version holds a string describing the current version
const Version = "0.6.3"

2080
vendor/github.com/docker/docker/AUTHORS generated vendored Normal file

File diff suppressed because it is too large

191
vendor/github.com/docker/docker/LICENSE generated vendored Normal file

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2013-2018 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

19
vendor/github.com/docker/docker/NOTICE generated vendored Normal file
View File

@ -0,0 +1,19 @@
Docker
Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/creack/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:
Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
For more information, please see https://www.bis.doc.gov
See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.

View File

@ -0,0 +1,22 @@
Copyright (c) 2013 Honza Pokorny
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Binary file not shown (image added, 23 KiB).

15
vendor/github.com/docker/docker/hack/generate-authors.sh generated vendored Executable file
View File

@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e
cd "$(dirname "$(readlink -f "$BASH_SOURCE")")/.."
# see also ".mailmap" for how email addresses and names are deduplicated
{
cat <<-'EOH'
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `hack/generate-authors.sh`.
EOH
echo
git log --format='%aN <%aE>' | LC_ALL=C.UTF-8 sort -uf
} > AUTHORS

View File

@ -0,0 +1 @@
../../../integration/testdata/https/ca.pem

View File

@ -0,0 +1 @@
../../../integration/testdata/https/client-cert.pem

View File

@ -0,0 +1 @@
../../../integration/testdata/https/client-key.pem

View File

@ -0,0 +1 @@
../../../integration/testdata/https/server-cert.pem

View File

@ -0,0 +1 @@
../../../integration/testdata/https/server-key.pem

View File

@ -0,0 +1,109 @@
package homedir // import "github.com/docker/docker/pkg/homedir"
import (
"errors"
"os"
"path/filepath"
"strings"
"github.com/docker/docker/pkg/idtools"
)
// GetStatic returns the home directory for the current user without calling
// os/user.Current(). This is useful for a statically linked binary on a
// glibc-based system, because a call to os/user.Current() in a static binary
// leads to a segfault due to a glibc issue that won't be fixed in the short term.
// (#29344, golang/go#13470, https://sourceware.org/bugzilla/show_bug.cgi?id=19341)
func GetStatic() (string, error) {
uid := os.Getuid()
usr, err := idtools.LookupUID(uid)
if err != nil {
return "", err
}
return usr.Home, nil
}
// GetRuntimeDir returns XDG_RUNTIME_DIR.
// XDG_RUNTIME_DIR is typically configured via pam_systemd.
// GetRuntimeDir returns non-nil error if XDG_RUNTIME_DIR is not set.
//
// See also https://standards.freedesktop.org/basedir-spec/latest/ar01s03.html
func GetRuntimeDir() (string, error) {
if xdgRuntimeDir := os.Getenv("XDG_RUNTIME_DIR"); xdgRuntimeDir != "" {
return xdgRuntimeDir, nil
}
return "", errors.New("could not get XDG_RUNTIME_DIR")
}
// StickRuntimeDirContents sets the sticky bit on files that are under
// XDG_RUNTIME_DIR, so that the files won't be periodically removed by the system.
//
// StickRuntimeDirContents returns the slice of files whose sticky bit was set.
// StickRuntimeDirContents returns a nil error if XDG_RUNTIME_DIR is not set.
//
// See also https://standards.freedesktop.org/basedir-spec/latest/ar01s03.html
func StickRuntimeDirContents(files []string) ([]string, error) {
runtimeDir, err := GetRuntimeDir()
if err != nil {
// ignore error if runtimeDir is empty
return nil, nil
}
runtimeDir, err = filepath.Abs(runtimeDir)
if err != nil {
return nil, err
}
var sticked []string
for _, f := range files {
f, err = filepath.Abs(f)
if err != nil {
return sticked, err
}
if strings.HasPrefix(f, runtimeDir+"/") {
if err = stick(f); err != nil {
return sticked, err
}
sticked = append(sticked, f)
}
}
return sticked, nil
}
func stick(f string) error {
st, err := os.Stat(f)
if err != nil {
return err
}
m := st.Mode()
m |= os.ModeSticky
return os.Chmod(f, m)
}
// GetDataHome returns XDG_DATA_HOME.
// GetDataHome returns $HOME/.local/share and nil error if XDG_DATA_HOME is not set.
//
// See also https://standards.freedesktop.org/basedir-spec/latest/ar01s03.html
func GetDataHome() (string, error) {
if xdgDataHome := os.Getenv("XDG_DATA_HOME"); xdgDataHome != "" {
return xdgDataHome, nil
}
home := os.Getenv("HOME")
if home == "" {
return "", errors.New("could not get either XDG_DATA_HOME or HOME")
}
return filepath.Join(home, ".local", "share"), nil
}
// GetConfigHome returns XDG_CONFIG_HOME.
// GetConfigHome returns $HOME/.config and nil error if XDG_CONFIG_HOME is not set.
//
// See also https://standards.freedesktop.org/basedir-spec/latest/ar01s03.html
func GetConfigHome() (string, error) {
if xdgConfigHome := os.Getenv("XDG_CONFIG_HOME"); xdgConfigHome != "" {
return xdgConfigHome, nil
}
home := os.Getenv("HOME")
if home == "" {
return "", errors.New("could not get either XDG_CONFIG_HOME or HOME")
}
return filepath.Join(home, ".config"), nil
}

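A minimal usage sketch for the XDG helpers above (illustrative only, not part of the vendored diff; the "faasd" directory name is an assumption for the example):

package main

import (
    "fmt"
    "log"
    "path/filepath"

    "github.com/docker/docker/pkg/homedir"
)

func main() {
    // GetConfigHome falls back to $HOME/.config when XDG_CONFIG_HOME is unset,
    // and only errors when HOME is unset as well.
    cfgHome, err := homedir.GetConfigHome()
    if err != nil {
        log.Fatal(err)
    }
    // "faasd" is a hypothetical application directory used for the example.
    fmt.Println(filepath.Join(cfgHome, "faasd", "config.json"))

    // GetRuntimeDir returns an error when XDG_RUNTIME_DIR is not set,
    // so callers are expected to provide their own fallback.
    if runtimeDir, err := homedir.GetRuntimeDir(); err == nil {
        fmt.Println("runtime dir:", runtimeDir)
    }
}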
View File

@ -0,0 +1,33 @@
// +build !linux
package homedir // import "github.com/docker/docker/pkg/homedir"
import (
"errors"
)
// GetStatic is not needed for non-linux systems.
// (Precisely, it is needed only for glibc-based linux systems.)
func GetStatic() (string, error) {
return "", errors.New("homedir.GetStatic() is not supported on this system")
}
// GetRuntimeDir is unsupported on non-linux system.
func GetRuntimeDir() (string, error) {
return "", errors.New("homedir.GetRuntimeDir() is not supported on this system")
}
// StickRuntimeDirContents is unsupported on non-linux system.
func StickRuntimeDirContents(files []string) ([]string, error) {
return nil, errors.New("homedir.StickRuntimeDirContents() is not supported on this system")
}
// GetDataHome is unsupported on non-linux system.
func GetDataHome() (string, error) {
return "", errors.New("homedir.GetDataHome() is not supported on this system")
}
// GetConfigHome is unsupported on non-linux system.
func GetConfigHome() (string, error) {
return "", errors.New("homedir.GetConfigHome() is not supported on this system")
}

View File

@ -0,0 +1,34 @@
// +build !windows
package homedir // import "github.com/docker/docker/pkg/homedir"
import (
"os"
"github.com/opencontainers/runc/libcontainer/user"
)
// Key returns the env var name for the user's home dir based on
// the platform being run on
func Key() string {
return "HOME"
}
// Get returns the home directory of the current user with the help of
// environment variables depending on the target operating system.
// Returned path should be used with "path/filepath" to form new paths.
func Get() string {
home := os.Getenv(Key())
if home == "" {
if u, err := user.CurrentUser(); err == nil {
return u.Home
}
}
return home
}
// GetShortcutString returns the string that is the shortcut to the user's home
// directory in the native shell of the platform being run on.
func GetShortcutString() string {
return "~"
}

View File

@ -0,0 +1,24 @@
package homedir // import "github.com/docker/docker/pkg/homedir"
import (
"os"
)
// Key returns the env var name for the user's home dir based on
// the platform being run on
func Key() string {
return "USERPROFILE"
}
// Get returns the home directory of the current user with the help of
// environment variables depending on the target operating system.
// Returned path should be used with "path/filepath" to form new paths.
func Get() string {
return os.Getenv(Key())
}
// GetShortcutString returns the string that is the shortcut to the user's home
// directory in the native shell of the platform being run on.
func GetShortcutString() string {
return "%USERPROFILE%" // be careful while using in format functions
}

264
vendor/github.com/docker/docker/pkg/idtools/idtools.go generated vendored Normal file
View File

@ -0,0 +1,264 @@
package idtools // import "github.com/docker/docker/pkg/idtools"
import (
"bufio"
"fmt"
"os"
"strconv"
"strings"
)
// IDMap contains a single entry for user namespace range remapping. An array
// of IDMap entries represents the structure that will be provided to the Linux
// kernel for creating a user namespace.
type IDMap struct {
ContainerID int `json:"container_id"`
HostID int `json:"host_id"`
Size int `json:"size"`
}
type subIDRange struct {
Start int
Length int
}
type ranges []subIDRange
func (e ranges) Len() int { return len(e) }
func (e ranges) Swap(i, j int) { e[i], e[j] = e[j], e[i] }
func (e ranges) Less(i, j int) bool { return e[i].Start < e[j].Start }
const (
subuidFileName = "/etc/subuid"
subgidFileName = "/etc/subgid"
)
// MkdirAllAndChown creates a directory (including any missing parents along the path) and then modifies
// ownership to the requested uid/gid. If the directory already exists, this
// function will still change ownership to the requested uid/gid pair.
func MkdirAllAndChown(path string, mode os.FileMode, owner Identity) error {
return mkdirAs(path, mode, owner, true, true)
}
// MkdirAndChown creates a directory and then modifies ownership to the requested uid/gid.
// If the directory already exists, this function still changes ownership.
// Note that unlike os.Mkdir(), this function does not return IsExist error
// in case path already exists.
func MkdirAndChown(path string, mode os.FileMode, owner Identity) error {
return mkdirAs(path, mode, owner, false, true)
}
// MkdirAllAndChownNew creates a directory (including any missing parents along the path) and then modifies
// ownership ONLY of newly created directories to the requested uid/gid. If the
// directories along the path exist, no change of ownership will be performed
func MkdirAllAndChownNew(path string, mode os.FileMode, owner Identity) error {
return mkdirAs(path, mode, owner, true, false)
}
// GetRootUIDGID retrieves the remapped root uid/gid pair from the set of maps.
// If the maps are empty, then the root uid/gid will default to "real" 0/0
func GetRootUIDGID(uidMap, gidMap []IDMap) (int, int, error) {
uid, err := toHost(0, uidMap)
if err != nil {
return -1, -1, err
}
gid, err := toHost(0, gidMap)
if err != nil {
return -1, -1, err
}
return uid, gid, nil
}
// toContainer takes an id mapping, and uses it to translate a
// host ID to the remapped ID. If no map is provided, then the translation
// assumes a 1-to-1 mapping and returns the passed in id
func toContainer(hostID int, idMap []IDMap) (int, error) {
if idMap == nil {
return hostID, nil
}
for _, m := range idMap {
if (hostID >= m.HostID) && (hostID <= (m.HostID + m.Size - 1)) {
contID := m.ContainerID + (hostID - m.HostID)
return contID, nil
}
}
return -1, fmt.Errorf("Host ID %d cannot be mapped to a container ID", hostID)
}
// toHost takes an id mapping and a remapped ID, and translates the
// ID to the mapped host ID. If no map is provided, then the translation
// assumes a 1-to-1 mapping and returns the passed in id
func toHost(contID int, idMap []IDMap) (int, error) {
if idMap == nil {
return contID, nil
}
for _, m := range idMap {
if (contID >= m.ContainerID) && (contID <= (m.ContainerID + m.Size - 1)) {
hostID := m.HostID + (contID - m.ContainerID)
return hostID, nil
}
}
return -1, fmt.Errorf("Container ID %d cannot be mapped to a host ID", contID)
}
// Identity is either a UID and GID pair or a SID (but not both)
type Identity struct {
UID int
GID int
SID string
}
// IdentityMapping contains a mappings of UIDs and GIDs
type IdentityMapping struct {
uids []IDMap
gids []IDMap
}
// NewIdentityMapping takes a requested user and group name and
// using the data from /etc/sub{uid,gid} ranges, creates the
// proper uid and gid remapping ranges for that user/group pair
func NewIdentityMapping(username, groupname string) (*IdentityMapping, error) {
subuidRanges, err := parseSubuid(username)
if err != nil {
return nil, err
}
subgidRanges, err := parseSubgid(groupname)
if err != nil {
return nil, err
}
if len(subuidRanges) == 0 {
return nil, fmt.Errorf("No subuid ranges found for user %q", username)
}
if len(subgidRanges) == 0 {
return nil, fmt.Errorf("No subgid ranges found for group %q", groupname)
}
return &IdentityMapping{
uids: createIDMap(subuidRanges),
gids: createIDMap(subgidRanges),
}, nil
}
// NewIDMappingsFromMaps creates a new mapping from two slices
// Deprecated: this is a temporary shim while transitioning to IDMapping
func NewIDMappingsFromMaps(uids []IDMap, gids []IDMap) *IdentityMapping {
return &IdentityMapping{uids: uids, gids: gids}
}
// RootPair returns a uid and gid pair for the root user. The error is ignored
// because a root user always exists, and the defaults are correct when the uid
// and gid maps are empty.
func (i *IdentityMapping) RootPair() Identity {
uid, gid, _ := GetRootUIDGID(i.uids, i.gids)
return Identity{UID: uid, GID: gid}
}
// ToHost returns the host UID and GID for the container uid, gid.
// Remapping is only performed if the ids aren't already the remapped root ids
func (i *IdentityMapping) ToHost(pair Identity) (Identity, error) {
var err error
target := i.RootPair()
if pair.UID != target.UID {
target.UID, err = toHost(pair.UID, i.uids)
if err != nil {
return target, err
}
}
if pair.GID != target.GID {
target.GID, err = toHost(pair.GID, i.gids)
}
return target, err
}
// ToContainer returns the container UID and GID for the host uid and gid
func (i *IdentityMapping) ToContainer(pair Identity) (int, int, error) {
uid, err := toContainer(pair.UID, i.uids)
if err != nil {
return -1, -1, err
}
gid, err := toContainer(pair.GID, i.gids)
return uid, gid, err
}
// Empty returns true if there are no id mappings
func (i *IdentityMapping) Empty() bool {
return len(i.uids) == 0 && len(i.gids) == 0
}
// UIDs return the UID mapping
// TODO: remove this once everything has been refactored to use pairs
func (i *IdentityMapping) UIDs() []IDMap {
return i.uids
}
// GIDs return the UID mapping
// TODO: remove this once everything has been refactored to use pairs
func (i *IdentityMapping) GIDs() []IDMap {
return i.gids
}
func createIDMap(subidRanges ranges) []IDMap {
idMap := []IDMap{}
containerID := 0
for _, idrange := range subidRanges {
idMap = append(idMap, IDMap{
ContainerID: containerID,
HostID: idrange.Start,
Size: idrange.Length,
})
containerID = containerID + idrange.Length
}
return idMap
}
func parseSubuid(username string) (ranges, error) {
return parseSubidFile(subuidFileName, username)
}
func parseSubgid(username string) (ranges, error) {
return parseSubidFile(subgidFileName, username)
}
// parseSubidFile will read the appropriate file (/etc/subuid or /etc/subgid)
// and return all found ranges for a specified username. If the special value
// "ALL" is supplied for username, then all ranges in the file will be returned
func parseSubidFile(path, username string) (ranges, error) {
var rangeList ranges
subidFile, err := os.Open(path)
if err != nil {
return rangeList, err
}
defer subidFile.Close()
s := bufio.NewScanner(subidFile)
for s.Scan() {
if err := s.Err(); err != nil {
return rangeList, err
}
text := strings.TrimSpace(s.Text())
if text == "" || strings.HasPrefix(text, "#") {
continue
}
parts := strings.Split(text, ":")
if len(parts) != 3 {
return rangeList, fmt.Errorf("Cannot parse subuid/gid information: Format not correct for %s file", path)
}
if parts[0] == username || username == "ALL" {
startid, err := strconv.Atoi(parts[1])
if err != nil {
return rangeList, fmt.Errorf("String to int conversion failed during subuid/gid parsing of %s: %v", path, err)
}
length, err := strconv.Atoi(parts[2])
if err != nil {
return rangeList, fmt.Errorf("String to int conversion failed during subuid/gid parsing of %s: %v", path, err)
}
rangeList = append(rangeList, subIDRange{startid, length})
}
}
return rangeList, nil
}

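A rough sketch of how the ID-mapping helpers above translate IDs (illustrative only, not part of the vendored diff; the 100000/65536 range is an assumption):

package main

import (
    "fmt"

    "github.com/docker/docker/pkg/idtools"
)

func main() {
    // One remapped range: container IDs 0..65535 map to host IDs 100000..165535.
    uidMap := []idtools.IDMap{{ContainerID: 0, HostID: 100000, Size: 65536}}
    gidMap := []idtools.IDMap{{ContainerID: 0, HostID: 100000, Size: 65536}}

    mapping := idtools.NewIDMappingsFromMaps(uidMap, gidMap)

    // RootPair resolves container root (0/0) to its host identity: 100000/100000.
    root := mapping.RootPair()
    fmt.Println(root.UID, root.GID)

    // ToContainer goes the other way: host UID 100123 maps back to container UID 123.
    uid, gid, err := mapping.ToContainer(idtools.Identity{UID: 100123, GID: 100123})
    fmt.Println(uid, gid, err)
}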
View File

@ -0,0 +1,231 @@
// +build !windows
package idtools // import "github.com/docker/docker/pkg/idtools"
import (
"bytes"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"sync"
"syscall"
"github.com/docker/docker/pkg/system"
"github.com/opencontainers/runc/libcontainer/user"
)
var (
entOnce sync.Once
getentCmd string
)
func mkdirAs(path string, mode os.FileMode, owner Identity, mkAll, chownExisting bool) error {
// make an array containing the original path asked for, plus (for mkAll == true)
// all path components leading up to the complete path that don't exist before we MkdirAll
// so that we can chown all of them properly at the end. If chownExisting is false, we won't
// chown the full directory path if it exists
var paths []string
stat, err := system.Stat(path)
if err == nil {
if !stat.IsDir() {
return &os.PathError{Op: "mkdir", Path: path, Err: syscall.ENOTDIR}
}
if !chownExisting {
return nil
}
// short-circuit--we were called with an existing directory and chown was requested
return lazyChown(path, owner.UID, owner.GID, stat)
}
if os.IsNotExist(err) {
paths = []string{path}
}
if mkAll {
// walk back to "/" looking for directories which do not exist
// and add them to the paths array for chown after creation
dirPath := path
for {
dirPath = filepath.Dir(dirPath)
if dirPath == "/" {
break
}
if _, err := os.Stat(dirPath); err != nil && os.IsNotExist(err) {
paths = append(paths, dirPath)
}
}
if err := system.MkdirAll(path, mode, ""); err != nil {
return err
}
} else {
if err := os.Mkdir(path, mode); err != nil && !os.IsExist(err) {
return err
}
}
// even if it existed, we will chown the requested path + any subpaths that
// didn't exist when we called MkdirAll
for _, pathComponent := range paths {
if err := lazyChown(pathComponent, owner.UID, owner.GID, nil); err != nil {
return err
}
}
return nil
}
// CanAccess takes a valid (existing) directory and a uid, gid pair and determines
// if that uid, gid pair has access (execute bit) to the directory
func CanAccess(path string, pair Identity) bool {
statInfo, err := system.Stat(path)
if err != nil {
return false
}
fileMode := os.FileMode(statInfo.Mode())
permBits := fileMode.Perm()
return accessible(statInfo.UID() == uint32(pair.UID),
statInfo.GID() == uint32(pair.GID), permBits)
}
func accessible(isOwner, isGroup bool, perms os.FileMode) bool {
if isOwner && (perms&0100 == 0100) {
return true
}
if isGroup && (perms&0010 == 0010) {
return true
}
if perms&0001 == 0001 {
return true
}
return false
}
// LookupUser uses traditional local system files lookup (from libcontainer/user) on a username,
// followed by a call to `getent` for supporting host configured non-files passwd and group dbs
func LookupUser(username string) (user.User, error) {
// first try a local system files lookup using existing capabilities
usr, err := user.LookupUser(username)
if err == nil {
return usr, nil
}
// local files lookup failed; attempt to call `getent` to query configured passwd dbs
usr, err = getentUser(fmt.Sprintf("%s %s", "passwd", username))
if err != nil {
return user.User{}, err
}
return usr, nil
}
// LookupUID uses traditional local system files lookup (from libcontainer/user) on a uid,
// followed by a call to `getent` for supporting host configured non-files passwd and group dbs
func LookupUID(uid int) (user.User, error) {
// first try a local system files lookup using existing capabilities
usr, err := user.LookupUid(uid)
if err == nil {
return usr, nil
}
// local files lookup failed; attempt to call `getent` to query configured passwd dbs
return getentUser(fmt.Sprintf("%s %d", "passwd", uid))
}
func getentUser(args string) (user.User, error) {
reader, err := callGetent(args)
if err != nil {
return user.User{}, err
}
users, err := user.ParsePasswd(reader)
if err != nil {
return user.User{}, err
}
if len(users) == 0 {
return user.User{}, fmt.Errorf("getent failed to find passwd entry for %q", strings.Split(args, " ")[1])
}
return users[0], nil
}
// LookupGroup uses traditional local system files lookup (from libcontainer/user) on a group name,
// followed by a call to `getent` for supporting host configured non-files passwd and group dbs
func LookupGroup(groupname string) (user.Group, error) {
// first try a local system files lookup using existing capabilities
group, err := user.LookupGroup(groupname)
if err == nil {
return group, nil
}
// local files lookup failed; attempt to call `getent` to query configured group dbs
return getentGroup(fmt.Sprintf("%s %s", "group", groupname))
}
// LookupGID uses traditional local system files lookup (from libcontainer/user) on a group ID,
// followed by a call to `getent` for supporting host configured non-files passwd and group dbs
func LookupGID(gid int) (user.Group, error) {
// first try a local system files lookup using existing capabilities
group, err := user.LookupGid(gid)
if err == nil {
return group, nil
}
// local files lookup failed; attempt to call `getent` to query configured group dbs
return getentGroup(fmt.Sprintf("%s %d", "group", gid))
}
func getentGroup(args string) (user.Group, error) {
reader, err := callGetent(args)
if err != nil {
return user.Group{}, err
}
groups, err := user.ParseGroup(reader)
if err != nil {
return user.Group{}, err
}
if len(groups) == 0 {
return user.Group{}, fmt.Errorf("getent failed to find groups entry for %q", strings.Split(args, " ")[1])
}
return groups[0], nil
}
func callGetent(args string) (io.Reader, error) {
entOnce.Do(func() { getentCmd, _ = resolveBinary("getent") })
// if no `getent` command on host, can't do anything else
if getentCmd == "" {
return nil, fmt.Errorf("")
}
out, err := execCmd(getentCmd, args)
if err != nil {
exitCode, errC := system.GetExitCode(err)
if errC != nil {
return nil, err
}
switch exitCode {
case 1:
return nil, fmt.Errorf("getent reported invalid parameters/database unknown")
case 2:
terms := strings.Split(args, " ")
return nil, fmt.Errorf("getent unable to find entry %q in %s database", terms[1], terms[0])
case 3:
return nil, fmt.Errorf("getent database doesn't support enumeration")
default:
return nil, err
}
}
return bytes.NewReader(out), nil
}
// lazyChown performs a chown only if the uid/gid don't match what's requested
// Normally a Chown is a no-op if uid/gid match, but in some cases this can still cause an error, e.g. if the
// dir is on an NFS share, so don't call chown unless we absolutely must.
func lazyChown(p string, uid, gid int, stat *system.StatT) error {
if stat == nil {
var err error
stat, err = system.Stat(p)
if err != nil {
return err
}
}
if stat.UID() == uint32(uid) && stat.GID() == uint32(gid) {
return nil
}
return os.Chown(p, uid, gid)
}

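A short sketch combining the lookup and access helpers above (illustrative only; the path /var/lib/faasd is an assumption for the example):

package main

import (
    "fmt"
    "log"

    "github.com/docker/docker/pkg/idtools"
)

func main() {
    // LookupUser tries libcontainer's files-based lookup first and falls
    // back to `getent passwd` for NSS-configured databases.
    usr, err := idtools.LookupUser("nobody")
    if err != nil {
        log.Fatal(err)
    }

    // CanAccess checks whether that uid/gid pair has the execute bit on the
    // directory; the path here is only an example.
    ok := idtools.CanAccess("/var/lib/faasd", idtools.Identity{UID: usr.Uid, GID: usr.Gid})
    fmt.Println("accessible:", ok)
}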
View File

@ -0,0 +1,25 @@
package idtools // import "github.com/docker/docker/pkg/idtools"
import (
"os"
"github.com/docker/docker/pkg/system"
)
// This is currently a wrapper around MkdirAll; since permissions aren't
// currently set through this path, the identity isn't utilized.
// Ownership is handled elsewhere, but in the future could be supported here
// too.
func mkdirAs(path string, mode os.FileMode, owner Identity, mkAll, chownExisting bool) error {
if err := system.MkdirAll(path, mode, ""); err != nil {
return err
}
return nil
}
// CanAccess takes a valid (existing) directory and a uid, gid pair and determines
// if that uid, gid pair has access (execute bit) to the directory
// Windows does not require/support this function, so always return true
func CanAccess(path string, identity Identity) bool {
return true
}

View File

@ -0,0 +1,164 @@
package idtools // import "github.com/docker/docker/pkg/idtools"
import (
"fmt"
"regexp"
"sort"
"strconv"
"strings"
"sync"
)
// add a user and/or group to Linux /etc/passwd, /etc/group using standard
// Linux distribution commands:
// adduser --system --shell /bin/false --disabled-login --disabled-password --no-create-home --group <username>
// useradd -r -s /bin/false <username>
var (
once sync.Once
userCommand string
cmdTemplates = map[string]string{
"adduser": "--system --shell /bin/false --no-create-home --disabled-login --disabled-password --group %s",
"useradd": "-r -s /bin/false %s",
"usermod": "-%s %d-%d %s",
}
idOutRegexp = regexp.MustCompile(`uid=([0-9]+).*gid=([0-9]+)`)
// default length for a UID/GID subordinate range
defaultRangeLen = 65536
defaultRangeStart = 100000
userMod = "usermod"
)
// AddNamespaceRangesUser takes a username and uses the standard system
// utility to create a system user/group pair used to hold the
// /etc/sub{uid,gid} ranges which will be used for user namespace
// mapping ranges in containers.
func AddNamespaceRangesUser(name string) (int, int, error) {
if err := addUser(name); err != nil {
return -1, -1, fmt.Errorf("Error adding user %q: %v", name, err)
}
// Query the system for the created uid and gid pair
out, err := execCmd("id", name)
if err != nil {
return -1, -1, fmt.Errorf("Error trying to find uid/gid for new user %q: %v", name, err)
}
matches := idOutRegexp.FindStringSubmatch(strings.TrimSpace(string(out)))
if len(matches) != 3 {
return -1, -1, fmt.Errorf("Can't find uid, gid from `id` output: %q", string(out))
}
uid, err := strconv.Atoi(matches[1])
if err != nil {
return -1, -1, fmt.Errorf("Can't convert found uid (%s) to int: %v", matches[1], err)
}
gid, err := strconv.Atoi(matches[2])
if err != nil {
return -1, -1, fmt.Errorf("Can't convert found gid (%s) to int: %v", matches[2], err)
}
// Now we need to create the subuid/subgid ranges for our new user/group (system users
// do not get auto-created ranges in subuid/subgid)
if err := createSubordinateRanges(name); err != nil {
return -1, -1, fmt.Errorf("Couldn't create subordinate ID ranges: %v", err)
}
return uid, gid, nil
}
func addUser(userName string) error {
once.Do(func() {
// set up which commands are used for adding users/groups dependent on distro
if _, err := resolveBinary("adduser"); err == nil {
userCommand = "adduser"
} else if _, err := resolveBinary("useradd"); err == nil {
userCommand = "useradd"
}
})
if userCommand == "" {
return fmt.Errorf("Cannot add user; no useradd/adduser binary found")
}
args := fmt.Sprintf(cmdTemplates[userCommand], userName)
out, err := execCmd(userCommand, args)
if err != nil {
return fmt.Errorf("Failed to add user with error: %v; output: %q", err, string(out))
}
return nil
}
func createSubordinateRanges(name string) error {
// first, we should verify that ranges weren't automatically created
// by the distro tooling
ranges, err := parseSubuid(name)
if err != nil {
return fmt.Errorf("Error while looking for subuid ranges for user %q: %v", name, err)
}
if len(ranges) == 0 {
// no UID ranges; let's create one
startID, err := findNextUIDRange()
if err != nil {
return fmt.Errorf("Can't find available subuid range: %v", err)
}
out, err := execCmd(userMod, fmt.Sprintf(cmdTemplates[userMod], "v", startID, startID+defaultRangeLen-1, name))
if err != nil {
return fmt.Errorf("Unable to add subuid range to user: %q; output: %s, err: %v", name, out, err)
}
}
ranges, err = parseSubgid(name)
if err != nil {
return fmt.Errorf("Error while looking for subgid ranges for user %q: %v", name, err)
}
if len(ranges) == 0 {
// no GID ranges; let's create one
startID, err := findNextGIDRange()
if err != nil {
return fmt.Errorf("Can't find available subgid range: %v", err)
}
out, err := execCmd(userMod, fmt.Sprintf(cmdTemplates[userMod], "w", startID, startID+defaultRangeLen-1, name))
if err != nil {
return fmt.Errorf("Unable to add subgid range to user: %q; output: %s, err: %v", name, out, err)
}
}
return nil
}
func findNextUIDRange() (int, error) {
ranges, err := parseSubuid("ALL")
if err != nil {
return -1, fmt.Errorf("Couldn't parse all ranges in /etc/subuid file: %v", err)
}
sort.Sort(ranges)
return findNextRangeStart(ranges)
}
func findNextGIDRange() (int, error) {
ranges, err := parseSubgid("ALL")
if err != nil {
return -1, fmt.Errorf("Couldn't parse all ranges in /etc/subgid file: %v", err)
}
sort.Sort(ranges)
return findNextRangeStart(ranges)
}
func findNextRangeStart(rangeList ranges) (int, error) {
startID := defaultRangeStart
for _, arange := range rangeList {
if wouldOverlap(arange, startID) {
startID = arange.Start + arange.Length
}
}
return startID, nil
}
func wouldOverlap(arange subIDRange, ID int) bool {
low := ID
high := ID + defaultRangeLen
if (low >= arange.Start && low <= arange.Start+arange.Length) ||
(high <= arange.Start+arange.Length && high >= arange.Start) {
return true
}
return false
}

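A simplified walk-through of the range-selection arithmetic in findNextRangeStart/wouldOverlap above (illustrative only; the existing range is made up, and only the low-end overlap check is shown):

package main

import "fmt"

func main() {
    const defaultRangeStart = 100000
    const defaultRangeLen = 65536

    // Pretend /etc/subuid already holds one range starting at 100000.
    existing := []struct{ start, length int }{{100000, 65536}}

    // Slide the candidate start past any range it would collide with
    // (the real wouldOverlap also checks the high end of the candidate).
    next := defaultRangeStart
    for _, r := range existing {
        if next >= r.start && next <= r.start+r.length {
            next = r.start + r.length
        }
    }

    // createSubordinateRanges would then call usermod with this argument.
    fmt.Printf("-v %d-%d\n", next, next+defaultRangeLen-1) // prints -v 165536-231071
}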
View File

@ -0,0 +1,12 @@
// +build !linux
package idtools // import "github.com/docker/docker/pkg/idtools"
import "fmt"
// AddNamespaceRangesUser takes a name and finds an unused uid, gid pair
// and calls the appropriate helper function to add the group and then
// the user to the group in /etc/group and /etc/passwd respectively.
func AddNamespaceRangesUser(name string) (int, int, error) {
return -1, -1, fmt.Errorf("No support for adding users or groups on this OS")
}

View File

@ -0,0 +1,32 @@
// +build !windows
package idtools // import "github.com/docker/docker/pkg/idtools"
import (
"fmt"
"os/exec"
"path/filepath"
"strings"
)
func resolveBinary(binname string) (string, error) {
binaryPath, err := exec.LookPath(binname)
if err != nil {
return "", err
}
resolvedPath, err := filepath.EvalSymlinks(binaryPath)
if err != nil {
return "", err
}
// only return no error if the final resolved binary basename
// matches what was searched for
if filepath.Base(resolvedPath) == binname {
return resolvedPath, nil
}
return "", fmt.Errorf("Binary %q does not resolve to a binary of that name in $PATH (%q)", binname, resolvedPath)
}
func execCmd(cmd, args string) ([]byte, error) {
execCmd := exec.Command(cmd, strings.Split(args, " ")...)
return execCmd.CombinedOutput()
}

137
vendor/github.com/docker/docker/pkg/mount/flags.go generated vendored Normal file
View File

@ -0,0 +1,137 @@
package mount // import "github.com/docker/docker/pkg/mount"
import (
"fmt"
"strings"
)
var flags = map[string]struct {
clear bool
flag int
}{
"defaults": {false, 0},
"ro": {false, RDONLY},
"rw": {true, RDONLY},
"suid": {true, NOSUID},
"nosuid": {false, NOSUID},
"dev": {true, NODEV},
"nodev": {false, NODEV},
"exec": {true, NOEXEC},
"noexec": {false, NOEXEC},
"sync": {false, SYNCHRONOUS},
"async": {true, SYNCHRONOUS},
"dirsync": {false, DIRSYNC},
"remount": {false, REMOUNT},
"mand": {false, MANDLOCK},
"nomand": {true, MANDLOCK},
"atime": {true, NOATIME},
"noatime": {false, NOATIME},
"diratime": {true, NODIRATIME},
"nodiratime": {false, NODIRATIME},
"bind": {false, BIND},
"rbind": {false, RBIND},
"unbindable": {false, UNBINDABLE},
"runbindable": {false, RUNBINDABLE},
"private": {false, PRIVATE},
"rprivate": {false, RPRIVATE},
"shared": {false, SHARED},
"rshared": {false, RSHARED},
"slave": {false, SLAVE},
"rslave": {false, RSLAVE},
"relatime": {false, RELATIME},
"norelatime": {true, RELATIME},
"strictatime": {false, STRICTATIME},
"nostrictatime": {true, STRICTATIME},
}
var validFlags = map[string]bool{
"": true,
"size": true,
"mode": true,
"uid": true,
"gid": true,
"nr_inodes": true,
"nr_blocks": true,
"mpol": true,
}
var propagationFlags = map[string]bool{
"bind": true,
"rbind": true,
"unbindable": true,
"runbindable": true,
"private": true,
"rprivate": true,
"shared": true,
"rshared": true,
"slave": true,
"rslave": true,
}
// MergeTmpfsOptions merges mount options to make sure there are no duplicates.
func MergeTmpfsOptions(options []string) ([]string, error) {
// We use collisions maps to remove duplicates.
// For flag, the key is the flag value (the key for propagation flag is -1)
// For data=value, the key is the data
flagCollisions := map[int]bool{}
dataCollisions := map[string]bool{}
var newOptions []string
// We process in reverse order
for i := len(options) - 1; i >= 0; i-- {
option := options[i]
if option == "defaults" {
continue
}
if f, ok := flags[option]; ok && f.flag != 0 {
// There is only one propagation mode
key := f.flag
if propagationFlags[option] {
key = -1
}
// Check to see if there is collision for flag
if !flagCollisions[key] {
// We prepend the option and add to collision map
newOptions = append([]string{option}, newOptions...)
flagCollisions[key] = true
}
continue
}
opt := strings.SplitN(option, "=", 2)
if len(opt) != 2 || !validFlags[opt[0]] {
return nil, fmt.Errorf("Invalid tmpfs option %q", opt)
}
if !dataCollisions[opt[0]] {
// We prepend the option and add to collision map
newOptions = append([]string{option}, newOptions...)
dataCollisions[opt[0]] = true
}
}
return newOptions, nil
}
// Parse fstab type mount options into mount() flags
// and device specific data
func parseOptions(options string) (int, string) {
var (
flag int
data []string
)
for _, o := range strings.Split(options, ",") {
// If the option does not exist in the flags table or the flag
// is not supported on the platform,
// then it is a data value for a specific fs type
if f, exists := flags[o]; exists && f.flag != 0 {
if f.clear {
flag &= ^f.flag
} else {
flag |= f.flag
}
} else {
data = append(data, o)
}
}
return flag, strings.Join(data, ",")
}

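A concrete example of the de-duplication MergeTmpfsOptions performs (illustrative only, based on a reading of the code above; later options win because the slice is processed in reverse):

package main

import (
    "fmt"
    "log"

    "github.com/docker/docker/pkg/mount"
)

func main() {
    // "exec" overrides the earlier "noexec" (same underlying flag), and the
    // second "size=" value overrides the first, so the result is
    // [exec size=128m].
    opts, err := mount.MergeTmpfsOptions([]string{"noexec", "exec", "size=64m", "size=128m"})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(opts)
}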
View File

@ -0,0 +1,49 @@
// +build freebsd,cgo
package mount // import "github.com/docker/docker/pkg/mount"
/*
#include <sys/mount.h>
*/
import "C"
const (
// RDONLY will mount the filesystem as read-only.
RDONLY = C.MNT_RDONLY
// NOSUID will not allow set-user-identifier or set-group-identifier bits to
// take effect.
NOSUID = C.MNT_NOSUID
// NOEXEC will not allow execution of any binaries on the mounted file system.
NOEXEC = C.MNT_NOEXEC
// SYNCHRONOUS will allow any I/O to the file system to be done synchronously.
SYNCHRONOUS = C.MNT_SYNCHRONOUS
// NOATIME will not update the file access time when reading from a file.
NOATIME = C.MNT_NOATIME
)
// These flags are unsupported.
const (
BIND = 0
DIRSYNC = 0
MANDLOCK = 0
NODEV = 0
NODIRATIME = 0
UNBINDABLE = 0
RUNBINDABLE = 0
PRIVATE = 0
RPRIVATE = 0
SHARED = 0
RSHARED = 0
SLAVE = 0
RSLAVE = 0
RBIND = 0
RELATIVE = 0
RELATIME = 0
REMOUNT = 0
STRICTATIME = 0
mntDetach = 0
)

View File

@ -0,0 +1,87 @@
package mount // import "github.com/docker/docker/pkg/mount"
import (
"golang.org/x/sys/unix"
)
const (
// RDONLY will mount the file system read-only.
RDONLY = unix.MS_RDONLY
// NOSUID will not allow set-user-identifier or set-group-identifier bits to
// take effect.
NOSUID = unix.MS_NOSUID
// NODEV will not interpret character or block special devices on the file
// system.
NODEV = unix.MS_NODEV
// NOEXEC will not allow execution of any binaries on the mounted file system.
NOEXEC = unix.MS_NOEXEC
// SYNCHRONOUS will allow I/O to the file system to be done synchronously.
SYNCHRONOUS = unix.MS_SYNCHRONOUS
// DIRSYNC will force all directory updates within the file system to be done
// synchronously. This affects the following system calls: create, link,
// unlink, symlink, mkdir, rmdir, mknod and rename.
DIRSYNC = unix.MS_DIRSYNC
// REMOUNT will attempt to remount an already-mounted file system. This is
// commonly used to change the mount flags for a file system, especially to
// make a readonly file system writeable. It does not change device or mount
// point.
REMOUNT = unix.MS_REMOUNT
// MANDLOCK will force mandatory locks on a filesystem.
MANDLOCK = unix.MS_MANDLOCK
// NOATIME will not update the file access time when reading from a file.
NOATIME = unix.MS_NOATIME
// NODIRATIME will not update the directory access time.
NODIRATIME = unix.MS_NODIRATIME
// BIND remounts a subtree somewhere else.
BIND = unix.MS_BIND
// RBIND remounts a subtree and all possible submounts somewhere else.
RBIND = unix.MS_BIND | unix.MS_REC
// UNBINDABLE creates a mount which cannot be cloned through a bind operation.
UNBINDABLE = unix.MS_UNBINDABLE
// RUNBINDABLE marks the entire mount tree as UNBINDABLE.
RUNBINDABLE = unix.MS_UNBINDABLE | unix.MS_REC
// PRIVATE creates a mount which carries no propagation abilities.
PRIVATE = unix.MS_PRIVATE
// RPRIVATE marks the entire mount tree as PRIVATE.
RPRIVATE = unix.MS_PRIVATE | unix.MS_REC
// SLAVE creates a mount which receives propagation from its master, but not
// vice versa.
SLAVE = unix.MS_SLAVE
// RSLAVE marks the entire mount tree as SLAVE.
RSLAVE = unix.MS_SLAVE | unix.MS_REC
// SHARED creates a mount which provides the ability to create mirrors of
// that mount such that mounts and unmounts within any of the mirrors
// propagate to the other mirrors.
SHARED = unix.MS_SHARED
// RSHARED marks the entire mount tree as SHARED.
RSHARED = unix.MS_SHARED | unix.MS_REC
// RELATIME updates inode access times relative to modify or change time.
RELATIME = unix.MS_RELATIME
// STRICTATIME allows explicitly requesting full atime updates. This makes
// it possible for the kernel to default to relatime or noatime but still
// allow userspace to override it.
STRICTATIME = unix.MS_STRICTATIME
mntDetach = unix.MNT_DETACH
)

View File

@ -0,0 +1,31 @@
// +build !linux,!freebsd freebsd,!cgo
package mount // import "github.com/docker/docker/pkg/mount"
// These flags are unsupported.
const (
BIND = 0
DIRSYNC = 0
MANDLOCK = 0
NOATIME = 0
NODEV = 0
NODIRATIME = 0
NOEXEC = 0
NOSUID = 0
UNBINDABLE = 0
RUNBINDABLE = 0
PRIVATE = 0
RPRIVATE = 0
SHARED = 0
RSHARED = 0
SLAVE = 0
RSLAVE = 0
RBIND = 0
RELATIME = 0
RELATIVE = 0
REMOUNT = 0
STRICTATIME = 0
SYNCHRONOUS = 0
RDONLY = 0
mntDetach = 0
)

159
vendor/github.com/docker/docker/pkg/mount/mount.go generated vendored Normal file
View File

@ -0,0 +1,159 @@
package mount // import "github.com/docker/docker/pkg/mount"
import (
"sort"
"strconv"
"strings"
"github.com/sirupsen/logrus"
)
// mountError records an error from mount or unmount operation
type mountError struct {
op string
source, target string
flags uintptr
data string
err error
}
func (e *mountError) Error() string {
out := e.op + " "
if e.source != "" {
out += e.source + ":" + e.target
} else {
out += e.target
}
if e.flags != uintptr(0) {
out += ", flags: 0x" + strconv.FormatUint(uint64(e.flags), 16)
}
if e.data != "" {
out += ", data: " + e.data
}
out += ": " + e.err.Error()
return out
}
// Cause returns the underlying cause of the error
func (e *mountError) Cause() error {
return e.err
}
// FilterFunc is a type defining a callback function
// to filter out unwanted entries. It takes a pointer
// to an Info struct (not fully populated, currently
// only Mountpoint is filled in), and returns two booleans:
// - skip: true if the entry should be skipped
// - stop: true if parsing should be stopped after the entry
type FilterFunc func(*Info) (skip, stop bool)
// PrefixFilter discards all entries whose mount points
// do not start with the specified prefix
func PrefixFilter(prefix string) FilterFunc {
return func(m *Info) (bool, bool) {
skip := !strings.HasPrefix(m.Mountpoint, prefix)
return skip, false
}
}
// SingleEntryFilter looks for a specific entry
func SingleEntryFilter(mp string) FilterFunc {
return func(m *Info) (bool, bool) {
if m.Mountpoint == mp {
return false, true // don't skip, stop now
}
return true, false // skip, keep going
}
}
// ParentsFilter returns all entries whose mount points
// can be parents of a path specified, discarding others.
// For example, given `/var/lib/docker/something`, entries
// like `/var/lib/docker`, `/var` and `/` are returned.
func ParentsFilter(path string) FilterFunc {
return func(m *Info) (bool, bool) {
skip := !strings.HasPrefix(path, m.Mountpoint)
return skip, false
}
}
// GetMounts retrieves a list of mounts for the current running process,
// with an optional filter applied (use nil for no filter).
func GetMounts(f FilterFunc) ([]*Info, error) {
return parseMountTable(f)
}
// Mounted determines if a specified mountpoint has been mounted.
// On Linux it looks at /proc/self/mountinfo.
func Mounted(mountpoint string) (bool, error) {
entries, err := GetMounts(SingleEntryFilter(mountpoint))
if err != nil {
return false, err
}
return len(entries) > 0, nil
}
// Mount will mount filesystem according to the specified configuration, on the
// condition that the target path is *not* already mounted. Options must be
// specified like the mount or fstab unix commands: "opt1=val1,opt2=val2". See
// flags.go for supported option flags.
func Mount(device, target, mType, options string) error {
flag, data := parseOptions(options)
if flag&REMOUNT != REMOUNT {
if mounted, err := Mounted(target); err != nil || mounted {
return err
}
}
return mount(device, target, mType, uintptr(flag), data)
}
// ForceMount will mount a filesystem according to the specified configuration,
// *regardless* of whether the target path is already mounted. Options must be
// specified like the mount or fstab unix commands: "opt1=val1,opt2=val2". See
// flags.go for supported option flags.
func ForceMount(device, target, mType, options string) error {
flag, data := parseOptions(options)
return mount(device, target, mType, uintptr(flag), data)
}
// Unmount lazily unmounts a filesystem on supported platforms, otherwise
// does a normal unmount.
func Unmount(target string) error {
return unmount(target, mntDetach)
}
// RecursiveUnmount unmounts the target and all mounts underneath, starting with
// the deepest mount first.
func RecursiveUnmount(target string) error {
mounts, err := parseMountTable(PrefixFilter(target))
if err != nil {
return err
}
// Make the deepest mount be first
sort.Slice(mounts, func(i, j int) bool {
return len(mounts[i].Mountpoint) > len(mounts[j].Mountpoint)
})
for i, m := range mounts {
logrus.Debugf("Trying to unmount %s", m.Mountpoint)
err = unmount(m.Mountpoint, mntDetach)
if err != nil {
if i == len(mounts)-1 { // last mount
if mounted, e := Mounted(m.Mountpoint); e != nil || mounted {
return err
}
} else {
// This is some submount, we can ignore this error for now, the final unmount will fail if this is a real problem
logrus.WithError(err).Warnf("Failed to unmount submount %s", m.Mountpoint)
}
}
logrus.Debugf("Unmounted %s", m.Mountpoint)
}
return nil
}

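A small sketch of the public entry points above (illustrative only; the mount point path is an assumption):

package main

import (
    "fmt"
    "log"

    "github.com/docker/docker/pkg/mount"
)

func main() {
    target := "/var/lib/faasd/cni" // hypothetical mount point for the example

    mounted, err := mount.Mounted(target)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("mounted:", mounted)

    if mounted {
        // RecursiveUnmount sorts submounts deepest-first and lazily
        // unmounts each of them before the target itself.
        if err := mount.RecursiveUnmount(target); err != nil {
            log.Fatal(err)
        }
    }
}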
View File

@ -0,0 +1,59 @@
package mount // import "github.com/docker/docker/pkg/mount"
/*
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <sys/_iovec.h>
#include <sys/mount.h>
#include <sys/param.h>
*/
import "C"
import (
"strings"
"syscall"
"unsafe"
)
func allocateIOVecs(options []string) []C.struct_iovec {
out := make([]C.struct_iovec, len(options))
for i, option := range options {
out[i].iov_base = unsafe.Pointer(C.CString(option))
out[i].iov_len = C.size_t(len(option) + 1)
}
return out
}
func mount(device, target, mType string, flag uintptr, data string) error {
isNullFS := false
xs := strings.Split(data, ",")
for _, x := range xs {
if x == "bind" {
isNullFS = true
}
}
options := []string{"fspath", target}
if isNullFS {
options = append(options, "fstype", "nullfs", "target", device)
} else {
options = append(options, "fstype", mType, "from", device)
}
rawOptions := allocateIOVecs(options)
for _, rawOption := range rawOptions {
defer C.free(rawOption.iov_base)
}
if errno := C.nmount(&rawOptions[0], C.uint(len(options)), C.int(flag)); errno != 0 {
return &mountError{
op: "mount",
source: device,
target: target,
flags: flag,
err: syscall.Errno(errno),
}
}
return nil
}

View File

@ -0,0 +1,73 @@
package mount // import "github.com/docker/docker/pkg/mount"
import (
"golang.org/x/sys/unix"
)
const (
// ptypes is the set of propagation types.
ptypes = unix.MS_SHARED | unix.MS_PRIVATE | unix.MS_SLAVE | unix.MS_UNBINDABLE
// pflags is the full set of valid flags for a change propagation call.
pflags = ptypes | unix.MS_REC | unix.MS_SILENT
// broflags is the combination of bind and read only
broflags = unix.MS_BIND | unix.MS_RDONLY
)
// isremount returns true if either device name or flags identify a remount request, false otherwise.
func isremount(device string, flags uintptr) bool {
switch {
// We treat device "" and "none" as a remount request to provide compatibility with
// requests that don't explicitly set MS_REMOUNT such as those manipulating bind mounts.
case flags&unix.MS_REMOUNT != 0, device == "", device == "none":
return true
default:
return false
}
}
func mount(device, target, mType string, flags uintptr, data string) error {
oflags := flags &^ ptypes
if !isremount(device, flags) || data != "" {
// Initial call applying all non-propagation flags for mount
// or remount with changed data
if err := unix.Mount(device, target, mType, oflags, data); err != nil {
return &mountError{
op: "mount",
source: device,
target: target,
flags: oflags,
data: data,
err: err,
}
}
}
if flags&ptypes != 0 {
// Change the propagation type.
if err := unix.Mount("", target, "", flags&pflags, ""); err != nil {
return &mountError{
op: "remount",
target: target,
flags: flags & pflags,
err: err,
}
}
}
if oflags&broflags == broflags {
// Remount the bind to apply read only.
if err := unix.Mount("", target, "", oflags|unix.MS_REMOUNT, ""); err != nil {
return &mountError{
op: "remount-ro",
target: target,
flags: oflags | unix.MS_REMOUNT,
err: err,
}
}
}
return nil
}

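Following the logic in mount() above, a read-only bind mount requested through the package API ends up as two syscalls: the initial MS_BIND mount and a follow-up MS_REMOUNT that actually applies MS_RDONLY. A sketch (paths are assumptions, not part of the vendored diff):

package main

import (
    "log"

    "github.com/docker/docker/pkg/mount"
)

func main() {
    // "bind,ro" parses to MS_BIND|MS_RDONLY; the read-only bit only takes
    // effect after the remount issued by mount() above.
    if err := mount.Mount("/srv/data", "/mnt/ro-data", "none", "bind,ro"); err != nil {
        log.Fatal(err)
    }
    defer mount.Unmount("/mnt/ro-data")
}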
View File

@ -0,0 +1,7 @@
// +build !linux,!freebsd freebsd,!cgo
package mount // import "github.com/docker/docker/pkg/mount"
func mount(device, target, mType string, flag uintptr, data string) error {
panic("Not implemented")
}

40
vendor/github.com/docker/docker/pkg/mount/mountinfo.go generated vendored Normal file
View File

@ -0,0 +1,40 @@
package mount // import "github.com/docker/docker/pkg/mount"
// Info reveals information about a particular mounted filesystem. This
// struct is populated from the content in the /proc/<pid>/mountinfo file.
type Info struct {
// ID is a unique identifier of the mount (may be reused after umount).
ID int
// Parent indicates the ID of the mount parent (or of self for the top of the
// mount tree).
Parent int
// Major indicates one half of the device ID which identifies the device class.
Major int
// Minor indicates one half of the device ID which identifies a specific
// instance of device.
Minor int
// Root of the mount within the filesystem.
Root string
// Mountpoint indicates the mount point relative to the process's root.
Mountpoint string
// Opts represents mount-specific options.
Opts string
// Optional represents optional fields.
Optional string
// Fstype indicates the type of filesystem, such as EXT3.
Fstype string
// Source indicates filesystem specific information or "none".
Source string
// VfsOpts represents per super block options.
VfsOpts string
}

View File

@ -0,0 +1,55 @@
package mount // import "github.com/docker/docker/pkg/mount"
/*
#include <sys/param.h>
#include <sys/ucred.h>
#include <sys/mount.h>
*/
import "C"
import (
"fmt"
"reflect"
"unsafe"
)
// parseMountTable returns information about mounted filesystems on FreeBSD,
// obtained from getmntinfo().
func parseMountTable(filter FilterFunc) ([]*Info, error) {
var rawEntries *C.struct_statfs
count := int(C.getmntinfo(&rawEntries, C.MNT_WAIT))
if count == 0 {
return nil, fmt.Errorf("Failed to call getmntinfo")
}
var entries []C.struct_statfs
header := (*reflect.SliceHeader)(unsafe.Pointer(&entries))
header.Cap = count
header.Len = count
header.Data = uintptr(unsafe.Pointer(rawEntries))
var out []*Info
for _, entry := range entries {
var mountinfo Info
var skip, stop bool
mountinfo.Mountpoint = C.GoString(&entry.f_mntonname[0])
if filter != nil {
// filter out entries we're not interested in
skip, stop = filter(&mountinfo)
if skip {
continue
}
}
mountinfo.Source = C.GoString(&entry.f_mntfromname[0])
mountinfo.Fstype = C.GoString(&entry.f_fstypename[0])
out = append(out, &mountinfo)
if stop {
break
}
}
return out, nil
}

View File

@ -0,0 +1,144 @@
package mount // import "github.com/docker/docker/pkg/mount"
import (
"bufio"
"fmt"
"io"
"os"
"strconv"
"strings"
"github.com/pkg/errors"
)
func parseInfoFile(r io.Reader, filter FilterFunc) ([]*Info, error) {
s := bufio.NewScanner(r)
out := []*Info{}
var err error
for s.Scan() {
if err = s.Err(); err != nil {
return nil, err
}
/*
See http://man7.org/linux/man-pages/man5/proc.5.html
36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue
(1)(2)(3) (4) (5) (6) (7) (8) (9) (10) (11)
(1) mount ID: unique identifier of the mount (may be reused after umount)
(2) parent ID: ID of parent (or of self for the top of the mount tree)
(3) major:minor: value of st_dev for files on filesystem
(4) root: root of the mount within the filesystem
(5) mount point: mount point relative to the process's root
(6) mount options: per mount options
(7) optional fields: zero or more fields of the form "tag[:value]"
(8) separator: marks the end of the optional fields
(9) filesystem type: name of filesystem of the form "type[.subtype]"
(10) mount source: filesystem specific information or "none"
(11) super options: per super block options
*/
text := s.Text()
fields := strings.Split(text, " ")
numFields := len(fields)
if numFields < 10 {
// should be at least 10 fields
return nil, fmt.Errorf("Parsing '%s' failed: not enough fields (%d)", text, numFields)
}
p := &Info{}
// ignore any numbers parsing errors, as there should not be any
p.ID, _ = strconv.Atoi(fields[0])
p.Parent, _ = strconv.Atoi(fields[1])
mm := strings.Split(fields[2], ":")
if len(mm) != 2 {
return nil, fmt.Errorf("Parsing '%s' failed: unexpected minor:major pair %s", text, mm)
}
p.Major, _ = strconv.Atoi(mm[0])
p.Minor, _ = strconv.Atoi(mm[1])
p.Root, err = strconv.Unquote(`"` + fields[3] + `"`)
if err != nil {
return nil, errors.Wrapf(err, "Parsing '%s' failed: unable to unquote root field", fields[3])
}
p.Mountpoint, err = strconv.Unquote(`"` + fields[4] + `"`)
if err != nil {
return nil, errors.Wrapf(err, "Parsing '%s' failed: unable to unquote mount point field", fields[4])
}
p.Opts = fields[5]
var skip, stop bool
if filter != nil {
// filter out entries we're not interested in
skip, stop = filter(p)
if skip {
continue
}
}
// zero or more optional fields, terminated by a separator (-)
i := 6
for ; i < numFields && fields[i] != "-"; i++ {
switch i {
case 6:
p.Optional = fields[6]
default:
/* NOTE there might be more optional fields before the separator, such as
fields[7]...fields[N] (where N < sepIndex), although
as of Linux kernel 4.15 the only known ones are
mount propagation flags in fields[6]. The correct
behavior is to ignore any unknown optional fields.
*/
break
}
}
if i == numFields {
return nil, fmt.Errorf("Parsing '%s' failed: missing separator ('-')", text)
}
// There should be 3 fields after the separator...
if i+4 > numFields {
return nil, fmt.Errorf("Parsing '%s' failed: not enough fields after a separator", text)
}
// ... but in Linux <= 3.9 mounting a cifs with spaces in a share name
// (like "//serv/My Documents") _may_ end up having a space in the last field
// of mountinfo (like "unc=//serv/My Documents"). Since kernel 3.10-rc1, cifs
// option unc= is ignored, so a space should not appear. In here we ignore
// those "extra" fields caused by extra spaces.
p.Fstype = fields[i+1]
p.Source = fields[i+2]
p.VfsOpts = fields[i+3]
out = append(out, p)
if stop {
break
}
}
return out, nil
}
// Parse /proc/self/mountinfo because comparing Dev and ino does not work from
// bind mounts
func parseMountTable(filter FilterFunc) ([]*Info, error) {
f, err := os.Open("/proc/self/mountinfo")
if err != nil {
return nil, err
}
defer f.Close()
return parseInfoFile(f, filter)
}
// PidMountInfo collects the mounts for a specific process ID. If the process
// ID is unknown, it is better to use `GetMounts` which will inspect
// "/proc/self/mountinfo" instead.
func PidMountInfo(pid int) ([]*Info, error) {
f, err := os.Open(fmt.Sprintf("/proc/%d/mountinfo", pid))
if err != nil {
return nil, err
}
defer f.Close()
return parseInfoFile(f, nil)
}

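A short sketch of reading the mount table with the filters defined earlier in mount.go (illustrative only; the prefix is an assumption):

package main

import (
    "fmt"
    "log"

    "github.com/docker/docker/pkg/mount"
)

func main() {
    // Only entries whose mount point starts with /var/lib are parsed out of
    // /proc/self/mountinfo; the rest are skipped by the filter.
    infos, err := mount.GetMounts(mount.PrefixFilter("/var/lib"))
    if err != nil {
        log.Fatal(err)
    }
    for _, info := range infos {
        fmt.Printf("%s %s (%s)\n", info.Mountpoint, info.Fstype, info.Opts)
    }
}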
View File

@ -0,0 +1,12 @@
// +build !windows,!linux,!freebsd freebsd,!cgo
package mount // import "github.com/docker/docker/pkg/mount"
import (
"fmt"
"runtime"
)
func parseMountTable(f FilterFunc) ([]*Info, error) {
return nil, fmt.Errorf("mount.parseMountTable is not implemented on %s/%s", runtime.GOOS, runtime.GOARCH)
}

View File

@ -0,0 +1,6 @@
package mount // import "github.com/docker/docker/pkg/mount"
func parseMountTable(f FilterFunc) ([]*Info, error) {
// Do NOT return an error!
return nil, nil
}

View File

@ -0,0 +1,71 @@
package mount // import "github.com/docker/docker/pkg/mount"
// MakeShared ensures a mounted filesystem has the SHARED mount option enabled.
// See the supported options in flags.go for further reference.
func MakeShared(mountPoint string) error {
return ensureMountedAs(mountPoint, SHARED)
}
// MakeRShared ensures a mounted filesystem has the RSHARED mount option enabled.
// See the supported options in flags.go for further reference.
func MakeRShared(mountPoint string) error {
return ensureMountedAs(mountPoint, RSHARED)
}
// MakePrivate ensures a mounted filesystem has the PRIVATE mount option enabled.
// See the supported options in flags.go for further reference.
func MakePrivate(mountPoint string) error {
return ensureMountedAs(mountPoint, PRIVATE)
}
// MakeRPrivate ensures a mounted filesystem has the RPRIVATE mount option
// enabled. See the supported options in flags.go for further reference.
func MakeRPrivate(mountPoint string) error {
return ensureMountedAs(mountPoint, RPRIVATE)
}
// MakeSlave ensures a mounted filesystem has the SLAVE mount option enabled.
// See the supported options in flags.go for further reference.
func MakeSlave(mountPoint string) error {
return ensureMountedAs(mountPoint, SLAVE)
}
// MakeRSlave ensures a mounted filesystem has the RSLAVE mount option enabled.
// See the supported options in flags.go for further reference.
func MakeRSlave(mountPoint string) error {
return ensureMountedAs(mountPoint, RSLAVE)
}
// MakeUnbindable ensures a mounted filesystem has the UNBINDABLE mount option
// enabled. See the supported options in flags.go for further reference.
func MakeUnbindable(mountPoint string) error {
return ensureMountedAs(mountPoint, UNBINDABLE)
}
// MakeRUnbindable ensures a mounted filesystem has the RUNBINDABLE mount
// option enabled. See the supported options in flags.go for further reference.
func MakeRUnbindable(mountPoint string) error {
return ensureMountedAs(mountPoint, RUNBINDABLE)
}
// MakeMount ensures that the file or directory given is a mount point,
// bind mounting it to itself in case it is not.
func MakeMount(mnt string) error {
mounted, err := Mounted(mnt)
if err != nil {
return err
}
if mounted {
return nil
}
return mount(mnt, mnt, "none", uintptr(BIND), "")
}
func ensureMountedAs(mnt string, flags int) error {
if err := MakeMount(mnt); err != nil {
return err
}
return mount("", mnt, "none", uintptr(flags), "")
}
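A hedged usage sketch: marking a directory as recursively shared before bind-mounting things beneath it, so that later mounts propagate. The path is a made-up example, and the call requires root on a Linux kernel with shared-subtree support.

package main

import (
	"log"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	// Hypothetical directory; it must already exist. MakeRShared
	// bind-mounts it onto itself if needed, then applies the shared
	// propagation flag recursively.
	if err := mount.MakeRShared("/var/lib/faasd"); err != nil {
		log.Fatalf("MakeRShared: %s", err)
	}
}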


@@ -0,0 +1,22 @@
// +build !windows

package mount // import "github.com/docker/docker/pkg/mount"
import "golang.org/x/sys/unix"
func unmount(target string, flags int) error {
err := unix.Unmount(target, flags)
if err == nil || err == unix.EINVAL {
// Ignore "not mounted" error here. Note the same error
// can be returned if flags are invalid, so this code
// assumes that the flags value is always correct.
return nil
}
return &mountError{
op: "umount",
target: target,
flags: uintptr(flags),
err: err,
}
}
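Callers do not reach this helper directly. A sketch under the assumption that the package's exported Unmount(target string) wrapper, defined elsewhere in the same vendored package, funnels into the function above:

package main

import (
	"log"

	"github.com/docker/docker/pkg/mount"
)

func main() {
	// Hypothetical mount point. Because EINVAL ("not mounted") is
	// swallowed above, unmounting a path that is not mounted is not
	// reported as an error.
	if err := mount.Unmount("/var/lib/faasd/some-mount"); err != nil {
		log.Fatalf("Unmount: %s", err)
	}
}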


@@ -0,0 +1,7 @@
// +build windows

package mount // import "github.com/docker/docker/pkg/mount"
func unmount(target string, flag int) error {
panic("Not implemented")
}


@@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2014-2018 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,4 +1,4 @@
Copyright (c) 2009,2014 Google Inc. All rights reserved.
Copyright (c) 2014-2018 The Docker & Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are


@@ -0,0 +1,16 @@
package system // import "github.com/docker/docker/pkg/system"
import (
"strings"
"golang.org/x/sys/windows"
)
// EscapeArgs makes a Windows-style escaped command line from a set of arguments
func EscapeArgs(args []string) string {
escapedArgs := make([]string, len(args))
for i, a := range args {
escapedArgs[i] = windows.EscapeArg(a)
}
return strings.Join(escapedArgs, " ")
}
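A small sketch of how EscapeArgs might be used (Windows-only, since the file depends on golang.org/x/sys/windows); the arguments are made up for illustration.

// +build windows

package main

import (
	"fmt"

	"github.com/docker/docker/pkg/system"
)

func main() {
	// Each argument is escaped per Windows command-line rules and the
	// results are joined with spaces into one command line string.
	args := []string{`C:\Program Files\app.exe`, "--name", "my value"}
	fmt.Println(system.EscapeArgs(args))
}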

vendor/github.com/docker/docker/pkg/system/chtimes.go (new file, 31 lines; generated, vendored)

@@ -0,0 +1,31 @@
package system // import "github.com/docker/docker/pkg/system"
import (
"os"
"time"
)
// Chtimes changes the access time and modified time of a file at the given path
func Chtimes(name string, atime time.Time, mtime time.Time) error {
unixMinTime := time.Unix(0, 0)
unixMaxTime := maxTime
// If the access or modified time is prior to the Unix Epoch, or after the
// end of Unix Time, os.Chtimes has undefined behavior.
// Default to the Unix Epoch in that case, just in case.
if atime.Before(unixMinTime) || atime.After(unixMaxTime) {
atime = unixMinTime
}
if mtime.Before(unixMinTime) || mtime.After(unixMaxTime) {
mtime = unixMinTime
}
if err := os.Chtimes(name, atime, mtime); err != nil {
return err
}
// Take platform specific action for setting create time.
return setCTime(name, mtime)
}
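A brief sketch of the clamping behaviour, using a made-up file path: an mtime before the Unix epoch is silently replaced with the epoch rather than being handed to os.Chtimes.

package main

import (
	"log"
	"time"

	"github.com/docker/docker/pkg/system"
)

func main() {
	// Hypothetical file; atime is left as "now", while the pre-epoch
	// mtime is clamped to time.Unix(0, 0) inside Chtimes.
	preEpoch := time.Date(1822, 1, 1, 0, 0, 0, 0, time.UTC)
	if err := system.Chtimes("/tmp/example.txt", time.Now(), preEpoch); err != nil {
		log.Fatal(err)
	}
}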


@@ -0,0 +1,14 @@
// +build !windows

package system // import "github.com/docker/docker/pkg/system"
import (
"time"
)
// setCTime will set the create time on a file. On Unix, the create
// time is updated as a side effect of setting the modified time, so
// no action is required.
func setCTime(path string, ctime time.Time) error {
return nil
}


@@ -0,0 +1,26 @@
package system // import "github.com/docker/docker/pkg/system"
import (
"time"
"golang.org/x/sys/windows"
)
// setCTime will set the create time on a file. On Windows, this requires
// calling SetFileTime and explicitly including the create time.
func setCTime(path string, ctime time.Time) error {
ctimespec := windows.NsecToTimespec(ctime.UnixNano())
pathp, e := windows.UTF16PtrFromString(path)
if e != nil {
return e
}
h, e := windows.CreateFile(pathp,
windows.FILE_WRITE_ATTRIBUTES, windows.FILE_SHARE_WRITE, nil,
windows.OPEN_EXISTING, windows.FILE_FLAG_BACKUP_SEMANTICS, 0)
if e != nil {
return e
}
defer windows.Close(h)
c := windows.NsecToFiletime(windows.TimespecToNsec(ctimespec))
return windows.SetFileTime(h, &c, nil, nil)
}

vendor/github.com/docker/docker/pkg/system/errors.go (new file, 13 lines; generated, vendored)

@@ -0,0 +1,13 @@
package system // import "github.com/docker/docker/pkg/system"
import (
"errors"
)
var (
// ErrNotSupportedPlatform means the platform is not supported.
ErrNotSupportedPlatform = errors.New("platform and architecture is not supported")
// ErrNotSupportedOperatingSystem means the operating system is not supported.
ErrNotSupportedOperatingSystem = errors.New("operating system is not supported")
)

Some files were not shown because too many files have changed in this diff.