Compare commits


12 Commits
0.9.3 ... 0.9.5

Author SHA1 Message Date
b20e5614c7 Return out of scale handler when hitting an error
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-19 21:18:45 +01:00
40829bbf88 Restart stopped tasks
This patch reports stopped tasks as having zero scale, which
means the gateway will send a "scale up" request, the same
way as it does for paused containers, or those which have
no task due to a reboot of the machine.

The scale up logic will now delete the stopped task and
recreate the task.

Tested with nodeinfo and figlet on a Dell XPS with
Ubuntu 16.04. The scaling logic has been re-written, and
re-tested by manually pausing and manually removing
the task of a container.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-19 21:18:45 +01:00
87f49b0289 Add upgrade instructions
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-19 19:52:46 +01:00
b817479828 Document logs redirection
Fixes: #106

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-18 12:24:39 +01:00
faae82aa1c Move core services logs to the journal
Logs can now be viewed with the following, adding -f to follow
the logs.

journalctl -t default:gateway

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-18 12:24:39 +01:00
cddc10acbe Document APIs
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-18 12:02:56 +01:00
1c8e8bb615 Fix proxy test
The proxy test needed its own local resolver to pass.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-18 12:02:56 +01:00
6e537d1fde Add docs for compose file
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-18 12:02:56 +01:00
c314af4f98 Add local resolver for system containers
System containers can now be proxied to the localhost or to
all adapters using docker-compose.

Tested with NATS and Prometheus to 127.0.0.1 in multipass
and with the gateway to 0.0.0.0.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-18 12:02:56 +01:00
4189cfe52c Expose ports for core services
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-18 12:02:56 +01:00
9e2f571cf7 Update README.md 2020-09-16 21:21:23 +01:00
93825e8354 Add null-checking for labels
Fixes an issue introduced in #45 which went undetected. When
users do not pass "labels" to the deployment, or a valid
empty object, a nil dereference causes a panic.

Fixes: #101

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2020-09-11 12:15:33 +01:00
13 changed files with 396 additions and 100 deletions


@@ -7,7 +7,7 @@ faasd is built for everyone else, for those who have no desire to manage expensi
[![OpenFaaS](https://img.shields.io/badge/openfaas-serverless-blue.svg)](https://www.openfaas.com)
![Downloads](https://img.shields.io/github/downloads/openfaas/faasd/total)
faasd is [OpenFaaS](https://github.com/openfaas/) reimagined, but without the cost and complexity of Kubernetes. It runs on a single host with very modest requirements, making it very fast and easy to manage. Under the hood it uses [containerd](https://containerd.io/) and [Container Networking Interface (CNI)](https://github.com/containernetworking/cni) along with the same core OpenFaaS components from the main project.
faasd is [OpenFaaS](https://github.com/openfaas/) reimagined, but without the cost and complexity of Kubernetes. It runs on a single host with very modest requirements, making it fast and easy to manage. Under the hood it uses [containerd](https://containerd.io/) and [Container Networking Interface (CNI)](https://github.com/containernetworking/cni) along with the same core OpenFaaS components from the main project.
## When should you use faasd over OpenFaaS on Kubernetes?
@@ -56,6 +56,8 @@ Automate everything within < 60 seconds and get a public URL and IP address back
* [Provision faasd on DigitalOcean with built-in TLS support](docs/bootstrap/digitalocean-terraform/README.md)
## Operational concerns
### A note on private repos / registries
To use private image repos, `~/.docker/config.json` needs to be copied to `/var/lib/faasd/.docker/config.json`.
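For example, a minimal sketch of copying the credentials into place (paths as above; `sudo` assumed on the faasd host):

```bash
sudo mkdir -p /var/lib/faasd/.docker
sudo cp $HOME/.docker/config.json /var/lib/faasd/.docker/config.json
```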
@@ -87,6 +89,65 @@ journalctl -t openfaas-fn:figlet -f &
echo logs | faas-cli invoke figlet
```
### Logs for the core services
Core services as defined in the docker-compose.yaml file are deployed as containers by faasd.
View the logs for a component by giving its NAME:
```bash
journalctl -t default:NAME
journalctl -t default:gateway
journalctl -t default:queue-worker
```
You can also use `-f` to follow the logs, `--lines` to tail a number of lines, or `--since` to give a timeframe.
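For example (the values are illustrative):

```bash
journalctl -t default:gateway -f                    # follow new entries
journalctl -t default:gateway --lines 100           # tail the last 100 lines
journalctl -t default:gateway --since "1 hour ago"  # limit by timeframe
```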
### Exposing core services
The OpenFaaS stack is made up of several core services including NATS and Prometheus. You can expose these through the `docker-compose.yaml` file located at `/var/lib/faasd`.
Expose the gateway to all adapters:
```yaml
gateway:
ports:
- "8080:8080"
```
Expose Prometheus only to 127.0.0.1:
```yaml
prometheus:
ports:
- "127.0.0.1:9090:9090"
```
### Upgrading faasd
To upgrade `faasd`, either re-create your VM using Terraform, or simply replace the faasd binary with a newer one.
```bash
systemctl stop faasd-provider
systemctl stop faasd
# Replace /usr/local/bin/faasd with the desired release
# Replace /var/lib/faasd/docker-compose.yaml with the matching version for
# that release.
# Remember to keep any custom patches you make such as exposing additional
# ports, or updating timeout values
systemctl start faasd
systemctl start faasd-provider
```
You could also perform this task over SSH, or use a configuration management tool.
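As a sketch, the same steps over SSH might look like the following; the hostname and download URL are illustrative, not part of these docs:

```bash
ssh ubuntu@faasd-host "sudo systemctl stop faasd-provider faasd && \
  sudo curl -fSLo /usr/local/bin/faasd https://github.com/openfaas/faasd/releases/download/0.9.5/faasd && \
  sudo chmod +x /usr/local/bin/faasd && \
  sudo systemctl start faasd faasd-provider"
```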
> Note: if you are using Caddy or Let's Encrypt for free SSL certificates, you may hit rate-limits for generating new certificates if you do this too often within a given week.
## What does faasd deploy?
* faasd - itself, and its [faas-provider](https://github.com/openfaas/faas-provider) for containerd - CRUD for functions and services, implements the OpenFaaS REST API
@@ -119,7 +180,15 @@ For community functions see `faas-cli store --help`
For templates built by the community see: `faas-cli template store list`. You can also use the `dockerfile` template if you just want to migrate an existing service without the benefits of using a template.
### Workshop
### Training and courses
#### LinuxFoundation training course
The founder of faasd and OpenFaaS has written a training course for the LinuxFoundation which also covers how to use OpenFaaS on Kubernetes. Many of the same concepts can be applied to faasd, and the course is free:
* [Introduction to Serverless on Kubernetes](https://www.edx.org/course/introduction-to-serverless-on-kubernetes)
#### Community workshop
[The OpenFaaS workshop](https://github.com/openfaas/workshop/) is a set of 12 self-paced labs and provides a great starting point for learning the features of OpenFaaS. Not all features will be available or usable with faasd.
@@ -184,4 +253,9 @@ Other operations are pending development in the provider such as:
* [x] Configure `basic_auth` to protect the OpenFaaS gateway and faasd-provider HTTP API
* [x] Setup custom working directory for faasd `/var/lib/faasd/`
* [x] Use CNI to create network namespaces and adapters
* [x] Optionally expose core services from the docker-compose.yaml file, locally or to all adapters.
WIP:
* [ ] Annotation support (PR ready)
* [ ] Hard memory limits for functions (PR ready)


@@ -7,7 +7,6 @@ import (
"os"
"os/signal"
"path"
"strings"
"sync"
"syscall"
"time"
@@ -72,20 +71,15 @@ func runUp(cmd *cobra.Command, _ []string) error {
log.Printf("Supervisor created in: %s\n", time.Since(start).String())
start = time.Now()
err = supervisor.Start(services)
if err != nil {
if err := supervisor.Start(services); err != nil {
return err
}
defer supervisor.Close()
log.Printf("Supervisor init done in: %s\n", time.Since(start).String())
shutdownTimeout := time.Second * 1
timeout := time.Second * 60
proxyDoneCh := make(chan bool)
wg := sync.WaitGroup{}
wg.Add(1)
@@ -102,38 +96,38 @@ func runUp(cmd *cobra.Command, _ []string) error {
fmt.Println(err)
}
// Close proxy
proxyDoneCh <- true
// TODO: close proxies
time.AfterFunc(shutdownTimeout, func() {
wg.Done()
})
}()
gatewayURLChan := make(chan string, 1)
proxyPort := 8080
proxy := pkg.NewProxy(proxyPort, timeout)
go proxy.Start(gatewayURLChan, proxyDoneCh)
localResolver := pkg.NewLocalResolver(path.Join(cfg.workingDir, "hosts"))
go localResolver.Start()
go func() {
time.Sleep(3 * time.Second)
proxies := map[uint32]*pkg.Proxy{}
for _, svc := range services {
for _, port := range svc.Ports {
fileData, fileErr := ioutil.ReadFile(path.Join(cfg.workingDir, "hosts"))
if fileErr != nil {
log.Println(fileErr)
return
}
host := ""
lines := strings.Split(string(fileData), "\n")
for _, line := range lines {
if strings.Index(line, "gateway") > -1 {
host = line[:strings.Index(line, "\t")]
listenPort := port.Port
if _, ok := proxies[listenPort]; ok {
return fmt.Errorf("port %d already allocated", listenPort)
}
hostIP := "0.0.0.0"
if len(port.HostIP) > 0 {
hostIP = port.HostIP
}
upstream := fmt.Sprintf("%s:%d", svc.Name, port.TargetPort)
proxies[listenPort] = pkg.NewProxy(upstream, listenPort, hostIP, timeout, localResolver)
}
log.Printf("[up] Sending %s to proxy\n", host)
gatewayURLChan <- host + ":8080"
close(gatewayURLChan)
}()
}
// TODO: track proxies for later cancellation when receiving sigint/term
for _, v := range proxies {
go v.Start()
}
wg.Wait()
return nil


@@ -1,7 +1,7 @@
version: "3.7"
services:
basic-auth-plugin:
image: "docker.io/openfaas/basic-auth-plugin:0.18.17${ARCH_SUFFIX}"
image: "docker.io/openfaas/basic-auth-plugin:0.18.18${ARCH_SUFFIX}"
environment:
- port=8080
- secret_mount_path=/run/secrets
@@ -26,6 +26,8 @@ services:
- "8222"
- "--store=memory"
- "--cluster_id=faas-cluster"
# ports:
# - "127.0.0.1:8222:8222"
prometheus:
image: docker.io/prom/prometheus:v2.14.0
@@ -35,9 +37,11 @@ services:
target: /etc/prometheus/prometheus.yml
cap_add:
- CAP_NET_RAW
ports:
- "127.0.0.1:9090:9090"
gateway:
image: "docker.io/openfaas/gateway:0.18.17${ARCH_SUFFIX}"
image: "docker.io/openfaas/gateway:0.18.18${ARCH_SUFFIX}"
environment:
- basic_auth=true
- functions_provider_url=http://faasd-provider:8081/
@@ -65,6 +69,8 @@ services:
- basic-auth-plugin
- nats
- prometheus
ports:
- "8080:8080"
queue-worker:
image: docker.io/openfaas/queue-worker:0.11.2

pkg/local_resolver.go Normal file

@@ -0,0 +1,104 @@
package pkg
import (
"io/ioutil"
"log"
"os"
"strings"
"sync"
"time"
)
// LocalResolver provides hostname to IP look-up for faasd core services
type LocalResolver struct {
Path string
Map map[string]string
Mutex *sync.RWMutex
}
// NewLocalResolver creates a new resolver for reading from a hosts file
func NewLocalResolver(path string) Resolver {
return &LocalResolver{
Path: path,
Mutex: &sync.RWMutex{},
Map: make(map[string]string),
}
}
// Start polling the disk for the hosts file in Path
func (l *LocalResolver) Start() {
var lastStat os.FileInfo
for {
rebuild := false
if info, err := os.Stat(l.Path); err == nil {
if lastStat == nil {
rebuild = true
} else {
if !lastStat.ModTime().Equal(info.ModTime()) {
rebuild = true
}
}
lastStat = info
}
if rebuild {
log.Printf("Resolver rebuilding map")
l.rebuild()
}
time.Sleep(time.Second * 3)
}
}
func (l *LocalResolver) rebuild() {
l.Mutex.Lock()
defer l.Mutex.Unlock()
fileData, fileErr := ioutil.ReadFile(l.Path)
if fileErr != nil {
log.Printf("resolver rebuild error: %s", fileErr.Error())
return
}
lines := strings.Split(string(fileData), "\n")
for _, line := range lines {
index := strings.Index(line, "\t")
if len(line) > 0 && index > -1 {
ip := line[:index]
host := line[index+1:]
log.Printf("Resolver: %q=%q", host, ip)
l.Map[host] = ip
}
}
}
// Get resolves a hostname to an IP, or times out after the duration has passed
func (l *LocalResolver) Get(upstream string, got chan<- string, timeout time.Duration) {
start := time.Now()
for {
if val := l.get(upstream); len(val) > 0 {
got <- val
break
}
if time.Now().After(start.Add(timeout)) {
log.Printf("Timed out after %s getting host %q", timeout.String(), upstream)
break
}
time.Sleep(time.Millisecond * 250)
}
}
func (l *LocalResolver) get(upstream string) string {
l.Mutex.RLock()
defer l.Mutex.RUnlock()
if val, ok := l.Map[upstream]; ok {
return val
}
return ""
}


@@ -69,6 +69,7 @@ func deploy(ctx context.Context, req types.FunctionDeployment, client *container
if err != nil {
return err
}
imgRef := reference.TagNameOnly(r).String()
snapshotter := ""
@@ -98,6 +99,11 @@ func deploy(ctx context.Context, req types.FunctionDeployment, client *container
name := req.Service
labels := map[string]string{}
if req.Labels != nil {
labels = *req.Labels
}
container, err := client.NewContainer(
ctx,
name,
@@ -108,7 +114,7 @@ func deploy(ctx context.Context, req types.FunctionDeployment, client *container
oci.WithCapabilities([]string{"CAP_NET_RAW"}),
oci.WithMounts(mounts),
oci.WithEnv(envs)),
containerd.WithContainerLabels(*req.Labels),
containerd.WithContainerLabels(labels),
)
if err != nil {
@@ -122,7 +128,6 @@ func deploy(ctx context.Context, req types.FunctionDeployment, client *container
func createTask(ctx context.Context, client *containerd.Client, container containerd.Container, cni gocni.CNI) error {
name := container.ID()
// task, taskErr := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
task, taskErr := container.NewTask(ctx, cio.BinaryIO("/usr/local/bin/faasd", nil))


@@ -68,6 +68,7 @@ func GetFunction(client *containerd.Client, name string) (Function, error) {
if err != nil {
return Function{}, fmt.Errorf("unable to get task status for container: %s %s", name, err)
}
if svc.Status == "running" {
replicas = 1
f.pid = task.Pid()
@@ -85,7 +86,7 @@ func GetFunction(client *containerd.Client, name string) (Function, error) {
f.replicas = replicas
return f, nil
}
return Function{}, fmt.Errorf("unable to find function: %s, error %s", name, err)
}


@@ -11,6 +11,7 @@ import (
"github.com/containerd/containerd"
"github.com/containerd/containerd/namespaces"
gocni "github.com/containerd/go-cni"
"github.com/openfaas/faas-provider/types"
faasd "github.com/openfaas/faasd/pkg"
)
@@ -58,46 +59,71 @@ func MakeReplicaUpdateHandler(client *containerd.Client, cni gocni.CNI) func(w h
return
}
taskExists := true
var taskExists bool
var taskStatus *containerd.Status
task, taskErr := ctr.Task(ctx, nil)
if taskErr != nil {
msg := fmt.Sprintf("cannot load task for service %s, error: %s", name, taskErr)
log.Printf("[Scale] %s\n", msg)
taskExists = false
} else {
taskExists = true
status, statusErr := task.Status(ctx)
if statusErr != nil {
msg := fmt.Sprintf("cannot load task status for %s, error: %s", name, statusErr)
log.Printf("[Scale] %s\n", msg)
http.Error(w, msg, http.StatusInternalServerError)
return
} else {
taskStatus = &status
}
}
if req.Replicas > 0 {
if taskExists {
if status, statusErr := task.Status(ctx); statusErr == nil {
if status.Status == containerd.Paused {
if resumeErr := task.Resume(ctx); resumeErr != nil {
log.Printf("[Scale] error resuming task %s, error: %s\n", name, resumeErr)
http.Error(w, resumeErr.Error(), http.StatusBadRequest)
}
}
}
} else {
deployErr := createTask(ctx, client, ctr, cni)
if deployErr != nil {
log.Printf("[Scale] error deploying %s, error: %s\n", name, deployErr)
http.Error(w, deployErr.Error(), http.StatusBadRequest)
createNewTask := false
// Scale to zero
if req.Replicas == 0 {
// If a task is running, pause it
if taskExists && taskStatus.Status == containerd.Running {
if pauseErr := task.Pause(ctx); pauseErr != nil {
wrappedPauseErr := fmt.Errorf("error pausing task %s, error: %s", name, pauseErr)
log.Printf("[Scale] %s\n", wrappedPauseErr.Error())
http.Error(w, wrappedPauseErr.Error(), http.StatusNotFound)
return
}
return
}
} else {
if taskExists {
if status, statusErr := task.Status(ctx); statusErr == nil {
if status.Status == containerd.Running {
if pauseErr := task.Pause(ctx); pauseErr != nil {
log.Printf("[Scale] error pausing task %s, error: %s\n", name, pauseErr)
http.Error(w, pauseErr.Error(), http.StatusBadRequest)
}
}
}
}
}
}
if taskExists {
if taskStatus != nil {
if taskStatus.Status == containerd.Paused {
if resumeErr := task.Resume(ctx); resumeErr != nil {
log.Printf("[Scale] error resuming task %s, error: %s\n", name, resumeErr)
http.Error(w, resumeErr.Error(), http.StatusBadRequest)
return
}
} else if taskStatus.Status == containerd.Stopped {
// Stopped tasks cannot be restarted, must be removed, and created again
if _, delErr := task.Delete(ctx); delErr != nil {
log.Printf("[Scale] error deleting stopped task %s, error: %s\n", name, delErr)
http.Error(w, delErr.Error(), http.StatusBadRequest)
return
}
createNewTask = true
}
}
} else {
createNewTask = true
}
if createNewTask {
deployErr := createTask(ctx, client, ctr, cni)
if deployErr != nil {
log.Printf("[Scale] error deploying %s, error: %s\n", name, deployErr)
http.Error(w, deployErr.Error(), http.StatusBadRequest)
return
}
}
}
}


@@ -6,74 +6,91 @@ import (
"log"
"net"
"net/http"
"strconv"
"strings"
"time"
)
// NewProxy creates a HTTP proxy to expose the gateway container
// from OpenFaaS to the host
func NewProxy(port int, timeout time.Duration) *Proxy {
// NewProxy creates an HTTP proxy to expose a host
func NewProxy(upstream string, listenPort uint32, hostIP string, timeout time.Duration, resolver Resolver) *Proxy {
return &Proxy{
Port: port,
Timeout: timeout,
Upstream: upstream,
Port: listenPort,
HostIP: hostIP,
Timeout: timeout,
Resolver: resolver,
}
}
// Proxy for exposing a private container
type Proxy struct {
Timeout time.Duration
Port int
}
type proxyState struct {
Host string
// Port on which to listen to traffic
Port uint32
// Upstream is where to send traffic when received
Upstream string
// The IP to use to bind locally
HostIP string
Resolver Resolver
}
// Start listening and forwarding HTTP to the host
func (p *Proxy) Start(gatewayChan chan string, done chan bool) error {
tcp := p.Port
func (p *Proxy) Start() error {
http.DefaultClient.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
ps := proxyState{
Host: "",
upstreamHost, upstreamPort, err := getUpstream(p.Upstream, p.Port)
if err != nil {
return err
}
ps.Host = <-gatewayChan
log.Printf("Looking up IP for: %q", upstreamHost)
got := make(chan string, 1)
log.Printf("Starting faasd proxy on %d\n", tcp)
go p.Resolver.Get(upstreamHost, got, time.Second*5)
fmt.Printf("Gateway: %s\n", ps.Host)
ipAddress := <-got
close(got)
l, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%d", tcp))
upstreamAddr := fmt.Sprintf("%s:%d", ipAddress, upstreamPort)
localBind := fmt.Sprintf("%s:%d", p.HostIP, p.Port)
log.Printf("Proxy from: %s, to: %s (%s)\n", localBind, p.Upstream, ipAddress)
l, err := net.Listen("tcp", localBind)
if err != nil {
log.Printf("Error: %s", err.Error())
return err
}
defer l.Close()
for {
// Wait for a connection.
conn, err := l.Accept()
if err != nil {
acceptErr := fmt.Errorf("Unable to accept on %d, error: %s", tcp, err.Error())
acceptErr := fmt.Errorf("Unable to accept on %d, error: %s",
p.Port,
err.Error())
log.Printf("%s", acceptErr.Error())
return acceptErr
}
upstream, err := net.Dial("tcp", fmt.Sprintf("%s", ps.Host))
upstream, err := net.Dial("tcp", upstreamAddr)
if err != nil {
log.Printf("unable to dial to %s, error: %s", ps.Host, err.Error())
log.Printf("unable to dial to %s, error: %s", upstreamAddr, err.Error())
return err
}
go pipe(conn, upstream)
go pipe(upstream, conn)
}
}
@@ -81,3 +98,19 @@ func pipe(from net.Conn, to net.Conn) {
defer from.Close()
io.Copy(from, to)
}
func getUpstream(val string, defaultPort uint32) (string, uint32, error) {
upstreamHostname := val
upstreamPort := defaultPort
if in := strings.Index(val, ":"); in > -1 {
upstreamHostname = val[:in]
port, err := strconv.ParseInt(val[in+1:], 10, 32)
if err != nil {
return "", defaultPort, err
}
upstreamPort = uint32(port)
}
return upstreamHostname, upstreamPort, nil
}
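// For illustration (not part of the original source): getUpstream splits an
// optional ":port" suffix from the upstream, falling back to the default, e.g.
//   getUpstream("gateway:8080", 8081)  -> ("gateway", 8080, nil)
//   getUpstream("prometheus", 9090)    -> ("prometheus", 9090, nil)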


@@ -16,7 +16,7 @@ func Test_Proxy_ToPrivateServer(t *testing.T) {
wantBodyText := "OK"
wantBody := []byte(wantBodyText)
upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
upstreamSvr := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Body != nil {
defer r.Body.Close()
@@ -27,17 +27,19 @@ func Test_Proxy_ToPrivateServer(t *testing.T) {
}))
defer upstream.Close()
defer upstreamSvr.Close()
port := 8080
proxy := NewProxy(port, time.Second*1)
u, _ := url.Parse(upstreamSvr.URL)
log.Println("Host", u.Host)
upstreamAddr := u.Host
proxy := NewProxy(upstreamAddr, 8080, "127.0.0.1", time.Second*1, &mockResolver{})
gwChan := make(chan string, 1)
doneCh := make(chan bool)
go proxy.Start(gwChan, doneCh)
go proxy.Start()
u, _ := url.Parse(upstream.URL)
log.Println("Host", u.Host)
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
@@ -71,3 +73,14 @@ func Test_Proxy_ToPrivateServer(t *testing.T) {
doneCh <- true
}()
}
type mockResolver struct {
}
func (m *mockResolver) Start() {
}
func (m *mockResolver) Get(upstream string, got chan<- string, timeout time.Duration) {
got <- upstream
}

pkg/resolver.go Normal file

@@ -0,0 +1,12 @@
package pkg
import "time"
// Resolver resolves an upstream IP address for a given upstream host
type Resolver interface {
// Start any polling or connections required to resolve
Start()
// Get an IP address using an asynchronous operation
Get(upstream string, got chan<- string, timeout time.Duration)
}
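The interface is consumed asynchronously: the caller passes a channel and waits for the resolved IP, as the proxy does above. A minimal sketch, assuming the default working directory so the hosts file lives at `/var/lib/faasd/hosts`:

```go
package main

import (
	"fmt"
	"time"

	"github.com/openfaas/faasd/pkg"
)

func main() {
	// LocalResolver polls the hosts file that faasd writes for its core services.
	resolver := pkg.NewLocalResolver("/var/lib/faasd/hosts")
	go resolver.Start()

	// Get sends the resolved IP on the channel, or gives up after the timeout.
	got := make(chan string, 1)
	go resolver.Get("gateway", got, time.Second*5)

	select {
	case ip := <-got:
		fmt.Printf("gateway resolves to %s\n", ip)
	case <-time.After(time.Second * 6):
		fmt.Println("no answer within the timeout")
	}
}
```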


@@ -41,6 +41,13 @@ type Service struct {
Caps []string
Args []string
DependsOn []string
Ports []ServicePort
}
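// ServicePort maps a docker-compose port entry onto the supervisor's model.
// For illustration (comment not in the original source): the compose entry
// "127.0.0.1:9090:9090" becomes ServicePort{Port: 9090, TargetPort: 9090, HostIP: "127.0.0.1"}.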
type ServicePort struct {
TargetPort uint32
Port uint32
HostIP string
}
type Mount struct {
@@ -154,7 +161,7 @@ func (s *Supervisor) Start(svcs []Service) error {
Options: []string{"rbind", "ro"},
})
newContainer, containerCreateErr := s.client.NewContainer(
newContainer, err := s.client.NewContainer(
ctx,
svc.Name,
containerd.WithImage(image),
@@ -166,14 +173,14 @@ func (s *Supervisor) Start(svcs []Service) error {
oci.WithEnv(svc.Env)),
)
if containerCreateErr != nil {
log.Printf("Error creating container: %s\n", containerCreateErr)
return containerCreateErr
if err != nil {
log.Printf("Error creating container: %s\n", err)
return err
}
log.Printf("Created container: %s\n", newContainer.ID())
task, err := newContainer.NewTask(ctx, cio.NewCreator(cio.WithStdio))
task, err := newContainer.NewTask(ctx, cio.BinaryIO("/usr/local/bin/faasd", nil))
if err != nil {
log.Printf("Error creating task: %s\n", err)
return err
@@ -181,13 +188,14 @@ func (s *Supervisor) Start(svcs []Service) error {
labels := map[string]string{}
network, err := cninetwork.CreateCNINetwork(ctx, s.cni, task, labels)
if err != nil {
log.Printf("Error creating CNI for %s: %s", svc.Name, err)
return err
}
ip, err := cninetwork.GetIPAddress(network, task)
if err != nil {
log.Printf("Error getting IP for %s: %s", svc.Name, err)
return err
}
@@ -297,12 +305,26 @@ func ParseCompose(config *compose.Config) ([]Service, error) {
Env: env,
Mounts: mounts,
DependsOn: s.DependsOn,
Ports: convertPorts(s.Ports),
}
}
return services, nil
}
func convertPorts(ports []compose.ServicePortConfig) []ServicePort {
servicePorts := []ServicePort{}
for _, p := range ports {
servicePorts = append(servicePorts, ServicePort{
Port: p.Published,
TargetPort: p.Target,
HostIP: p.HostIP,
})
}
return servicePorts
}
// LoadComposeFile is a helper method for loading a docker-compose file
func LoadComposeFile(wd string, file string) (*compose.Config, error) {
return LoadComposeFileWithArch(wd, file, env.GetClientArch)


@@ -26,6 +26,8 @@ services:
- "8222"
- "--store=memory"
- "--cluster_id=faas-cluster"
ports:
- "127.0.0.1:8222:8222"
prometheus:
image: docker.io/prom/prometheus:v2.14.0
@@ -35,6 +37,8 @@ services:
target: /etc/prometheus/prometheus.yml
cap_add:
- CAP_NET_RAW
ports:
- "127.0.0.1:9090:9090"
gateway:
image: "docker.io/openfaas/gateway:0.18.17${ARCH_SUFFIX}"
@@ -65,6 +69,8 @@ services:
- basic-auth-plugin
- nats
- prometheus
ports:
- "8080:8080"
queue-worker:
image: docker.io/openfaas/queue-worker:0.11.2