23 Commits

Author SHA1 Message Date
f3599f4699 Apply gofmt
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-12-14 11:40:43 +00:00
3bafff7e09 Fix issue with empty CI tag
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-12-14 11:33:42 +00:00
e3171b49b0 Remove old ROADMAP
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-12-14 11:32:20 +00:00
e1c62f4875 Update Go and alpine versions
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-12-14 11:30:41 +00:00
b31419c8de Fix CI for deprecated set_output
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-12-14 11:29:08 +00:00
004bbddadb Update queue code for legacy NATS Streaming
NATS Streaming is deprecated and will no longer be supported
by Synadia from early 2023. Upgrade to OpenFaaS Pro as soon as
possible.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-12-14 11:24:45 +00:00
88bedf78bd Update ADOPTER
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-12-14 11:24:45 +00:00
9d0436e511 Update README intro
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2022-12-06 21:12:58 +00:00
c07bebbbc9 fix: return provider response during fnc listing errors
Return the original upstream response body when the list request
returns an error. In general, the provider returns useful and
actionable error messages for the user; the previous code hid these in
the logs, where they are easy for users to overlook.

Additionally, remove an early return from the error case after fetching
metrics. This looked like a bug and could result in empty API responses
if there was a Prometheus error.

Signed-off-by: Lucas Roesler <roesler.lucas@gmail.com>
2022-10-24 18:23:49 +01:00
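
A minimal sketch of the pattern this commit describes, assuming the provider response has been captured with an httptest.ResponseRecorder as the gateway's metrics handler does in the diff below; the helper name forwardUpstreamError is illustrative and not part of the codebase:

```go
package handlers

import (
	"io"
	"log"
	"net/http"
	"net/http/httptest"
)

// forwardUpstreamError is an illustrative helper (not the gateway's API): when
// the recorded provider response is not a 200, copy its status code and body
// back to the caller instead of masking it with a generic 500 message.
func forwardUpstreamError(w http.ResponseWriter, recorder *httptest.ResponseRecorder) bool {
	if recorder.Code == http.StatusOK {
		return false
	}

	body, _ := io.ReadAll(recorder.Result().Body)
	log.Printf("List functions responded with code %d, body: %s", recorder.Code, string(body))

	// Surface the provider's own, actionable message to the user.
	http.Error(w, string(body), recorder.Code)
	return true
}
```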
208b1b2235 Update ISSUE_TEMPLATE.md
Signed-off-by: Alex Ellis <alexellis2@gmail.com>
2022-10-24 11:47:46 +01:00
0255a9480b Add ADOPTER
Signed-off-by: Arne Diekmann <diekmann@neoskop.de>
2022-10-24 11:35:03 +01:00
f7f71f1497 Add Klar to ADOPTERS
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2022-10-24 10:54:44 +01:00
03b6d6c01b Add another user to ADOPTERS
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2022-10-20 13:27:28 +01:00
efffd83990 Add ADOPTER
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-10-20 09:30:46 +01:00
06433e11c0 Add ADOPTER
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-10-20 09:29:24 +01:00
806585b434 Update some missing ADOPTERS
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-10-20 09:27:26 +01:00
32b828b25e Update ADOPTERS.md
Signed-off-by: Alex Ellis <alexellis2@gmail.com>
2022-10-13 08:54:23 +01:00
bb163760ff HelloSafe
Signed-off-by: Simon Renault <94172348+SimonRenault86@users.noreply.github.com>
2022-10-11 20:42:14 +01:00
1a00a55c77 Use write interceptor from faas-provider
We now have two write interceptors, with one moved into
faas-provider. This commit makes the gateway use the new
external package and deletes its own.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alex@openfaas.com>
2022-09-29 20:36:40 +01:00
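
A rough sketch of how a handler can be wrapped with the shared interceptor; the withTiming name is illustrative, while the gateway's own version is MakeNotifierWrapper, shown later in this changeset:

```go
package handlers

import (
	"log"
	"net/http"
	"time"

	"github.com/openfaas/faas-provider/httputil"
)

// withTiming is an illustrative wrapper (not the gateway's API) showing how the
// interceptor from faas-provider captures the status code written by the next
// handler, so it can be reported once the request has completed.
func withTiming(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		started := time.Now()

		writer := httputil.NewHttpWriteInterceptor(w)
		next(writer, r)

		log.Printf("%s %s - [%d] - %.4fs",
			r.Method, r.URL.String(), writer.Status(), time.Since(started).Seconds())
	}
}
```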
bc2eeff467 Improve errors when backend doesn't return JSON
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2022-09-21 09:09:48 +01:00
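
A small sketch of the approach, using a trimmed-down FunctionStatus type to stay self-contained; the decodeFunctions helper is illustrative and not the gateway's API:

```go
package metrics

import (
	"encoding/json"
	"fmt"
)

// FunctionStatus is a stand-in for the faas-provider type, kept minimal so the
// example is self-contained.
type FunctionStatus struct {
	Name     string `json:"name"`
	Replicas uint64 `json:"replicas"`
}

// decodeFunctions is an illustrative helper (not the gateway's API): when the
// backend's body fails to parse as JSON, return an error that includes the raw
// body so the real cause is not hidden behind the parser message alone.
func decodeFunctions(body []byte) ([]FunctionStatus, error) {
	var functions []FunctionStatus
	if err := json.Unmarshal(body, &functions); err != nil {
		return nil, fmt.Errorf("error unmarshalling response: %s, error: %s", string(body), err)
	}
	return functions, nil
}
```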
887c804254 Improve error message when unable to list functions
Related to: #1022

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2022-09-21 08:53:04 +01:00
9da2ec244f Update ISSUE_TEMPLATE.md
Signed-off-by: Alex Ellis <alexellis2@gmail.com>
2022-09-13 09:02:04 +01:00
8e711b3a0c Use Desired Replicas when scaling from zero
During some exploratory testing, I ran into an issue where
the gateway would attempt to scale a deployment from zero
replicas to min, despite there already being min replicas.

Why?

The scaling logic was looking for Available replicas when
it should have looked for Desired replicas. So when a
deployment had zero ready replicas due to readiness checks
failing, the gateway was attempting to scale from zero
to min.

This logic has been corrected and separated from the
holding pattern where the gateway waits for a ready
replica.

Tested with KinD and an edited function which had a
failing readiness probe and no ready replicas. As
desired, the gateway did not scale to min.

However, when setting desired replicas to zero, the
gateway did scale up as expected.

This change also modifies all print statements for
"seconds" to use 4 decimal places instead of the
default, which produced a longer, more verbose string
in the logs.

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
2022-09-08 11:21:29 +01:00
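
The heart of the fix, sketched with a stand-in struct whose field names mirror the gateway's ServiceQueryResponse; needsScaleUp is illustrative, not the gateway's API:

```go
package scaling

import "log"

// replicaCounts is a stand-in for the fields of ServiceQueryResponse that
// matter here, kept minimal so the example is self-contained.
type replicaCounts struct {
	Replicas          uint64 // desired replicas reported by the provider
	AvailableReplicas uint64 // replicas that are ready to serve traffic
}

// needsScaleUp is an illustrative check for the rule this commit introduces:
// only request a 0 => N scale-up when the *desired* count is zero. If replicas
// are desired but none are available yet (for example, a failing readiness
// probe), the gateway holds and waits for a ready replica instead of issuing
// another scale-up.
func needsScaleUp(c replicaCounts) bool {
	return c.Replicas == 0
}

func describe(c replicaCounts) {
	switch {
	case needsScaleUp(c):
		log.Println("scale from zero to minReplicas")
	case c.AvailableReplicas == 0:
		log.Println("desired replicas exist; wait for one to become ready")
	default:
		log.Println("function is ready to serve traffic")
	}
}
```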
28 changed files with 358 additions and 264 deletions

View File

@ -7,6 +7,9 @@
<!-- How is this affecting you? What task are you trying to accomplish? -->
## Why do you need this?
## Who is this for?
What company is this for? Are you listed in the [ADOPTERS.md](https://github.com/openfaas/faas/blob/master/ADOPTERS.md) file?
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behaviour
@ -20,7 +23,7 @@
## Are you a GitHub Sponsor (Yes/No?)
<!--- Given this request for help, how are you supporting the project? -->
<!-- Issues created by customers or monthly sponsors get priority -->
Check at: https://github.com/sponsors/openfaas
- [ ] Yes

View File

@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.18.x]
go-version: [1.19.x]
steps:
- uses: actions/checkout@master
with:
@ -23,12 +23,17 @@ jobs:
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Get TAG
id: get_tag
run: echo ::set-output name=TAG::latest-dev
- name: Get git commit
id: get_git_commit
run: echo "GIT_COMMIT=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Get version
id: get_version
run: echo "VERSION=$(git describe --tags --dirty)" >> $GITHUB_ENV
- name: Get Repo Owner
id: get_repo_owner
run: echo ::set-output name=repo_owner::$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')
run: echo "REPO_OWNER=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" > $GITHUB_ENV
- name: Build ${{ matrix.svc }}
uses: docker/build-push-action@v2
with:
@ -37,18 +42,17 @@ jobs:
outputs: "type=image,push=false"
platforms: linux/amd64,linux/arm/v7,linux/arm64
build-args: |
VERSION=${{ steps.get_tag.outputs.TAG }}
VERSION=${{ env.TAG }}
GIT_COMMIT=${{ github.sha }}
tags: |
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/gateway:${{ steps.get_tag.outputs.TAG }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/gateway:${{ github.sha }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/gateway:latest
ghcr.io/${{ env.REPO_OWNER }}/gateway:${{ github.sha }}
ghcr.io/${{ env.REPO_OWNER }}/gateway:latest
build-auth-plugins:
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.19.x]
svc: [
basic-auth
]
@ -60,12 +64,17 @@ jobs:
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Get TAG
id: get_tag
run: echo ::set-output name=TAG::latest-dev
- name: Get git commit
id: get_git_commit
run: echo "GIT_COMMIT=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Get version
id: get_version
run: echo "VERSION=$(git describe --tags --dirty)" >> $GITHUB_ENV
- name: Get Repo Owner
id: get_repo_owner
run: echo ::set-output name=repo_owner::$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')
run: echo "REPO_OWNER=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" > $GITHUB_ENV
- name: Build ${{ matrix.svc }}
uses: docker/build-push-action@v2
with:
@ -74,6 +83,5 @@ jobs:
outputs: "type=image,push=false"
platforms: linux/amd64,linux/arm/v7,linux/arm64
tags: |
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/${{ matrix.svc }}:${{ steps.get_tag.outputs.TAG }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/${{ matrix.svc }}:${{ github.sha }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/${{ matrix.svc }}:latest
ghcr.io/${{ env.REPO_OWNER }}/${{ matrix.svc }}:${{ github.sha }}
ghcr.io/${{ env.REPO_OWNER }}/${{ matrix.svc }}:latest

View File

@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.18.x]
go-version: [1.19.x]
steps:
- uses: actions/checkout@master
with:
@ -24,14 +24,23 @@ jobs:
uses: docker/login-action@v1
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.DOCKER_PASSWORD }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: Get TAG
id: get_tag
run: echo ::set-output name=TAG::${GITHUB_REF#refs/tags/}
run: echo TAG=${GITHUB_REF#refs/tags/} >> $GITHUB_ENV
- name: Get git commit
id: get_git_commit
run: echo "GIT_COMMIT=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Get version
id: get_version
run: echo "VERSION=$(git describe --tags --dirty)" >> $GITHUB_ENV
- name: Get Repo Owner
id: get_repo_owner
run: echo ::set-output name=repo_owner::$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')
run: echo "REPO_OWNER=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" > $GITHUB_ENV
- name: Publish ${{ matrix.svc }}
uses: docker/build-push-action@v2
with:
@ -40,17 +49,18 @@ jobs:
outputs: "type=registry,push=true"
platforms: linux/amd64,linux/arm/v7,linux/arm64
build-args: |
VERSION=${{ steps.get_tag.outputs.TAG }}
VERSION=${{ env.TAG }}
GIT_COMMIT=${{ github.sha }}
tags: |
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/gateway:${{ steps.get_tag.outputs.TAG }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/gateway:${{ github.sha }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/gateway:latest
ghcr.io/${{ env.REPO_OWNER }}/gateway:${{ github.sha }}
ghcr.io/${{ env.REPO_OWNER }}/gateway:${{ env.TAG }}
ghcr.io/${{ env.REPO_OWNER }}/gateway:latest
publish-auth-plugins:
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.17.x]
go-version: [1.19.x]
svc: [
basic-auth
]
@ -62,18 +72,27 @@ jobs:
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Get TAG
id: get_tag
run: echo ::set-output name=TAG::${GITHUB_REF#refs/tags/}
- name: Login to Docker Registry
uses: docker/login-action@v1
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.DOCKER_PASSWORD }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: Get TAG
id: get_tag
run: echo TAG=${GITHUB_REF#refs/tags/} >> $GITHUB_ENV
- name: Get git commit
id: get_git_commit
run: echo "GIT_COMMIT=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Get version
id: get_version
run: echo "VERSION=$(git describe --tags --dirty)" >> $GITHUB_ENV
- name: Get Repo Owner
id: get_repo_owner
run: echo ::set-output name=repo_owner::$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')
run: echo "REPO_OWNER=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" > $GITHUB_ENV
- name: Publish ${{ matrix.svc }}
uses: docker/build-push-action@v2
with:
@ -82,6 +101,6 @@ jobs:
outputs: "type=registry,push=true"
platforms: linux/amd64,linux/arm/v7,linux/arm64
tags: |
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/${{ matrix.svc }}:${{ steps.get_tag.outputs.TAG }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/${{ matrix.svc }}:${{ github.sha }}
ghcr.io/${{ steps.get_repo_owner.outputs.repo_owner }}/${{ matrix.svc }}:latest
ghcr.io/${{ env.REPO_OWNER }}/${{ matrix.svc }}:${{ github.sha }}
ghcr.io/${{ env.REPO_OWNER }}/${{ matrix.svc }}:${{ env.TAG }}
ghcr.io/${{ env.REPO_OWNER }}/${{ matrix.svc }}:latest

View File

@ -1,16 +1,29 @@
# Adopters
This list shows adopters of OpenFaaS. If you're using OpenFaaS in some way, then please get in touch.
This list shows adopters of OpenFaaS. If you're using OpenFaaS in some way, then please add your team and use-case to this file
## Further resources
## How else can you support this project?
Become a GitHub Sponsor - either as an individual practitioner using or introducing OpenFaaS, or as a company, or both.
### Individual or company sponsor
You can sponsor OpenFaaS on GitHub, most users do this using their personal accounts.
You'll show up as a sponsor on issues and PRs, which is going to make it more likely you'll get a timely response and help from the community.
* [Sponsor OpenFaaS on GitHub](https://github.com/openfaas/sponsors)
Support & OpenFaaS PRO
### Help yourself, with the manual for OpenFaaS
* Buy [OpenFaaS PRO or Enterprise Subscription](https://openfaas.com/support) from OpenFaaS Ltd
Help yourself learn and grow, whilst supporting us.
* [Serverless For Everyone Else](https://openfaas.gumroad.com/l/serverless-for-everyone-else) is the official manual for OpenFaaS, brimming with examples written in Node.js
* [Everyday Go](https://openfaas.gumroad.com/l/everyday-golang) is the official reference material from the OpenFaaS founder Alex Ellis for learning Go, and writing functions in Go.
### Running OpenFaaS in production?
See how Community Edition (CE) which you are using now, compares to OpenFaaS Pro, which is designed for commercial use:
[Overview and comparison of OpenFaaS Pro](https://docs.openfaas.com/openfaas-pro/introduction/)
Tell us more:
@ -19,8 +32,14 @@ Tell us more:
## Adopters list (alphabetical)
* [911 Security](https://www.911security.com/) - "We migrated our Python functions from AWS Lambda using automation, and now run them in airgapped environments for customers using OpenFaaS and arkade" - Scott Creager
* [Axa France](https://www.axa.fr) - Axa uses OpenFaaS for inference and predictions at scale using ML models - Pierre-Henri Gache
* [3fs](https://3fs.si) - 3fs is using OpenFaaS for automating repetitive development tasks like automatic rebasing, vendoring of dependencies on merge requests and many other things that make our developers lives easier
* [Baidu](https://baidu.com) - A team within Baidu provides ML models to customers which are hosted on OpenFaaS - He Sun.
* [BT](https://www.bt.com) - BT are using OpenFaaS to enable collaboration between data-scientists and developers. The teams are going from 3-years to build and deliver a PoC, to 3 months. See: [KubeCon video](https://www.youtube.com/watch?v=y77HlN2Fa-w)
* [Breu](https://breu.io) - Breu is using OpenFaaS to build an end user monitoring solution for hybrid cloud.
@ -49,10 +68,14 @@ Tell us more:
* [Fonix Telematics](https://fonixtelematics.com/) - "We are using OpenFaaS to build our new generation of APIs."
* [FTI Consulting](https://www.fticonsulting.com/) - "We've built a cloud-based analytical framework using Netflix Conductor for the workflow engine and OpenFaaS for our serverless function implementation, where each function can be called from a workflow. We've currently deployed several dozen OpenFaaS functions to our on-premise Kubernetes clusters" - Jason Cullison
* [GalaxyCard](https://www.galaxycard.in/) - "GalaxyCard is a happy user of OpenFaaS"
* [GMO Internet](https://www.gmo.jp/en/)
* [HelloSafe](https://hellosafe.ca/en/) - "HelloSafe is one of the leading website of financial products comparison in Canada. We're using OpenFaas on our production applications."
* [HPE](https://www.hpe.com/) - HPE Ezmeral is a purpose-built, hybrid cloud platform for data science and analytics workloads.
* [HM Planning Inspectorate](http://www.planninginspectorate.gov.uk) - HM Planning Inspectorate is the UK Government body responsible for dealing with planning appeals, national infrastructure planning applications, examinations of local plans and other specialist casework in England and Wales. OpenFaaS eased the communication between the new planning appeals website and the monolithic back-office application and allowed easy retries in the event of network failure.
@ -65,6 +88,8 @@ Tell us more:
* [Intraffic](https://www.intraffic.nl/) - "Using OpenFaaS for integration and callable AI/ML models for asset management."
* [Klar MX](https://klar.mx) - "Cuenta con Klar" - Klar provides access to credit cards in Mexico for those who have issues with credit history.
* [LivePerson](https://www.liveperson.com/) - LivePerson extended their chat platform by allowing customers to write functions to execute in client chat flows. See [KubeCon video](https://www.youtube.com/watch?v=bt06Z28uzPA)
* [metaspan](https://metaspan.com) - "End-to-end blockchain solutions". metaspan ported all api endpoints from monolith express.js/sails.js to openfaas micro-functions.
@ -73,18 +98,30 @@ Tell us more:
* [Naamio](https://naamio.cloud/) - "Naamio are providing an event-based serverless API to developers to enable rapid development of decentralized applications on the cloud. By providing progressive enhancement within the developer tools, OpenFaaS has enabled Naamio to go from clustered Docker container deployments with REST APIs using Kubernetes, to load balanced deployable functions over an open event queue interface. It was key to enabling a standard multilingual development kit across cloud providers."
* [Neoskop](https://www.neoskop.de) - Neoskop is using OpenFAAS in production to provide our developers with a self-service platform for backend functionality and thereby our customers agile and rapid feature development.
* [Nexylan](nexylan.com/) - "We are a French professional hoster that use OpenFaaS in dev and production inside our private extranet. We use OpenFaaS to split our historic monolith project and then simplify development/maintainability and speed up development times."
* [NGC](https://www.ngcsoftware.com/)
* [Northwestern Mutual](https://www.northwesternmutual.com/) - "OpenFaaS is a great platform and Alex and team are a great resource. They will work very diligently with your team to help you get the most out of OpenFaaS, and he will always be able to provide valuable insight into issues that a team might face while developing software for the cloud." Kieran Gordon
* [P. A. Media Group](https://pamediagroup.com/) - "We use OpenFaaS to orchestrate Terraform and Jenkins jobs for our internal infrastructure provisioning" - Rob Stonham
* [PathfinderZA](https://www.pathfinderza.com) - PathfinderZA is an IOT security firm selling underground sensors that transmits warnings to users if a person or vehicle goes past it. We're using OpenFaas, with Dockerised functions written in Java (Quarkus) and Rust (Actix/Rocket-RS).
* [Pentium Network](https://www.pentium.network/)
* [PiperCI](https://piperci.dreamer-labs.net) - PiperCI is a task management framework that provides users with a standard library of CI/CD-centric tasks and the [OpenFaas](https://www.openfaas.com/) and [Kubernetes](https://kubernetes.io/) based infrastructure required to run them. PiperCI can be used in conjunction with existing CI/CD orchestrators like GitlabCI, Jenkins, TravisCI, or others to create a more scalable, robust, and functional CI/CD system.
* [Politics Rewired](https://www.politicsrewired.com/) - Politics Rewired uses OpenFaaS to enable organisation of political campaigns and sending of SMS message at scale using functions.
* [Press Association](https://www.pressassociation.com/) - Press Association is using OpenFaaS in development and production as part of our deployment pipeline.
* [Pypestream](https://www.pypestream.com) - "We have just migrated 50 of our customers from Kubeless, which is now deprecated to OpenFaaS" - Antoine Hamon
* [Outsystems](https://outsystems.com) - "In my team, we're using OpenFaaS to help the orchestration of our CD pipelines. From a high-level perspective, we have a NATS cluster and the OpenFaaS functions subscribing to NATS topics and reacting to them. Our functions are doing some work related to the pipeline, like saving data to the database, sending Slack messages, or just returning something from the database." (Marco Alves)
* [Ratehub](https://www.ratehub.ca) - Ratehub is Canada's leading personal finance comparison site. We're breaking apart our monolithic PHP and Java codebases into Node, PHP and Java OpenFaaS functions; there's not much that we don't plan on moving to FaaS!
* [Rapid Circle](https://www.rapidcircle.com) is using OpenFaaS within a Azure Kubernetes cluster to host a large amount of micro-services aiming at automating core activities of their Microsoft 365 Cloud Managed Services offering. Robustness, speed, scalable and simplicity have been major reasons to favor OpenFaaS over Azure Functions.
@ -95,6 +132,8 @@ Tell us more:
* [SURFsara IoT Platform for Sensemakers](https://github.com/sensemakersamsterdam/sensemakers-iot-platform) - The SURFsara IoT Platform for Sensemakers is a platform for storing, monitoring, visualising and analyzing sensor data. It is a collaboration platform designed to host multiple projects carried by the Sensemakers community. In addition, there is a project dedicated to experimentation, available for everyone to use. All data within the platform is shared. OpenFaaS serverless functions give access to the platform through an HTTP entry point, take care of the metadata extraction and enable custom event-driven actions.
* [skyslope.com](https://skyslope.com) - "We process millions of documents per day and moved from AWS Lambda to Kubernetes. We estimate that OpenFaaS has saved us 60,000 USD each year over the past three years that we've been running it in our business" - Derrick Martinez
* [Transmute Industries](https://www.transmute.industries/) - "At Transmute we use OpenFaaS to develop identity and access integrations leveraging decentralized identities that integrate with legacy IAM systems. OpenFaaS helps Transmute and our customers avoid vendor lock in, encourages modularity, and helps us rapidly develop and release integrations for customers."
* [Traversals](https://traversals.com/) - At Traversals, we use OpenFaaS for processing of incoming data. We take benefit from various programming languages available in OpenFaaS.

View File

@ -385,13 +385,7 @@ The [community.md](https://github.com/openfaas/faas/blob/master/community.md) fi
### Roadmap
* See the [2019 Project Update](https://www.openfaas.com/blog/project-update/)
* Browse open issues in [openfaas/faas](https://github.com/openfaas/faas/issues)
* Join the [2020 Roadmap on Trello](https://trello.com/invite/b/5OpMyrBP/ade103a10ae1e38eb5d3eee7955260a9/2020-openfaas-roadmap)
For commercial users, please feel free to ask about support, backlog prioritisation and feature development. Email sales@openfaas.com.
See also: [OpenFaaS Pro](https://docs.openfaas.com/openfaas-pro/introduction/)
## License

View File

@ -20,7 +20,7 @@ OpenFaaS&reg; makes it easy for developers to deploy event-driven functions and
* Portable: runs on existing hardware or public/private cloud by leveraging [Kubernetes](https://github.com/openfaas/faas-netes)
* [CLI](http://github.com/openfaas/faas-cli) available with YAML format for templating and defining functions
* Auto-scales as demand increases [including to zero](https://docs.openfaas.com/architecture/autoscaling/)
* [Commercially supported distribution by the team behind OpenFaaS](https://openfaas.com/support/)
* [Commercially supported Pro distribution by the team behind OpenFaaS](https://openfaas.com/pricing/)
**Want to dig deeper into OpenFaaS?**
@ -153,9 +153,9 @@ Have you written a blog about OpenFaaS? Do you have a speaking event? Send a Pul
* [Read blogs/articles and find events about OpenFaaS](https://github.com/openfaas/faas/blob/master/community.md)
### Roadmap and contributing
### Contributing
OpenFaaS is written in Golang and is MIT licensed - contributions are welcomed whether that means providing feedback, testing existing and new feature or hacking on the source.
OpenFaaS Community Edition is written in Golang and is MIT licensed. Various types of contributions are welcomed whether that means providing feedback, testing existing and new feature or hacking on the source code.
#### How do I become a contributor?

View File

@ -1,34 +0,0 @@
# Roadmap
## GitHub projects / source code
You can find a detailed breakdown of the [openfaas](https://github.com/openfaas/) and [openfaas-incubator](https://github.com/openfaas-incubator/) organisations and projects [in the docs](https://docs.openfaas.com/contributing/get-started/).
## Feature overview
For an overview see [the docs](https://docs.openfaas.com/) or see a [feature comparison between OpenFaaS and OpenFaaS Cloud](https://docs.openfaas.com/openfaas-cloud/intro/).
### OpenFaaS
OpenFaaS is a platform for building Serverless Functions and/or deploying existing microservices. Any programming language or binary is supported with a range of [templates](https://github.com/openfaas/templates) available to help you get started.
The core services which make up OpenFaaS need to run on a Linux master, but Windows worker nodes can be added to your cluster to run Windows binaries and functions.
Platforms: the x86_64 platform has first class support, with 32-bit arm and 64-bit arm provided on a best-effort basis.
Orchestrators: there is official support for Kubernetes & faasd (containerd) with the community providing support for AWS Fargate, Hashicorp Nomad and others.
### OpenFaaS Cloud
OpenFaaS Cloud is a multi-user distribution of OpenFaaS with a built-in CI/CD pipeline, OAuth delegation, a dashboard and a git-based workflow with public/private GitHub and self-hosted GitLab.
## What is coming next?
Proposals and feature requests are tracked [on the 2020 Roadmap on Trello](https://trello.com/invite/b/5OpMyrBP/ade103a10ae1e38eb5d3eee7955260a9/2020-openfaas-roadmap) and through the GitHub issue tracker of each project in the two organisations.
* [openfaas](https://github.com/openfaas/)
* [openfaas-incubator](https://github.com/openfaas-incubator/)
## Contributing
Please see [CONTRIBUTING.md](https://github.com/openfaas/faas/blob/master/CONTRIBUTING.md).

View File

@ -1,6 +1,6 @@
FROM --platform=${BUILDPLATFORM:-linux/amd64} ghcr.io/openfaas/license-check:0.4.0 as license-check
FROM --platform=${BUILDPLATFORM:-linux/amd64} ghcr.io/openfaas/license-check:0.4.1 as license-check
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.18 as build
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19 as build
ENV GO111MODULE=off
ENV CGO_ENABLED=0
@ -26,7 +26,7 @@ RUN CGO_ENABLED=${CGO_ENABLED} GOOS=${TARGETOS} GOARCH=${TARGETARCH} go test -v
RUN CGO_ENABLED=${CGO_ENABLED} GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build \
--ldflags "-s -w" -a -installsuffix cgo -o handler .
FROM --platform=${TARGETPLATFORM:-linux/amd64} alpine:3.16.2 as ship
FROM --platform=${TARGETPLATFORM:-linux/amd64} alpine:3.17 as ship
# Add non-root user
RUN addgroup -S app && adduser -S -g app app \
&& mkdir -p /home/app \

View File

@ -11,7 +11,7 @@
// programmers to add context to the failure path in their code in a way
// that does not destroy the original value of the error.
//
// Adding context to an error
// # Adding context to an error
//
// The errors.Wrap function returns a new error that adds context to the
// original error by recording a stack trace at the point Wrap is called,
@ -27,7 +27,7 @@
// operations: annotating an error with a stack trace and with a message,
// respectively.
//
// Retrieving the cause of an error
// # Retrieving the cause of an error
//
// Using errors.Wrap constructs a stack of errors, adding context to the
// preceding error. Depending on the nature of the error it may be necessary
@ -52,7 +52,7 @@
// Although the causer interface is not exported by this package, it is
// considered a part of its stable public interface.
//
// Formatted printing of errors
// # Formatted printing of errors
//
// All error values returned from this package implement fmt.Formatter and can
// be formatted by the fmt package. The following verbs are supported:
@ -63,7 +63,7 @@
// %+v extended format. Each Frame of the error's StackTrace will
// be printed in detail.
//
// Retrieving the stack trace of an error or wrapper
// # Retrieving the stack trace of an error or wrapper
//
// New, Errorf, Wrap, and Wrapf record a stack trace at the point they are
// invoked. This information can be retrieved with the following interface:

View File

@ -1,3 +1,4 @@
//go:build go1.13
// +build go1.13
package errors

View File

@ -1,6 +1,6 @@
FROM --platform=${BUILDPLATFORM:-linux/amd64} ghcr.io/openfaas/license-check:0.4.0 as license-check
FROM --platform=${BUILDPLATFORM:-linux/amd64} ghcr.io/openfaas/license-check:0.4.1 as license-check
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.18 as build
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19 as build
ENV GO111MODULE=on
ENV CGO_ENABLED=0
@ -45,7 +45,7 @@ RUN CGO_ENABLED=${CGO_ENABLED} GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build --
-X github.com/openfaas/faas/gateway/types.Arch=${TARGETARCH}" \
-a -installsuffix cgo -o gateway .
FROM --platform=${TARGETPLATFORM:-linux/amd64} alpine:3.16.1 as ship
FROM --platform=${TARGETPLATFORM:-linux/amd64} alpine:3.17 as ship
LABEL org.label-schema.license="MIT" \
org.label-schema.vcs-url="https://github.com/openfaas/faas" \

View File

@ -7,7 +7,7 @@ SERVER?=docker.io
OWNER?=alexellis2
NAME=gateway
.PHONY: build-local
.PHONY: local-docker
build-local:
@echo $(SERVER)/$(OWNER)/$(NAME):$(TAG) \
&& docker buildx create --use --name=multiarch --node multiarch \
@ -17,8 +17,8 @@ build-local:
--output "type=docker,push=false" \
--tag $(SERVER)/$(OWNER)/$(NAME):$(TAG) .
.PHONY: build-push
build-push:
.PHONY: push-docker
push-docker:
@echo $(SERVER)/$(OWNER)/$(NAME):$(TAG) \
&& docker buildx create --use --name=multiarch --node multiarch \
&& docker buildx build \

View File

@ -1,6 +1,6 @@
module github.com/openfaas/faas/gateway
go 1.17
go 1.18
require (
github.com/docker/distribution v2.8.1+incompatible

View File

@ -5,7 +5,7 @@ package handlers
import "net/http"
//HealthzHandler healthz hanlder for mertics server
// HealthzHandler healthz hanlder for mertics server
func HealthzHandler(w http.ResponseWriter, r *http.Request) {
switch r.Method {

View File

@ -6,50 +6,21 @@ package handlers
import (
"net/http"
"time"
"github.com/openfaas/faas-provider/httputil"
)
// MakeNotifierWrapper wraps a http.HandlerFunc in an interceptor to pass to HTTPNotifier
func MakeNotifierWrapper(next http.HandlerFunc, notifiers []HTTPNotifier) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
then := time.Now()
writer := newWriteInterceptor(w)
next(&writer, r)
url := r.URL.String()
writer := httputil.NewHttpWriteInterceptor(w)
next(writer, r)
for _, notifier := range notifiers {
notifier.Notify(r.Method, url, url, writer.Status(), "completed", time.Since(then))
}
}
}
func newWriteInterceptor(w http.ResponseWriter) writeInterceptor {
return writeInterceptor{
w: w,
}
}
type writeInterceptor struct {
CapturedStatusCode int
w http.ResponseWriter
}
func (c *writeInterceptor) Status() int {
if c.CapturedStatusCode == 0 {
return http.StatusOK
}
return c.CapturedStatusCode
}
func (c *writeInterceptor) Header() http.Header {
return c.w.Header()
}
func (c *writeInterceptor) Write(data []byte) (int, error) {
return c.w.Write(data)
}
func (c *writeInterceptor) WriteHeader(code int) {
c.CapturedStatusCode = code
c.w.WriteHeader(code)
}

View File

@ -82,6 +82,6 @@ type LoggingNotifier struct {
// Notify the LoggingNotifier about a request
func (LoggingNotifier) Notify(method string, URL string, originalURL string, statusCode int, event string, duration time.Duration) {
if event == "completed" {
log.Printf("Forwarded [%s] to %s - [%d] - %fs seconds", method, originalURL, statusCode, duration.Seconds())
log.Printf("Forwarded [%s] to %s - [%d] - %.4fs", method, originalURL, statusCode, duration.Seconds())
}
}

View File

@ -6,7 +6,6 @@ package handlers
import (
"fmt"
"io/ioutil"
"log"
"net/http"
"net/url"
"strings"
@ -19,8 +18,6 @@ import (
"github.com/openfaas/faas/gateway/scaling"
)
const queueAnnotation = "com.openfaas.queue"
// MakeQueuedProxy accepts work onto a queue
func MakeQueuedProxy(metrics metrics.MetricOptions, queuer ftypes.RequestQueuer, pathTransformer middleware.URLPathTransformer, defaultNS string, functionQuery scaling.FunctionQuery) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
@ -44,12 +41,6 @@ func MakeQueuedProxy(metrics metrics.MetricOptions, queuer ftypes.RequestQueuer,
vars := mux.Vars(r)
name := vars["name"]
queueName, err := getQueueName(name, functionQuery)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
req := &ftypes.QueueRequest{
Function: name,
Body: body,
@ -59,11 +50,6 @@ func MakeQueuedProxy(metrics metrics.MetricOptions, queuer ftypes.RequestQueuer,
Header: r.Header,
Host: r.Host,
CallbackURL: callbackURL,
QueueName: queueName,
}
if len(queueName) > 0 {
log.Printf("Queueing %s to: %s\n", name, queueName)
}
if err = queuer.Queue(req); err != nil {
@ -92,21 +78,6 @@ func getCallbackURLHeader(header http.Header) (*url.URL, error) {
return callbackURL, nil
}
func getQueueName(name string, fnQuery scaling.FunctionQuery) (queueName string, err error) {
fn, ns := getNameParts(name)
annotations, err := fnQuery.GetAnnotations(fn, ns)
if err != nil {
return "", err
}
queueName = ""
if v := annotations[queueAnnotation]; len(v) > 0 {
queueName = v
}
return queueName, err
}
func getNameParts(name string) (fn, ns string) {
fn = name
ns = ""

View File

@ -48,6 +48,7 @@ func MakeScalingHandler(next http.HandlerFunc, scaler scaling.FunctionScaler, co
return
}
log.Printf("[Scale] function=%s.%s 0=>N timed-out after %fs\n", functionName, namespace, res.Duration.Seconds())
log.Printf("[Scale] function=%s.%s 0=>N timed-out after %.4fs\n",
functionName, namespace, res.Duration.Seconds())
}
}

View File

@ -278,7 +278,7 @@ func main() {
log.Fatal(s.ListenAndServe())
}
//runMetricsServer Listen on a separate HTTP port for Prometheus metrics to keep this accessible from
// runMetricsServer Listen on a separate HTTP port for Prometheus metrics to keep this accessible from
// the internal network only.
func runMetricsServer() {
metricsHandler := metrics.PrometheusHandler()

View File

@ -34,19 +34,17 @@ func AddMetricsHandler(handler http.HandlerFunc, prometheusQuery PrometheusQuery
log.Printf("List functions responded with code %d, body: %s",
recorder.Code,
string(upstreamBody))
http.Error(w, "Metrics hander: unexpected status code retrieving functions from backend", http.StatusInternalServerError)
http.Error(w, string(upstreamBody), recorder.Code)
return
}
var functions []types.FunctionStatus
err := json.Unmarshal(upstreamBody, &functions)
if err != nil {
log.Printf("Metrics upstream error: %s", err)
log.Printf("Metrics upstream error: %s, value: %s", err, string(upstreamBody))
http.Error(w, "Error parsing metrics from upstream provider/backend", http.StatusInternalServerError)
http.Error(w, "Unable to parse list of functions from provider", http.StatusInternalServerError)
return
}
@ -63,8 +61,8 @@ func AddMetricsHandler(handler http.HandlerFunc, prometheusQuery PrometheusQuery
results, err := prometheusQuery.Fetch(url.QueryEscape(q))
if err != nil {
// log the error but continue, the mixIn will correctly handle the empty results.
log.Printf("Error querying Prometheus: %s\n", err.Error())
return
}
mixIn(&functions, results)
}
@ -72,7 +70,7 @@ func AddMetricsHandler(handler http.HandlerFunc, prometheusQuery PrometheusQuery
bytesOut, err := json.Marshal(functions)
if err != nil {
log.Printf("Error serializing functions: %s", err)
http.Error(w, "error writing response after adding metrics", http.StatusInternalServerError)
http.Error(w, "Error writing response after adding metrics", http.StatusInternalServerError)
return
}

View File

@ -5,6 +5,7 @@ import (
"log"
"net/http"
"net/http/httptest"
"strings"
"testing"
types "github.com/openfaas/faas-provider/types"
@ -55,6 +56,34 @@ func Test_PrometheusMetrics_MixedInto_Services(t *testing.T) {
}
}
func Test_MetricHandler_ForwardsErrors(t *testing.T) {
functionsHandler := func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusConflict)
w.Write([]byte("test error case"))
}
// explicitly set the query fetcher to nil because it should
// not be called when a non-200 response is returned from the
// functions handler, if it is called then the test will panic
handler := AddMetricsHandler(functionsHandler, nil)
rr := httptest.NewRecorder()
request, _ := http.NewRequest(http.MethodGet, "/system/functions", nil)
handler.ServeHTTP(rr, request)
if status := rr.Code; status != http.StatusConflict {
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusConflict)
}
if rr.Header().Get("Content-Type") != "text/plain; charset=utf-8" {
t.Errorf("Want 'text/plain; charset=utf-8' content-type, got: %s", rr.Header().Get("Content-Type"))
}
body := strings.TrimSpace(rr.Body.String())
if body != "test error case" {
t.Errorf("Want 'test error case', got: %q", body)
}
}
func Test_FunctionsHandler_ReturnsJSONAndOneFunction(t *testing.T) {
functionsHandler := makeFunctionsHandler()

View File

@ -144,7 +144,7 @@ func (e *Exporter) getHTTPClient(timeout time.Duration) http.Client {
}
func (e *Exporter) getFunctions(endpointURL url.URL, namespace string) ([]types.FunctionStatus, error) {
timeout := 3 * time.Second
timeout := 5 * time.Second
proxyClient := e.getHTTPClient(timeout)
endpointURL.Path = path.Join(endpointURL.Path, "/system/functions")
@ -170,10 +170,11 @@ func (e *Exporter) getFunctions(endpointURL url.URL, namespace string) ([]types.
return services, readErr
}
unmarshalErr := json.Unmarshal(bytesOut, &services)
if unmarshalErr != nil {
return services, unmarshalErr
if err := json.Unmarshal(bytesOut, &services); err != nil {
return services, fmt.Errorf("error unmarshalling response: %s, error: %s",
string(bytesOut), err)
}
return services, nil
}
@ -186,7 +187,7 @@ func (e *Exporter) getNamespaces(endpointURL url.URL) ([]string, error) {
get.SetBasicAuth(e.credentials.User, e.credentials.Password)
}
timeout := 3 * time.Second
timeout := 5 * time.Second
proxyClient := e.getHTTPClient(timeout)
res, err := proxyClient.Do(get)
@ -203,9 +204,8 @@ func (e *Exporter) getNamespaces(endpointURL url.URL) ([]string, error) {
return namespaces, readErr
}
unmarshalErr := json.Unmarshal(bytesOut, &namespaces)
if unmarshalErr != nil {
return namespaces, unmarshalErr
if err := json.Unmarshal(bytesOut, &namespaces); err != nil {
return namespaces, fmt.Errorf("error unmarshalling response: %s, error: %s", string(bytesOut), err)
}
return namespaces, nil
}

View File

@ -102,7 +102,7 @@ func (s ExternalServiceQuery) GetReplicas(serviceName, serviceNamespace string)
// log.Printf("GetReplicas [%s.%s] took: %fs", serviceName, serviceNamespace, time.Since(start).Seconds())
} else {
log.Printf("GetReplicas [%s.%s] took: %fs, code: %d\n", serviceName, serviceNamespace, time.Since(start).Seconds(), res.StatusCode)
log.Printf("GetReplicas [%s.%s] took: %.4fs, code: %d\n", serviceName, serviceNamespace, time.Since(start).Seconds(), res.StatusCode)
return emptyServiceQueryResponse, fmt.Errorf("server returned non-200 status code (%d) for function, %s, body: %s", res.StatusCode, serviceName, string(bytesOut))
}
@ -176,7 +176,8 @@ func (s ExternalServiceQuery) SetReplicas(serviceName, serviceNamespace string,
err = fmt.Errorf("error scaling HTTP code %d, %s", res.StatusCode, urlPath)
}
log.Printf("SetReplicas [%s.%s] took: %fs", serviceName, serviceNamespace, time.Since(start).Seconds())
log.Printf("SetReplicas [%s.%s] took: %.4fs",
serviceName, serviceNamespace, time.Since(start).Seconds())
return err
}

View File

@ -39,6 +39,8 @@ type FunctionScaleResult struct {
func (f *FunctionScaler) Scale(functionName, namespace string) FunctionScaleResult {
start := time.Now()
// First check the cache, if there are available replicas, then the
// request can be served.
if cachedResponse, hit := f.Cache.Get(functionName, namespace); hit &&
cachedResponse.AvailableReplicas > 0 {
return FunctionScaleResult{
@ -48,8 +50,10 @@ func (f *FunctionScaler) Scale(functionName, namespace string) FunctionScaleResu
Duration: time.Since(start),
}
}
getKey := fmt.Sprintf("GetReplicas-%s.%s", functionName, namespace)
// The wasn't a hit, or there were no available replicas found
// so query the live endpoint
getKey := fmt.Sprintf("GetReplicas-%s.%s", functionName, namespace)
res, err, _ := f.SingleFlight.Do(getKey, func() (interface{}, error) {
return f.Config.ServiceQuery.GetReplicas(functionName, namespace)
})
@ -71,16 +75,30 @@ func (f *FunctionScaler) Scale(functionName, namespace string) FunctionScaleResu
}
}
queryResponse := res.(ServiceQueryResponse)
// Check if there are available replicas in the live data
if res.(ServiceQueryResponse).AvailableReplicas > 0 {
return FunctionScaleResult{
Error: nil,
Available: true,
Found: true,
Duration: time.Since(start),
}
}
// Store the result of GetReplicas in the cache
queryResponse := res.(ServiceQueryResponse)
f.Cache.Set(functionName, namespace, queryResponse)
if queryResponse.AvailableReplicas == 0 {
// If the desired replica count is 0, then a scale up event
// is required.
if queryResponse.Replicas == 0 {
minReplicas := uint64(1)
if queryResponse.MinReplicas > 0 {
minReplicas = queryResponse.MinReplicas
}
// In a retry-loop, first query desired replicas, then
// set them if the value is still at 0.
scaleResult := types.Retry(func(attempt int) error {
res, err, _ := f.SingleFlight.Do(getKey, func() (interface{}, error) {
@ -91,19 +109,23 @@ func (f *FunctionScaler) Scale(functionName, namespace string) FunctionScaleResu
return err
}
// Cache the response
queryResponse = res.(ServiceQueryResponse)
f.Cache.Set(functionName, namespace, queryResponse)
// The scale up is complete because the desired replica count
// has been set to 1 or more.
if queryResponse.Replicas > 0 {
return nil
}
// Request a scale up to the minimum amount of replicas
setKey := fmt.Sprintf("SetReplicas-%s.%s", functionName, namespace)
if _, err, _ := f.SingleFlight.Do(setKey, func() (interface{}, error) {
log.Printf("[Scale %d] function=%s 0 => %d requested", attempt, functionName, minReplicas)
log.Printf("[Scale %d/%d] function=%s 0 => %d requested",
attempt, int(f.Config.SetScaleRetries), functionName, minReplicas)
if err := f.Config.ServiceQuery.SetReplicas(functionName, namespace, minReplicas); err != nil {
return nil, fmt.Errorf("unable to scale function [%s], err: %s", functionName, err)
@ -126,6 +148,9 @@ func (f *FunctionScaler) Scale(functionName, namespace string) FunctionScaleResu
}
}
}
// Holding pattern for at least one function replica to be available
for i := 0; i < int(f.Config.MaxPollCount); i++ {
res, err, _ := f.SingleFlight.Do(getKey, func() (interface{}, error) {
@ -150,8 +175,7 @@ func (f *FunctionScaler) Scale(functionName, namespace string) FunctionScaleResu
if queryResponse.AvailableReplicas > 0 {
log.Printf("[Scale] function=%s 0 => %d successful - %fs",
functionName, queryResponse.AvailableReplicas, totalTime.Seconds())
log.Printf("[Ready] function=%s waited for - %.4fs", functionName, totalTime.Seconds())
return FunctionScaleResult{
Error: nil,
@ -163,7 +187,6 @@ func (f *FunctionScaler) Scale(functionName, namespace string) FunctionScaleResu
time.Sleep(f.Config.FunctionPollInterval)
}
}
return FunctionScaleResult{
Error: nil,

View File

@ -0,0 +1,57 @@
package httputil
import (
"bufio"
"net"
"net/http"
)
func NewHttpWriteInterceptor(w http.ResponseWriter) *HttpWriteInterceptor {
return &HttpWriteInterceptor{w, 0}
}
type HttpWriteInterceptor struct {
http.ResponseWriter
statusCode int
}
func (c *HttpWriteInterceptor) Status() int {
if c.statusCode == 0 {
return http.StatusOK
}
return c.statusCode
}
func (c *HttpWriteInterceptor) Header() http.Header {
return c.ResponseWriter.Header()
}
func (c *HttpWriteInterceptor) Write(data []byte) (int, error) {
if c.statusCode == 0 {
c.WriteHeader(http.StatusOK)
}
return c.ResponseWriter.Write(data)
}
func (c *HttpWriteInterceptor) WriteHeader(code int) {
c.statusCode = code
c.ResponseWriter.WriteHeader(code)
}
func (c *HttpWriteInterceptor) Flush() {
fl := c.ResponseWriter.(http.Flusher)
fl.Flush()
}
func (c *HttpWriteInterceptor) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hj := c.ResponseWriter.(http.Hijacker)
return hj.Hijack()
}
func (c *HttpWriteInterceptor) CloseNotify() <-chan bool {
notifier, ok := c.ResponseWriter.(http.CloseNotifier)
if ok == false {
return nil
}
return notifier.CloseNotify()
}

View File

@ -0,0 +1,12 @@
package httputil
import (
"fmt"
"net/http"
)
// Errorf sets the response status code and write formats the provided message as the
// response body
func Errorf(w http.ResponseWriter, statusCode int, msg string, args ...interface{}) {
http.Error(w, fmt.Sprintf(msg, args...), statusCode)
}

View File

@ -40,6 +40,7 @@ github.com/nats-io/stan.go/pb
# github.com/openfaas/faas-provider v0.19.1
## explicit; go 1.17
github.com/openfaas/faas-provider/auth
github.com/openfaas/faas-provider/httputil
github.com/openfaas/faas-provider/types
# github.com/openfaas/nats-queue-worker v0.0.0-20220805080536-d1d72d857b1c
## explicit; go 1.16