Announcing NGINX Ingress Controller for Kubernetes Release 1.4.0

Original: https://www.nginx.com/blog/announcing-nginx-ingress-controller-for-kubernetes-release-1-4-0/

We are pleased to announce release 1.4.0 of the NGINX Ingress Controller for Kubernetes. This represents a milestone in the development of our supported solution for Ingress load balancing on Kubernetes platforms, including Amazon Elastic Container Service for Kubernetes (EKS), Diamanti, Google Kubernetes Engine (GKE), IBM Cloud Private, Microsoft Azure Kubernetes Service (AKS), Red Hat OpenShift, and others.

Release 1.4.0 includes:

- Support for TCP and UDP load balancing
- Extended Prometheus support for both NGINX Open Source and NGINX Plus
- Easy development of custom annotations
- Support for the new Random with Two Choices load-balancing algorithm

The complete changelog for release 1.4.0, including bug fixes, improvements, and changes, is available on GitHub.

From this release onward, we will also make an “edge release” available as nginx/nginx-ingress:edge. Built from the latest commit on the master branch, this release is intended for users who wish to experiment with the latest Ingress controller features in a non‑production or non‑critical environment.

What Is the NGINX Ingress Controller for Kubernetes?

The NGINX Ingress controller for Kubernetes is a daemon that runs alongside NGINX Open Source or NGINX Plus instances in a Kubernetes environment. The daemon monitors Ingress resources – requests for external access to services deployed in Kubernetes. It then automatically configures NGINX or NGINX Plus to route and load balance traffic to these services.
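For context, a minimal sketch of an Ingress resource that the controller would act on might look like the following (the hostname, service name, and port are illustrative; the manifest uses the extensions/v1beta1 API current at the time of this release):

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: cafe.example.com          # external hostname routed by NGINX
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee-svc   # Kubernetes service that receives the traffic
          servicePort: 80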

Multiple NGINX Ingress controller implementations are available. The official NGINX implementation is high‑performance, production‑ready, and suitable for long‑term deployment. Compared to the community NGINX‑based offering, we focus more on maintaining stability across releases than on feature velocity. We provide full technical support to NGINX Plus subscribers at no additional cost, and NGINX Open Source users benefit from our focus on stability and supportability.

NGINX Ingress Controller 1.4.0 Features in Detail

Support for TCP and UDP Load Balancing

Kubernetes Ingress Resources are, by design, HTTP‑centric. They do not provide a natural way to configure load balancing for non‑HTTP protocols: TCP‑based protocols such as database protocols or MQTT, or UDP‑based protocols such as DNS or media streaming.

On the other hand, NGINX is widely used to load balance TCP connections and UDP sessions, alongside HTTP and related protocols. TCP and UDP load balancing is configured in the stream{} block of the NGINX configuration. In release 1.4.0, you can now use the new stream-snippets ConfigMap key to insert configuration into this block.

The configuration differs slightly between NGINX Open Source and NGINX Plus: the example below uses the resolve parameter (for DNS‑based re‑resolution of the service name) and the status_zone directive (for metrics collection), both of which are available only in NGINX Plus. With NGINX Open Source, you would instead list the upstream server addresses explicitly.

The following example illustrates how to configure an NGINX Plus‑based Ingress controller to load balance a UDP‑based protocol. The backend syslog service is reachable at the headless Service DNS name syslog-headless.default.svc.cluster.local.
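For illustration, such a headless Service might be defined as follows (the pod selector is an assumption; the named UDP port is what backs the _syslog._udp SRV lookup used in the snippet below):

kind: Service
apiVersion: v1
metadata:
  name: syslog-headless
  namespace: default
spec:
  clusterIP: None              # headless: DNS returns the individual endpoint addresses
  selector:
    app: syslog                # assumed pod label
  ports:
  - name: syslog               # port name used in the _syslog._udp SRV records
    protocol: UDP
    port: 514
    targetPort: 514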

A simple NGINX Plus configuration looks like the following:

# Use the cluster DNS service to resolve backend names, rechecking every 5 seconds
resolver kube-dns.kube-system.svc.cluster.local valid=5s;

upstream syslog-udp {
    zone syslog-udp-zone 64k;
    # Discover endpoints through the SRV records of the headless service
    server syslog-headless.default.svc.cluster.local service=_syslog._udp resolve;
}

server {
    listen 514 udp;           # accept syslog messages over UDP
    proxy_pass syslog-udp;
    proxy_responses 0;        # syslog clients do not expect a response
    status_zone syslog-udp;   # collect metrics for this server
}

You then embed this configuration in the Ingress controller’s ConfigMap, under the stream-snippets key:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  stream-snippets: |
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;

    upstream syslog-udp {
        zone syslog-udp-zone 64k;
        server syslog-headless.default.svc.cluster.local service=_syslog._udp resolve;
    }

    server {
        listen 514 udp;
        proxy_pass syslog-udp;
        proxy_responses 0;
        status_zone syslog-udp;
    }

You can of course embed any NGINX configuration under the stream-snippets ConfigMap key, so you have full access to the entire set of NGINX or NGINX Plus capabilities for managing TCP and UDP traffic, such as health checks, authentication, and custom access logging.
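For example, the following is a hedged sketch of a TCP snippet that adds an active health check and access logging. The upstream, port, port name, and log path are illustrative, and health_check and status_zone in the stream context are NGINX Plus features; the resolver defined in the earlier snippet is assumed to already be present in the stream{} block:

log_format mqtt-basic '$remote_addr [$time_local] $protocol $status '
                      '$bytes_sent $bytes_received $session_time';

upstream mqtt-backend {
    zone mqtt-backend-zone 64k;
    # Hypothetical headless service with a named TCP port "mqtt"
    server mqtt-headless.default.svc.cluster.local service=_mqtt._tcp resolve;
}

server {
    listen 1883;
    proxy_pass mqtt-backend;
    health_check;                                   # active health check (NGINX Plus only)
    access_log /var/log/nginx/mqtt-access.log mqtt-basic;
    status_zone mqtt-tcp;
}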

Extended Prometheus Support

NGINX Open Source

The Prometheus Exporter now supports NGINX Open Source as well as NGINX Plus. It can now query the stub_status API in NGINX Open Source and provide the API metrics to Prometheus.

The stub_status API is published locally on port 8080. If you wish to connect to the stub_status API remotely, you can use kubectl to port‑forward traffic to port 8080, as described in the updated installation documentation.
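For reference, the generated configuration is conceptually equivalent to a server block like the following minimal sketch; the exact listen address and location path are determined by the Ingress controller and may differ:

server {
    listen 8080;

    location /stub_status {
        stub_status;    # basic connection and request counters from NGINX Open Source
    }
}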

NGINX Plus

Accompanying the new support for TCP and UDP load balancing, the Prometheus Exporter has been updated to collect and export TCP/UDP metrics in NGINX Plus. These provide insights into TCP and UDP load‑balancing performance, health checks, server response times, and much more.

Note that to collect this data you must include the status_zone directive in your TCP/UDP (stream) configuration, as in the server{} block in the example above.

Easy Development of Custom Annotations

Annotations are used to add arbitrary additional data to a Kubernetes resource. The NGINX Ingress Controller recognizes a number of these annotations and uses them to define additional behavior, such as JWT validation or performance‑tuning settings. These annotations are built into the Ingress controller application; adding a new built‑in annotation means modifying the code and rebuilding the application.

In release 1.4.0, you can also specify the implementation of annotations in the template that is used to generate the NGINX configuration. This makes it much easier to enrich the NGINX configuration without having to implement the annotation in Go and rebuild the Ingress controller application. You can further use the custom templates introduced in release 1.3.0 to apply these annotation implementations to a running NGINX Ingress Controller.

This capability makes it much simpler to develop custom NGINX configuration templates for features such as caching or authentication. Your application teams can then enable and tune these features easily, using the annotations you have defined.
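As an illustration, an application team might annotate an Ingress resource like this (the annotation names and values are hypothetical and take effect only if your custom template implements them):

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: webapp-ingress
  annotations:
    # Hypothetical custom annotations, implemented by a modified configuration template
    custom.nginx.org/rate-limiting: "on"
    custom.nginx.org/rate-limiting-rate: "5r/s"
spec:
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-svc
          servicePort: 80

Your custom template checks for these annotation keys when rendering the NGINX configuration and emits the corresponding directives; the rate‑limit example referenced below walks through a complete implementation.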

For more information, check out the Ingress Controller Custom Annotations documentation, and see it in action with the Custom Annotations rate limit example.

Support for the New Random with Two Choices Load‑Balancing Algorithm

In NGINX Plus R16 and open source NGINX 1.15.1 we added a new method that is particularly suitable for distributed load balancers. The algorithm is referred to in the literature as “power of two choices”, because it was first described in Michael Mitzenmacher’s 1996 dissertation, The Power of Two Choices in Randomized Load Balancing. “Power of two choices” avoids the undesirable “herd” behavior that traditional best‑choice algorithms such as Least Connections can exhibit when there are multiple load balancers, each with incomplete and inconsistent views of the cluster.

In NGINX and NGINX Plus, “power of two choices” is implemented as a variation of the Random load‑balancing algorithm, so we also refer to it as Random with Two Choices.

Random with Two Choices is the new default load‑balancing method for the NGINX Ingress Controller for Kubernetes. You can change the load‑balancing method using the lb-method ConfigMap key or the nginx.org/lb-method annotation.
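For example, to switch the whole controller to Least Connections, you could set the lb-method key in the ConfigMap used earlier (a minimal sketch; the value follows NGINX’s load‑balancing directive names):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  lb-method: "least_conn"    # omit this key to keep the Random with Two Choices default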

The blog post NGINX and the “Power of Two Choices” Load‑Balancing Algorithm describes the algorithm in more detail, comparing it to other load‑balancing algorithms.

Additional Features

Getting Started with the Ingress Controller

If you’d like to find out more about the NGINX Ingress Controller for Kubernetes, check out these resources:

The NGINX Ingress Controller for Kubernetes supports both NGINX Open Source and NGINX Plus, and is a supported alternative to the community Ingress controller. A feature comparison for the two controllers is available here.

The main design goal of the NGINX Ingress Controller for Kubernetes is to maintain performance and compatibility across releases. We provide full technical support to NGINX Plus subscribers at no additional cost, and open source users also benefit from the focus on long‑term stability and supportability.

Retrieved by Nick Shadrin from nginx.com website.