Ingress

Ingress is a mechanism to allow the external world some visibility of Services in your Kubernetes cluster.

It is clearly worth a page on its own.

I've been following Craig Johnston at https://imti.co/web-cluster-ingress/.

Overview

Ingress gets a bit (read: a good deal) more complicated. But not that much.

Broadly, the Ingress controller, usually nginx, is given some service descriptions which tell it to map HTTP(S) requests to Services. It even handles multiple HTTP hostnames. Neat!

(Although if you don't use the correct HTTP Host: header, your request won't work!)
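
With curl, for instance, you can supply the header explicitly rather than relying on DNS (the hostname is the one we'll configure later; substitute a real node address for <node-ip>):

$ curl -H "Host: k8s-ingress.example.com" http://<node-ip>/status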

The one extra component is a fallback/default handler, called defaultbackend.

Maybe the one difference is that Craig Johnston uses a DaemonSet (rather than a Deployment), which forces an instance of the controller onto every (worker) node, including any you add later.

Although I originally followed the example fairly closely (modulo the usual API changes), I did have to update it for the now-mandatory IngressClass.

There are also two broad swathes to this: we need to instantiate the Ingress controller and then instantiate a Service to use the Ingress.

The Ingress Controller

Several steps here.

NameSpace

# tee 00-namespace.yml | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

Default Backend

Here we have both a Deployment and a Service:

# tee 01-backend.yml | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

When we try to access things we shouldn't, we'll get back an HTTP 404 response, seen as a default backend - 404 message from curl.
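
A quick sanity check that both the Deployment and the Service came up:

# kubectl -n ingress-nginx get deploy,svc default-http-backend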

ConfigMaps

Next we need to create three ConfigMaps: one for the nginx configuration and one each for the TCP and UDP services configuration.

nginx ConfigMap

# tee 02-nginx-configmap.yml | kubectl create -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx

TCP/UDP ConfigMaps

# tee 03-tcp-services-configmap.yml | kubectl create -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx

# tee 04-udp-services-configmap.yml | kubectl create -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
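
Both of these stay empty here but, for completeness, the data in them maps an external port to a namespace/service:port target. A purely illustrative sketch (the status Service is just an example target):

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # illustrative only: proxy TCP port 9000 through to the status Service
  "9000": "my-namespace/status:8080"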

RBAC

There are several concomitant sections here. We create a ServiceAccount to hold the rights, plus Role and ClusterRole (and therefore RoleBinding and ClusterRoleBinding) definitions for the various kinds of manipulation we need to do.

In addition to Craig's example, we need to be able to manipulate IngressClasses.

There is some commentary around the resourceNames (being ingress-controller-leader-nginx) for which I have no other information. Magic, er, numbers. ingress-controller-leader appears (at some point) as a ConfigMap though created by whom is a mystery.
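
Once the controller has been running for a while you can go looking for it yourself, not that it tells you much:

# kubectl -n ingress-nginx get configmaps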

# tee 05-rbac.yml | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
      - update
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
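
As a sanity check (run as the cluster admin), kubectl can impersonate the ServiceAccount and ask whether a particular verb is allowed:

# kubectl auth can-i list ingresses --all-namespaces \
    --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
yes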

DaemonSet

This didn't work out of the box and I had to rummage around a bit to get a working (or workable) image, here using k8s.gcr.io/ingress-nginx/controller:v1.0.5 rather than the one from quay.io.

I think that is what required the extra argument --controller-class=example.com/ingress-nginx1; that controller class will be used again in a moment.

You can see the ConfigMaps being referenced, although I don't see any actual usage (kubectl -n ingress-nginx describe configmap/X).

# tee 06-ds.yml | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          #image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          image: k8s.gcr.io/ingress-nginx/controller:v1.0.5
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --controller-class=example.com/ingress-nginx1
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            runAsNonRoot: false
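
Once the DaemonSet is applied we should see one controller Pod per (worker) node; names and ages will obviously differ on your cluster:

# kubectl -n ingress-nginx get ds,pods -o wide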

Service

We can create a NodePort Service to front-up the DaemonSet (much like we would have done with a Deployment).

Notice that we're grabbing both ports 80 and 443 on all nodes for our Service and redirecting them to the ingress-nginx app, our Ingress controller.

# tee 07-service.yml | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app: ingress-nginx

IngressClass

We need to create an IngressClass to satisfy the latest code's demands; as far as I can tell it ties the --controller-class name we gave the DaemonSet to the ingress-nginx-one IngressClass name.

We'll use the IngressClass name when we add new Ingresses later.

# tee 08-ingressclass.yml | kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-one
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/ingress-nginx1

Testing

We should be able to test the basic Ingress controller Service at this point:

# kubectl -n ingress-nginx get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.106.29.189   <none>        80/TCP                       10d
ingress-nginx          NodePort    10.101.162.89   <none>        80:32568/TCP,443:30713/TCP   10d

# curl -s 10.101.162.89
default backend - 404

Good.
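
We can also confirm the IngressClass was registered; the output will be along these lines:

# kubectl get ingressclass
NAME                CONTROLLER                   PARAMETERS   AGE
ingress-nginx-one   example.com/ingress-nginx1   <none>       10d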

The Ingress Service

You can use Craig's txn2/ok service or our Status Server, which also tests that we can pull from our private docker registry. Or both!

Deployment

Here, we're back to being a regular Kubernetes user, no special privileges.

Notice that we have to pass imagePullSecrets so that containerd is authorised to pull from our private docker registry.

Also note that just because we pass reg-cred-secret here doesn't mean that that docker-registry Secret exists. Particularly as, when we were creating the private docker registry, we were doing everything as admin, not our local, less privileged, User account, and in a different NameSpace.

Let's assume our User can use the NameSpace my-namespace.
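
If reg-cred-secret doesn't exist in my-namespace yet, something along these lines creates it (the registry address and credentials are placeholders for whatever your registry expects):

$ kubectl -n my-namespace create secret docker-registry reg-cred-secret \
    --docker-server=docker-registry:5000 \
    --docker-username=<user> \
    --docker-password=<password>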

$ tee 10-status-deployment.yml | kubectl -n my-namespace create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: status
  labels:
    app: status
    system: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: status
  template:
    metadata:
      labels:
        app: status
        system: example
    spec:
      containers:
        - name: status
          image: docker-registry:5000/example.com/app/status
          imagePullPolicy: Always
          env:
            - name: IP
              value: "0.0.0.0"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
          ports:
            - name: status-port
              containerPort: 8080
      imagePullSecrets:
      - name: reg-cred-secret

Service

$ tee 11-status-service.yml | kubectl -n my-namespace create -f -
apiVersion: v1
kind: Service
metadata:
  name: status
  labels:
    app: status
    system: test
spec:
  selector:
    app: status
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  type: NodePort

Test

We can quickly test the service is working at all:

$ kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
...
status         NodePort       10.97.111.97    <none>         8080:31214/TCP   21h

$ curl -s 10.97.111.97:8080
{"call-uuid":"2af3dd00-bc82-4cde-8821-a3cbd4b6eeb4","client-ip":"10.254.42.128","count":1,"node-name":"k8s-w2","pod-ip":"0.0.0.0","pod-name":"status-5fb664cbf6-jk6d5","pod-namespace":"my-namespace","pod-port":"8080","service-account":"default","svc-uuid":"97746796-205a-4f6f-8435-81c18205cfc5","time":"2022-04-03T15:41:49.489708544Z"}

Ingress

The exciting bit!

$ tee 12-status-ingress.yml | kubectl -n my-namespace create -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: status
  labels:
    app: status
    system: test
spec:
  ingressClassName: ingress-nginx-one
  rules:
  - host: k8s-ingress.example.com
    http:
      paths:
      - backend:
          service:
            name: status
            port:
              number: 8080
        path: /status
        pathType: Prefix

where we specify the ingressClassName, ingress-nginx-one, and two important attributes:

  • the host expected in the HTTP Host: header

    This implies we must finagle the DNS to "make it so."

  • the path prefix for the URL

    In other words, a request for http://k8s-ingress.example.com/status will be routed to our status microservice listening on port 8080.

Obviously, we should be using some sort of competent external-to-Kubernetes load balancing but here we are.
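
For quick testing, short of real DNS, an /etc/hosts entry on the client machine pointing the hostname at one of the worker nodes will do (the address is one of my workers, as seen in the describe output below; hosts files don't load balance, so pick one):

172.18.0.189 k8s-ingress.example.com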

We can get some configuration output with the describe verb:

$ kubectl describe ingress/status
Name:             status
Labels:           app=status
                  system=test
Namespace:        my-namespace
Address:          172.18.0.189,172.18.0.244
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" is forbidden: User "me" cannot get resource "endpoints" in API group "" in the namespace "kube-system">)
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  k8s-ingress.example.com
                           /status   status:8080 (10.254.46.11:8080)
...

The IP address, 10.254.46.11, is that of the Pod running the microservice.
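
We can cross-check that against the Pod itself:

$ kubectl -n my-namespace get pods -o wide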

Checks

We can check where we are:

$ kubectl get ingress
NAME     CLASS               HOSTS                     ADDRESS                     PORTS   AGE
status   ingress-nginx-one   k8s-ingress.example.com   172.18.0.189,172.18.0.244   80      22h

Where the two IP addresses are those of my worker nodes.

Assuming we have finagled the DNS so that k8s-ingress.example.com points at our two worker nodes, we should be able to:

$ curl k8s-ingress.example.com/status
Hello World from /status

Eh?

Hmm, it turns out that Ingress is not doing any rewriting (can it?), so our request for /status is being passed verbatim to our microservice. It so happens that, out of interest, we covered the /:path GET variant and it responds with Hello World from $path. So fair enough.

The correct solution is either to edit the YAML, replacing path: /status with path: /, and re-apply it, or to kubectl edit ingress/status.

If you were also running Craig's txn2/ok Service then there'll be a clash, with both Ingresses claiming /.
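
As for the "(can it?)": the nginx Ingress controller can rewrite paths using an annotation and a capture group in the path. A minimal sketch, not tried here, which would deliver /status/foo to the service as /foo:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: status
  labels:
    app: status
    system: test
  annotations:
    # the second capture group replaces the request path
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: ingress-nginx-one
  rules:
  - host: k8s-ingress.example.com
    http:
      paths:
      - backend:
          service:
            name: status
            port:
              number: 8080
        path: /status(/|$)(.*)
        pathType: ImplementationSpecific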

Testing

$ curl k8s-ingress.example.com/
{"call-uuid":"eb0f4205-dfee-477d-9959-12cccb6812ad","client-ip":"172.18.0.244","count":2,"node-name":"k8s-w2","pod-ip":"0.0.0.0","pod-name":"status-5fb664cbf6-jk6d5","pod-namespace":"my-namespace","pod-port":"8080","service-account":"default","svc-uuid":"97746796-205a-4f6f-8435-81c18205cfc5","time":"2022-04-03T17:31:56.83909024Z"}

Looks OK.

We can test our other microservice routes:

$ curl k8s-ingress.example.com/status
Hello World from /status

$ curl k8s-ingress.example.com/secret/sauce
/secret/sauce Not Found
