A Deep Dive into Kubernetes Admission Control
In the complex, distributed world of container orchestration, securing and governing workloads is a paramount challenge. As the central nervous system of your cluster, the Kubernetes API server is the gateway for all changes. This makes Kubernetes Admission Control one of the most critical components for enforcing security, compliance, and best practices. It's the ultimate gatekeeper, deciding what is and isn't allowed to run in your cluster. This deep dive will explore every facet of admission control, from the fundamental concepts and built-in controllers to the dynamic power of webhooks and modern policy engines.
What is Kubernetes Admission Control?
At its core, Kubernetes Admission Control is a process, enforced by a series of plugins in the kube-apiserver, that intercepts requests *after* they have been authenticated and authorized. Think of it this way:
- Authentication (AuthN): Asks "Who are you?" (e.g., "You are user 'dev-jane'").
- Authorization (AuthZ): Asks "What are you allowed to do?" (e.g., "User 'dev-jane' is allowed to create 'Pods' in the 'staging' namespace").
- Admission Control: Asks "Is what you're trying to do *allowed* by our policies?" (e.g., "The 'Pod' dev-jane is creating must not use the 'latest' image tag and must have a 'cost-center' label").
Admission control is the final checkpoint before an object is persisted to etcd, making it the ideal place to validate object configurations, modify them, or reject them outright.
The API Request Lifecycle: Beyond AuthN & AuthZ
When you run kubectl apply -f my-pod.yaml, your request goes on a journey. Admission control is a crucial part of this flow.
- Authentication: The API server verifies your identity (e.g., via client certificate or token).
- Authorization: The API server checks if you have permission (e.g., via RBAC) to perform the requested action (e.g., CREATE a Pod).
- Mutating Admission: The request is passed to a chain of mutating admission controllers. These can *change* the object. For example, a mutating controller might inject a default resource limit or add a sidecar container for a service mesh.
- Object Schema Validation: The API server validates that the (potentially modified) object conforms to the official Kubernetes API schema (e.g., "Does this Pod YAML have all the required fields?").
- Validating Admission: The request is passed to a chain of validating admission controllers. These controllers *cannot* change the object; they can only approve or reject the request. This is where you enforce policies like "disallow privileged containers."
- Persistence: If the request passes all checks, the object is written to etcd, and the operation is complete.
This separation of mutating and validating steps is critical. Mutations happen first, ensuring that validators see the final, modified version of the object before making a decision.
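The ordering matters enough that it is worth making concrete. Here is an illustrative sketch in Python (not real API server code; the controller functions and label names are invented for the example) showing why validators always see the mutated object:

```python
# Toy model of the admission pipeline: the mutating chain runs first,
# so validators always evaluate the final, modified object.

def mutate_default_owner(obj):
    """A toy mutating controller: default a missing 'owner' label."""
    obj.setdefault("metadata", {}).setdefault("labels", {}).setdefault("owner", "unknown")
    return obj

def validate_owner_present(obj):
    """A toy validating controller: allow only if an 'owner' label exists."""
    if "owner" in obj.get("metadata", {}).get("labels", {}):
        return True, ""
    return False, "missing required label 'owner'"

def admit(obj, mutators, validators):
    for mutate in mutators:          # 1. mutating admission
        obj = mutate(obj)
    for validate in validators:      # 2. validating admission (object is final)
        allowed, msg = validate(obj)
        if not allowed:
            return False, msg
    return True, ""                  # 3. object would now be persisted to etcd

allowed, _ = admit({"metadata": {}}, [mutate_default_owner], [validate_owner_present])
print(allowed)  # True: the mutator satisfied the validator before it ran
```

With the mutator removed, the same bare object would be rejected, which is exactly the behavior the mutate-then-validate ordering is designed to make predictable.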
Why Do We Need Admission Control?
Admission control moves your cluster from a permissive to a prescriptive environment. It's the foundation for "policy-as-code" in Kubernetes and provides tangible benefits:
- Security Enforcement: This is the primary use case. You can prevent non-compliant workloads from ever running.
  - Block privileged containers.
  - Prevent pods from running as the root user.
  - Enforce network policies by default.
  - Disallow mounting of sensitive host paths.
- Governance & Compliance: Ensure all resources adhere to internal or external (e.g., PCI, HIPAA) standards.
  - Mandate specific labels (e.g., app, owner, cost-center) on all resources for cost tracking and management.
  - Prevent public LoadBalancer services from being created in non-production namespaces.
- Configuration Management & Best Practices: Automate configuration and enforce sensible defaults.
  - Automatically inject resource limits and requests to prevent "noisy neighbor" problems.
  - Prevent developers from using the :latest image tag, which hurts reproducibility.
  - Inject common environment variables or volume mounts (like security certificates) into all pods.
The Two Types of Admission Controllers
As seen in the API lifecycle, admission controllers are split into two main categories: mutating and validating. They can be either built-in to Kubernetes or implemented dynamically via webhooks.
1. Mutating Admission Controllers
A mutating controller intercepts a request and can modify the object definition. Its job is to apply defaults or automatically augment configurations. For example, the built-in DefaultStorageClass controller watches for PersistentVolumeClaim (PVC) objects created without a storageClassName and automatically sets it to the cluster's default StorageClass.
Common use cases:
- Injecting Sidecars: This is famously used by service meshes like Istio and Linkerd. A mutating webhook sees a new Pod creation request, and if it has a specific annotation (e.g., istio-injection: "enabled"), it patches the Pod's YAML to include the istio-proxy sidecar container before it's saved to etcd.
- Setting Default Labels: Automatically add a namespace or app label to all Pods based on their metadata.
- Applying Default Security Contexts: Set runAsNonRoot: true for all containers that don't specify a security context.
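A mutating webhook expresses its change as an RFC 6902 JSON Patch, which travels base64-encoded in the webhook's response. The sketch below (illustrative label key/value, and a deliberately minimal patch applier that handles only "add" operations) shows the round trip:

```python
import base64
import json

# A Pod (as a dict) before mutation; the names here are illustrative.
pod = {"metadata": {"name": "demo", "labels": {}},
       "spec": {"containers": [{"name": "app", "image": "nginx:1.25"}]}}

# RFC 6902 patch a mutating webhook might return to set a default label.
patch = [{"op": "add", "path": "/metadata/labels/owner", "value": "platform-team"}]

# In the AdmissionReview response, the patch is sent base64-encoded,
# alongside 'patchType': 'JSONPatch'.
encoded_patch = base64.b64encode(json.dumps(patch).encode()).decode()

def apply_add_ops(obj, ops):
    """Toy applier for 'add' operations only -- enough to show the mechanism."""
    for op in ops:
        *parents, key = op["path"].lstrip("/").split("/")
        target = obj
        for part in parents:
            target = target[part]
        target[key] = op["value"]
    return obj

mutated = apply_add_ops(pod, json.loads(base64.b64decode(encoded_patch)))
print(mutated["metadata"]["labels"])  # {'owner': 'platform-team'}
```

In a real cluster the API server applies the patch itself; the webhook only returns it.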
2. Validating Admission Controllers
A validating controller inspects a request and makes a binary decision: allow or deny. It cannot modify the object. Its response is simply a "yes" or "no," with an accompanying message if the request is denied. This is your primary tool for enforcing strict security and governance policies.
Common use cases:
- Enforcing Label Requirements: Reject any Deployment or Service that does not include an owner label.
- Blocking Insecure Images: Integrate with an image scanner (like Trivy or Clair) and reject any Pod that tries to use an image with high-severity vulnerabilities.
- Restricting Ingress Hostnames: Ensure all Ingress objects use a valid, whitelisted domain (e.g., *.my-company.com) and not an arbitrary one.
Built-in Admission Controllers vs. Dynamic Admission Control
Kubernetes provides two ways to implement admission control logic: using the pre-compiled plugins or building your own custom logic that the API server calls out to.
The Classics: Built-in Controllers
The kube-apiserver binary includes a set of admission controllers that are compiled directly into it. A cluster administrator can choose which ones to enable by passing a flag.
You can see the recommended default list in the official Kubernetes documentation. Some of the most important ones include:
- NamespaceLifecycle: Prevents resources from being created in a namespace that is being terminated, and prevents deletion of the default, kube-system, and kube-public namespaces.
- LimitRanger: Enforces LimitRange objects in a namespace, applying default resource requests and limits to Pods.
- ResourceQuota: Enforces ResourceQuota objects, rejecting any new resource that would exceed the namespace's quota (e.g., "no more than 10 CPUs" or "no more than 5 Services").
- PodSecurity: The built-in controller (stable as of 1.25) that enforces Pod Security Standards (more on this later).
These controllers are enabled via an API server flag:
# Example of enabling built-in controllers on kube-apiserver
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ResourceQuota,PodSecurity
While powerful, built-in controllers are limited. You can't write your own custom logic; you can only use what's provided by the Kubernetes team. This is where dynamic admission control comes in.
The Power of Webhooks: Dynamic Kubernetes Admission Control
Dynamic Admission Control allows you to write your own admission logic, host it as a simple HTTPS web service (a "webhook"), and tell the API server to call your service when a relevant API request comes in. This decouples your custom policies from the Kubernetes API server lifecycle, allowing you to build, deploy, and update policies without reconfiguring or restarting the control plane.
This is managed by two special resources:
- MutatingWebhookConfiguration
- ValidatingWebhookConfiguration
When you create one of these, you tell the API server three main things:
- What to watch: Which objects (e.g., pods, services), operations (e.g., CREATE, UPDATE), and namespaces should trigger this webhook?
- Where to call: What is the Service URL for your webhook?
- How to connect: What is the CA bundle (TLS certificate) to trust when connecting to your webhook?
When a matching request arrives, the API server sends an AdmissionReview object (as a JSON payload) to your webhook. Your webhook inspects this object, performs its logic, and sends an AdmissionReview response back.
- A mutating webhook's response can include a JSON Patch that instructs the API server on how to modify the object.
- A validating webhook's response simply includes "allowed": true or "allowed": false, along with a status message explaining the rejection.
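Concretely, a validating webhook that denies a request sends back an envelope like this (the uid must be copied verbatim from the incoming request.uid; the message text here is illustrative):

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<copied from request.uid>",
    "allowed": false,
    "status": {
      "message": "privileged containers are not allowed"
    }
  }
}
```

A mutating webhook's response instead sets "allowed": true and adds "patchType": "JSONPatch" plus a base64-encoded "patch" field containing the changes.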
Building and Implementing a Dynamic Admission Webhook
While policy engines are now the preferred method, understanding how a webhook is built is crucial. Let's look at the high-level steps to create a simple validating webhook that blocks any Pod using an image with the :latest tag.
Step 1: Write the Webhook Server
You can write this in any language. It's just a web server that accepts POST requests at a specific path (e.g., /validate), parses the incoming AdmissionReview JSON, and returns a new AdmissionReview JSON.
Here's a conceptual example in Python using Flask:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/validate', methods=['POST'])
def validate_pod():
    review = request.get_json()
    pod = review['request']['object']

    # Default to allowed. The response must echo the request's uid, or the
    # API server will reject it.
    response = {"uid": review['request']['uid'], "allowed": True}

    for container in pod['spec']['containers']:
        image = container.get('image', '')
        if ':' not in image or image.endswith(':latest'):
            # Deny the request
            response["allowed"] = False
            response["status"] = {
                "message": f"Image '{image}' uses 'latest' tag or no tag, which is forbidden."
            }
            break  # Found a violation, no need to check other containers

    # Wrap the response in an AdmissionReview object
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response
    })

if __name__ == '__main__':
    # The API server only calls webhooks over HTTPS, so serve with TLS.
    app.run(host='0.0.0.0', port=443, ssl_context=('cert.pem', 'key.pem'))
Step 2: Containerize and Deploy the Webhook
You would package this server into a container image, push it to a registry, and deploy it to your cluster using a Deployment. You also need to create a Service to give it a stable DNS name (e.g., latest-tag-blocker.default.svc) that the API server can call.
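The deployment step might look like the following sketch (the image name and replica count are illustrative; the Service name matches the one referenced in Step 4):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: latest-tag-blocker
  namespace: default
spec:
  replicas: 2                      # run more than one replica: webhooks sit on the API request path
  selector:
    matchLabels:
      app: latest-tag-blocker
  template:
    metadata:
      labels:
        app: latest-tag-blocker
    spec:
      containers:
        - name: webhook
          image: registry.example.com/latest-tag-blocker:1.0.0   # hypothetical image
          ports:
            - containerPort: 443
          volumeMounts:
            - name: tls
              mountPath: /etc/webhook/tls
              readOnly: true
      volumes:
        - name: tls
          secret:
            secretName: latest-tag-blocker-tls   # serving cert, e.g. issued by cert-manager
---
apiVersion: v1
kind: Service
metadata:
  name: latest-tag-blocker
  namespace: default
spec:
  selector:
    app: latest-tag-blocker
  ports:
    - port: 443
      targetPort: 443
```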
Step 3: Handle TLS Certificates
This is often the trickiest part. The API server *must* communicate with your webhook over HTTPS, and it *must* trust the certificate your webhook server presents. For production, you can't use self-signed certs easily. The standard solution is to use cert-manager to automatically provision a valid TLS certificate (e.g., from Let's Encrypt or an internal CA) and inject it into your webhook pod and the webhook configuration.
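A minimal cert-manager setup for this, assuming cert-manager is already installed in the cluster (the issuer and resource names are illustrative), is a self-signed Issuer plus a Certificate whose dnsNames match the webhook Service:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: webhook-selfsigned
  namespace: default
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: latest-tag-blocker-cert
  namespace: default
spec:
  secretName: latest-tag-blocker-tls        # mounted by the webhook Deployment
  dnsNames:
    - latest-tag-blocker.default.svc        # must match the Service DNS name
  issuerRef:
    name: webhook-selfsigned
```

With the annotation cert-manager.io/inject-ca-from: default/latest-tag-blocker-cert on the webhook configuration, cert-manager's CA injector will also populate the caBundle field for you.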
Step 4: Register the Webhook with Kubernetes
Finally, you create a ValidatingWebhookConfiguration resource to tell the API server about your new webhook.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: "block-latest-tag-webhook"
webhooks:
  - name: "block-latest.my-company.com"
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
        scope: "Namespaced"
    clientConfig:
      service:
        name: "latest-tag-blocker"   # Name of your Service
        namespace: "default"         # Namespace of your Service
        path: "/validate"            # The path in your web server
      caBundle: "Cg...=="            # Base64-encoded CA certificate to trust
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 5
    failurePolicy: Fail              # Critical: 'Fail' blocks the request if the webhook is down.
The caBundle field contains the base64-encoded CA certificate that signed your webhook server's certificate. cert-manager can also automate patching this field. Once this object is created, the API server will immediately start sending Pod creation/update requests to your service.
Beyond Manual Webhooks: Policy Engines (OPA, Kyverno)
As you can see, building a webhook from scratch involves a lot of boilerplate: writing an HTTP server, handling TLS, managing deployments. For every new policy, you'd have to write and deploy new code. This isn't scalable.
Policy engines solve this problem. They provide a single, pre-built admission webhook that is highly configurable. You deploy the engine once, and then you define policies as custom resources (CRDs). The engine's webhook receives *all* requests and evaluates them against your library of policy CRDs.
Open Policy Agent (OPA) Gatekeeper
Open Policy Agent (OPA) is a general-purpose policy engine. Gatekeeper is its specific Kubernetes integration. With Gatekeeper, you define policies using a high-level declarative language called Rego.
You write policies in two parts:
- ConstraintTemplate: This defines the policy logic in Rego and the parameters it accepts.
- Constraint: This is an instance of the template, applied to specific resources (e.g., "apply this template to all Pods in the 'production' namespace").
Example: A constraint to require the owner label.
# 1. The ConstraintTemplate (The logic)
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("You must provide labels: %v", [missing])
        }
---
# 2. The Constraint (The enforcement)
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: "pods-must-have-owner"
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["owner"]   # Enforce the 'owner' label
Kyverno
Kyverno is another very popular policy engine that was built specifically for Kubernetes. Its main advantage is that policies are defined as Kubernetes resources without requiring a separate language like Rego. This makes it much more accessible for many teams.
Kyverno policies can validate, mutate, and even generate new resources. A single Policy resource can define the entire logic.
Example: The same policy (require owner label) in Kyverno.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: "require-owner-label"
spec:
  validationFailureAction: Enforce   # Block requests that fail
  rules:
    - name: "check-for-owner-label"
      match:
        any:
          - resources:
              kinds:
                - Pod
                - Deployment
                - Service
      validate:
        message: "The label 'owner' is required."
        pattern:
          metadata:
            labels:
              owner: "?*"   # Check that 'owner' label exists and has any value
As you can see, the Kyverno YAML is simpler and more intuitive for anyone already familiar with Kubernetes object manifests.
The Deprecation of Pod Security Policies (PSPs)
For a long time, Pod Security Policies (PSPs) were Kubernetes' built-in solution for pod security. PSPs were, in fact, a built-in admission controller. An administrator would define a PSP (e.g., "disallow privileged") and then use RBAC to grant Service Accounts *permission* to use that policy.
However, PSPs were notoriously difficult to use correctly. Their interaction with RBAC was confusing, and it was very easy to accidentally lock users (or even system components) out of the cluster. Because of these usability issues, PSPs were deprecated in Kubernetes 1.21 and completely removed in 1.25.
The Successor: Pod Security Admission (PSA)
PSPs were replaced by Pod Security Admission (PSA). PSA is a new, simplified, *built-in* admission controller. Instead of complex, granular PSP objects, PSA defines three simple, cluster-wide security profiles based on the Pod Security Standards:
- Privileged: Unrestricted. The "anything goes" policy.
- Baseline: Minimally restrictive, blocking known high-risk fields while maintaining compatibility with most workloads.
- Restricted: Heavily restricted, following all modern security best practices (e.g., requires runAsNonRoot, disallows hostPath volumes).
Enforcement is dramatically simpler. You just apply labels to your namespaces:
# This namespace will now enforce the 'restricted' policy.
# Any new pod that violates this policy will be rejected.
kubectl label namespace my-secure-ns pod-security.kubernetes.io/enforce=restricted

# This namespace will allow 'baseline' pods, but 'warn' about
# any pods that violate the 'restricted' policy (good for auditing).
kubectl label namespace my-staging-ns pod-security.kubernetes.io/enforce=baseline
kubectl label namespace my-staging-ns pod-security.kubernetes.io/warn=restricted
For most clusters, PSA should be your new baseline for pod security, and you should use a policy engine like Kyverno or Gatekeeper for all your other, more specific custom policies (like enforcing labels).
Best Practices for Kubernetes Admission Control
- Start with Pod Security Admission (PSA): Don't try to replicate pod security with a policy engine. Use the built-in PSA. Aim to run as many workloads as possible in baseline or restricted namespaces.
- Use Policy Engines for Custom Logic: Don't write your own webhooks unless you have a highly specialized use case. Use Kyverno or OPA Gatekeeper for enforcing labels, restricting image sources, and other custom governance.
- Monitor Your Webhooks: Admission controllers are a single point of failure. If your webhook is slow, all API requests will be slow. If it's down, all API requests may fail (if failurePolicy: Fail). Monitor your webhook's latency and error rates like any other critical production service.
- Set Failure Policies Correctly: For security policies (e.g., "block privileged pods"), always set failurePolicy: Fail. This ensures that a webhook failure doesn't "fail open" and allow a non-compliant resource. For non-critical mutations (e.g., "add default label"), you *might* consider failurePolicy: Ignore.
- Scope Webhooks Tightly: Use namespaceSelector and objectSelector in your webhook configuration to limit its scope. There's no reason for a webhook that validates Ingress objects to also receive requests for ConfigMap updates.
- Beware of Mutating System Resources: Be extremely careful when writing mutating webhooks that could affect kube-system resources. A faulty webhook can break your entire cluster. Use a namespaceSelector to explicitly exclude system namespaces.
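The last two practices can be combined in a namespaceSelector like the one below, keying on the kubernetes.io/metadata.name label that Kubernetes sets on every namespace (since v1.21):

```yaml
# Excerpt from a webhook configuration: skip system namespaces entirely.
webhooks:
  - name: "block-latest.my-company.com"
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values: ["kube-system", "kube-public", "kube-node-lease"]
```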
Frequently Asked Questions
What's the difference between Kubernetes Admission Control and RBAC?
They work together. RBAC (Authorization) controls *who* can do *what*. For example: "The 'developers' group can CREATE Deployments." Admission Control controls the *properties* of those actions. For example: "Deployments created by anyone must not have containers running as root."
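The RBAC half of that example might look like the sketch below (names are illustrative); admission control then constrains what those Deployments may contain:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-creator
  namespace: staging
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create"]
```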
What happens if an admission webhook is down?
It depends on the failurePolicy set in the ValidatingWebhookConfiguration or MutatingWebhookConfiguration. If failurePolicy: Fail (the default and recommended setting for security), the API request will be rejected with an error. If failurePolicy: Ignore, the API server will skip the webhook and allow the request, potentially bypassing your policy.
Can a mutating webhook and a validating webhook act on the same object?
Yes. This is a core part of the design. The API request *always* goes through the mutating chain first. After all mutations are applied and the object is re-validated against the schema, it is then sent to the validating chain. This ensures validators are checking the *final* state of the object.
How do I debug a failing admission webhook?
There are two places to look:
- The kube-apiserver logs: The API server will log an error message if it fails to call your webhook (e.g., "connection refused" or "TLS handshake failed").
- Your webhook server's logs: This is where your application logic lives. Check its logs to see why it decided to reject a request or why it crashed.
You can also use kubectl get validatingwebhookconfiguration ... -o yaml to check its configuration, especially the caBundle and service reference.
Conclusion
Kubernetes Admission Control is an essential, multi-layered mechanism that transforms your cluster from a simple container-runner into a secure, governed, and compliant platform. It is the implementation of "policy-as-code" for Kubernetes. By moving beyond the basics of AuthN/AuthZ, you gain fine-grained control over every object that enters your system. While the built-in controllers and the new Pod Security Admission provide a powerful baseline, the true potential is unlocked with dynamic admission webhooks, preferably managed through robust policy engines like Kyverno or OPA Gatekeeper. Mastering Kubernetes Admission Control is a non-negotiable skill for any administrator or platform engineer serious about running secure, stable, and well-managed clusters at scale. Thank you for reading huuphan.com!
