
Master Kubernetes: Architecture, Commands, and Real-World Applications

Kubernetes Basics for DevOps & DevSecOps

1. Introduction to Kubernetes

In the early days of deploying applications, we used to run them directly on physical servers. This approach was inflexible and inefficient — if one application needed more resources, it could starve others. Virtual machines (VMs) improved this by isolating workloads, but they were heavy and took time to provision.

Then came containers. Containers are lightweight, portable, and can run anywhere — your laptop, a server in the cloud, or even a Raspberry Pi. But managing containers at scale quickly becomes a nightmare. Imagine you have 500 containers — how do you start them, stop them, update them, and ensure they recover from failures automatically?

Enter Kubernetes — an open-source container orchestration platform that automates deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).

2. Kubernetes Architecture

At a high level, a Kubernetes cluster is split into two parts: a Control Plane, which makes decisions and manages cluster state, and a set of worker nodes, which actually run your containerized applications. Together they give developers and operators a robust, scalable infrastructure for modern applications.

Control Plane

The Control Plane is the brain of the Kubernetes cluster. It manages the cluster's state and makes global decisions about the cluster (e.g., scheduling). The main components of the Control Plane include:

  • API Server: Serves the Kubernetes API and is the entry point for commands and queries.
  • etcd: A consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data.
  • Scheduler: Watches for newly created Pods and assigns them to nodes.
  • Controller Manager: Runs controller processes that regulate the state of the system, working to make the actual state match the desired state.

Node Components

Nodes are the machines (physical or virtual) that run your application workloads. Each node contains the necessary components to run Pods and is managed by the Control Plane. The main components of a node include:

  • kubelet: Ensures that containers are running in a Pod.
  • kube-proxy: Maintains network rules for Pod communication.
  • Container Runtime: Responsible for running containers (e.g., Docker, containerd).

Workloads

Workloads are the applications or services running on your cluster. Kubernetes provides several abstractions to manage these workloads:

  • Pod: The smallest and simplest Kubernetes object; a Pod represents a set of running containers on the cluster.
  • Deployment: Provides declarative updates to Pods and ReplicaSets.
  • ReplicaSet: Ensures that a specified number of pod replicas are running at any given time.
  • StatefulSet: Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
  • DaemonSet: Ensures that all (or some) Nodes run a copy of a Pod.
  • Job: Creates one or more Pods and ensures that a specified number of them successfully terminate.
  • CronJob: Runs Jobs on a recurring schedule (a minimal example follows this list).
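
To make this concrete, here is a minimal CronJob manifest that runs a short Job every five minutes; the name, image, and schedule are purely illustrative:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-hello              # illustrative name
spec:
  schedule: "*/5 * * * *"          # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36
            command: ["echo", "Hello from a CronJob"]
          restartPolicy: OnFailure   # Job Pods must use OnFailure or Never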

Services

Services are an abstraction that defines a logical set of Pods and a policy by which to access them. They enable communication between various components within and outside the cluster. Types of Services include:

  • ClusterIP: Exposes the Service on a cluster-internal IP.
  • NodePort: Exposes the Service on each Node’s IP at a static port (a sketch follows this list).
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
  • ExternalName: Maps the Service to the contents of the externalName field (e.g., foo.bar.example.com).
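
As a quick sketch, a NodePort Service could look like this (the name, labels, and port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx                     # routes traffic to Pods labeled app=nginx
  ports:
    - port: 80                     # cluster-internal port
      targetPort: 80               # port the container listens on
      nodePort: 30080              # static port opened on every node (30000-32767 range)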

Networking

Kubernetes networking allows Pods to communicate with each other and with the outside world. Key concepts include:

  • Pod-to-Pod Communication: All Pods can communicate with each other without NAT.
  • Service Discovery: Services can be discovered within the cluster using DNS or environment variables.
  • Network Policies: Control which Pods and Services are allowed to communicate with each other (an example follows this list).
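
For example, a NetworkPolicy that only allows Pods labeled role=frontend to reach the nginx Pods might look like this sketch (all names and labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-nginx    # illustrative name
spec:
  podSelector:
    matchLabels:
      app: nginx                   # the policy applies to nginx Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend       # only frontend Pods may connect
      ports:
        - protocol: TCP
          port: 80

Note that NetworkPolicies only take effect if your cluster's network plugin supports them.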

Storage

Kubernetes provides abstractions for managing storage resources:

  • Volumes: A directory containing data, accessible to the containers in a Pod.
  • Persistent Volumes (PV): A piece of storage in the cluster that has been provisioned by an administrator.
  • Persistent Volume Claims (PVC): A request for storage by a user, which Kubernetes binds to a matching PV (a sketch follows this list).
  • Storage Classes: Provides a way to describe the "classes" of storage available in a cluster.
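
Putting PVCs and Storage Classes together, a minimal claim could look like this sketch (the name is illustrative, and it assumes a StorageClass called standard exists in your cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data-claim         # illustrative name
spec:
  accessModes:
    - ReadWriteOnce                # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 1Gi                 # amount of storage requested
  storageClassName: standard       # assumes a StorageClass named "standard" exists

A Pod can then mount this claim as a volume, and Kubernetes binds it to a matching Persistent Volume.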

3. Kubernetes Landscape

The Kubernetes ecosystem includes tools for logging, monitoring, CI/CD, and security. Some popular components are:

  • Helm: Package manager for Kubernetes applications.
  • Prometheus & Grafana: Monitoring and visualization.
  • Istio/Linkerd: Service mesh solutions for traffic management and security.
  • ArgoCD: GitOps continuous delivery tool.

4. Kubernetes Configuration

Think of Kubernetes YAML files as blueprints for a city you’re building.

The Service is like the city’s address system — it tells visitors where to go.

The Deployment is like the city’s construction plan — it says what to build and how many buildings you need.

We’ll look at these two “blueprints” one by one.

Step 1: The Service — Your App’s “Reception Desk”

When you define a service in Kubernetes, you’re basically setting up a reception desk that greets requests and forwards them to the right place.

Here’s the file:

apiVersion: v1
kind: Service
metadata:
  name: example-nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  type: ClusterIP

What’s going on here?

  • apiVersion: v1 → We’re using the core v1 Kubernetes API, which covers basic objects like Services.
  • kind: Service → Tells Kubernetes we’re creating a Service.
  • metadata → name → We name it example-nginx-service.
  • selector: app: nginx → This service will only talk to pods with the label app=nginx.
  • ports →
    • port: 8080 → Visitors come here.
    • targetPort: 80 → Requests are forwarded here inside the pod.
  • type: ClusterIP → The default type; service is accessible only inside the cluster.

💡 Why define the service first?
Because when Kubernetes starts a container, it creates environment variables for each service that’s already running. If your service exists first, your pods will know where to find it from the start.

Step 2: The Deployment — The Building Plan

If the Service is the receptionist, the Deployment is the construction crew that builds and maintains the actual app instances (pods).

Here’s the deployment YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Breaking it down:

  • apiVersion: apps/v1 → For deployments, we use the apps/v1 API.
  • kind: Deployment → We’re defining a Deployment.
  • replicas: 3 → We want 3 identical pods running.
  • selector → matchLabels: app: nginx → Matches the Service’s selector so they connect properly.
  • template → This is the “blueprint” for each pod:
    • metadata: labels: app: nginx → The pods get the nginx label.
    • spec → containers →
      • Runs the nginx:latest image.
      • Exposes containerPort: 80 (which matches the service’s targetPort).

How Kubernetes Keeps Things in Check

Once you apply these YAML files, Kubernetes treats them like a promise:

“I promise there will always be 3 nginx pods, connected to this service, running and ready.”

Behind the scenes, Kubernetes uses etcd (its “memory bank”) to store the current state and constantly compares it to your desired state.

If you said 3 pods but only 2 are running, Kubernetes will create another one.

If one crashes, Kubernetes will replace it automatically.

5. kubectl – Kubernetes Command-Line Tool

You’ve written your YAML blueprints, but how do you actually hand them to the cluster? That’s where kubectl comes in. kubectl is a command-line tool that talks to the Kubernetes API Server (part of the control plane) and tells it what to do. Think of it as a remote control for your Kubernetes cluster.

1. kubectl apply

Purpose: Takes a YAML file and tells Kubernetes to create or update resources based on it.

kubectl apply -f example-deployment.yaml

This tells Kubernetes: "Here’s my configuration, please make it happen."

2. kubectl get

Purpose: Checks the status of resources (Pods, Deployments, Services, etc.).

kubectl get pods -n example-namespace

Example output:

NAME          READY   STATUS    RESTARTS   AGE
example-pod   1/1     Running   0          2m18s

Other useful commands:

kubectl get services
kubectl get deployments

3. kubectl describe

Purpose: Shows detailed information about a specific resource — useful for troubleshooting.

kubectl describe pod example-pod -n example-namespace

Details include:

  • Node it’s running on
  • IP address
  • Events (helpful for finding issues)

4. kubectl logs

Purpose: View application logs from inside a Pod.

kubectl logs example-pod -n example-namespace

If your application crashes, logs help you understand why.

5. kubectl exec

Purpose: Opens a shell inside a container for debugging.

kubectl exec -it example-pod -n example-namespace -- sh

Now you can run commands inside the container, like checking files or testing network connectivity.

6. kubectl port-forward

Purpose: Forward traffic from your local machine to a Pod in the cluster — useful for testing apps without exposing them publicly.

kubectl port-forward service/example-service 8090:8080

Now you can access your service locally at http://localhost:8090.


Summary

  • apply → Create/update from YAML
  • get → Check resource status
  • describe → Detailed info
  • logs → View app logs
  • exec → Get inside a container
  • port-forward → Access app locally

6. Kubernetes Security

Kubernetes is wildly popular, especially with startups and tech-forward companies. Introducing any new tool into your system brings new security challenges, and Kubernetes is no exception. Think about it: inside Kubernetes, many small Pods (each wrapping one or more containers) chat with each other over a network. By default, they can talk freely, which is a tempting door for attackers if left unchecked.

As a DevSecOps engineer, your job is to lock down those doors, ensuring that pods only talk when they should and that the entire system stays safe from prying eyes.

Kubernetes Hardening: Building Your Fort

Container hardening is your first line of defense. It’s all about scanning containers for vulnerabilities and patching them before attackers find a way in. Kubernetes hardening means following best security practices across your cluster to keep it locked tight.

Here’s how you can start:

Lock Down Your Pods

  • Don’t run containers with root privileges unless absolutely necessary.
  • Use immutable (read-only) file systems to prevent unwanted changes (see the sketch after this list).
  • Regularly scan container images for vulnerabilities or misconfigurations.
  • Avoid privileged containers that have too much access.
  • Use Pod Security Standards (PSS) and Pod Security Admission (PSA) to enforce security policies.
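
Here is a minimal sketch of a Pod spec that applies several of these points; the name and image are placeholders rather than a real application:

apiVersion: v1
kind: Pod
metadata:
  name: example-hardened-pod                # illustrative name
spec:
  securityContext:
    runAsNonRoot: true                      # refuse to start containers as root
    runAsUser: 1000
  containers:
  - name: app
    image: registry.example.com/app:1.0.0   # placeholder image; pin versions instead of using latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true          # immutable file system
      capabilities:
        drop: ["ALL"]                       # drop all Linux capabilities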

Harden Your Network

Communication is vital, but it’s also a risk. Secure your Kubernetes network by:

  • Restricting access to the control plane using firewalls and role-based access control (RBAC).
  • Encrypting communication between control plane components with TLS.
  • Creating explicit deny policies: deny first, allow later (see the sketch after this list).
  • Storing sensitive data in Kubernetes Secrets (with encryption at rest enabled) instead of keeping it in plain text in manifests or ConfigMaps.
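
A common starting point for “deny first, allow later” is a default-deny NetworkPolicy like this sketch (the namespace name is illustrative); you then add narrower policies that allow only the traffic you expect:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all           # illustrative name
  namespace: example-namespace     # applies only to this namespace
spec:
  podSelector: {}                  # an empty selector matches every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress                       # no ingress or egress rules are listed, so all traffic is blocked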

Authentication and Authorization — Your Gatekeepers

No security plan is complete without controlling who gets in and what they can do:

  • Disable anonymous access — no freeloaders allowed!
  • Use strong authentication methods for users.
  • Define RBAC policies for every team and service account, giving only the minimum permissions needed.

Stay Alert with Logging and Monitoring

You can’t protect what you don’t see. Keep an eye on what’s happening inside your cluster:

  • Enable audit logging to track changes and access.
  • Use log monitoring and alerting systems to catch suspicious activity early.

Security Is a Continuous Journey

It’s not enough to just secure your Kubernetes cluster once — you have to keep it secure:

  • Patch and update Kubernetes components quickly when fixes are available.
  • Run vulnerability scans and penetration tests regularly.
  • Remove old or unused components that could become attack vectors.

Kubernetes Security in Action: Key Practices to Know

Let’s highlight three powerful tools in your Kubernetes security toolkit:

1. Role-Based Access Control (RBAC)

RBAC restricts what users and services can do inside your cluster. You define roles with specific permissions (like creating or deleting resources), then assign those roles to users or service accounts. This keeps your resources safe from unauthorized access.
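
As a sketch, a Role and RoleBinding that grant a service account read-only access to Pods in one namespace could look like this (all names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                 # illustrative name
  namespace: example-namespace
rules:
- apiGroups: [""]                  # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods                  # illustrative name
  namespace: example-namespace
subjects:
- kind: ServiceAccount
  name: example-app                # illustrative service account
  namespace: example-namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io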

2. Secrets Management

Kubernetes Secrets store sensitive info like passwords or tokens. Keep in mind that Secret values are only base64-encoded, not encrypted, by default. For better protection, enable encryption at rest and control access tightly using RBAC. This helps prevent leaks of critical data.
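
Here is a minimal Secret sketch; the name and values are made up for illustration only:

apiVersion: v1
kind: Secret
metadata:
  name: example-db-credentials     # illustrative name
type: Opaque
data:
  username: YWRtaW4=               # base64 for "admin" (encoding, not encryption)
  password: UzNjcjN0UGFzcw==       # base64 for "S3cr3tPass" (illustrative value)

Pods can consume this Secret as environment variables or a mounted volume, and RBAC controls who is allowed to read it.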

3. Pod Security Admission (PSA) and Pod Security Standards (PSS)

These enforce security policies for pods at different levels:

  • Privileged: Nearly unrestricted access (risky, but sometimes necessary).
  • Baseline: Default minimal restrictions to prevent common privilege escalations.
  • Restricted: Strictest settings following best container security practices.

PSA works behind the scenes to enforce these rules whenever pods are created or updated.

Note: Pod Security Policies (PSPs) used to handle this, but they were deprecated in Kubernetes v1.21 and removed in v1.25.
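
PSA is typically switched on by labeling a namespace with the level you want. A minimal sketch (the namespace name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace          # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject Pods that violate the Restricted standard
    pod-security.kubernetes.io/warn: restricted      # warn clients about violations
    pod-security.kubernetes.io/audit: restricted     # record violations in the audit log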

Wrapping It Up

Kubernetes security is complex but essential. As a DevSecOps engineer, mastering these hardening practices and tools is your way to keep your cluster resilient against attacks — and keep your applications running safe and sound.

Embrace automated security scans, enforce strict access controls, monitor continuously, and stay ahead of vulnerabilities. That’s how you turn Kubernetes from a powerful platform into a secure fortress.
