Containers & Kubernetes on Windows Server 2025 vs RedHat Enterprise Linux (RHEL)
Windows Server vs RedHat for Modern Hosting
Windows Server (including 2025) is traditionally strong for legacy workloads, the .NET Framework, Active Directory, and Hyper-V virtualization. Hyper-V is a hypervisor, meaning it is designed to run full virtual machines, and each VM boots its own complete OS. This is great for legacy workloads or for isolating entire OS instances, but it is heavy for modern microservices.
RedHat Enterprise Linux (RHEL) and its family (CentOS, Rocky, AlmaLinux) were designed much closer to the container ecosystem. Linux was the birthplace of Docker, containers, and Kubernetes. The kernel features containers depend on (cgroups, namespaces) were invented in Linux. That is why RedHat integrates better with container runtimes (containerd, CRI-O) and Kubernetes cluster nodes.
So if someone asks “which is better for containers? Windows or RedHat?” — the modern industry answer is: RedHat or any Linux distro is the natural native home of containers.
The Scenario: I want to run 20 containers
Now imagine I need to run 20 microservices. If I tried to do that with Hyper-V, I would need 20 VMs, or at least several large VMs with multiple apps each. That is heavy, slow, and expensive. But if I run containers, all 20 apps run isolated while sharing the host OS kernel. This is the power of containers.
Then I can add Kubernetes on top to automatically scale these 20 containers as traffic increases. Kubernetes does not care whether containers run on Windows or Linux, but it works far more smoothly on Linux because container technology was invented there.
Conclusion: Windows is not the ideal host for containers
So even if Windows Server 2025 supports containers, the container ecosystem is native to Linux. RedHat is built for this type of architecture. Windows + Hyper-V is more for VMs. RedHat (or Linux in general) is best for containers + Kubernetes.
Containers Are Not Little Virtual Machines
Containers are a packaging format. They do NOT have their own full OS. They share the kernel of the host. That is why containers are ultra lightweight compared to Hyper-V or VMware VMs.
Example difference:
- Hyper-V VM = boots its own Windows or Linux kernel
- Container = uses the host kernel, only brings the app + libraries
This gives containers three super powers:
- Fast to start
- Very small CPU/RAM overhead
- Scale out horizontally very quickly
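To feel how lightweight this is in practice, here is a minimal sketch using Podman on a Linux host (Docker works the same way; the Alpine image tag is just an example):

# Time how long it takes to start and run a throwaway container (assumes Podman is installed)
time podman run --rm docker.io/library/alpine:3 echo "hello from a container"

A container like this typically starts in well under a second, while a Hyper-V or VMware VM has to boot an entire operating system first.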
What Happens if You Need To Run 20 Containers?
With containers that is normal — you can easily run 20, 50, 100 containers. If the host has enough CPU + RAM, everything will keep working. If you need more, Kubernetes can automatically spin up more pods (containers) depending on the traffic.
This is called Horizontal Pod Autoscaling:
kubectl autoscale deployment api --cpu-percent=70 --min=2 --max=10
This means when CPU goes above 70%, Kubernetes will spin up more containers automatically.
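The same rule can also be written declaratively as a HorizontalPodAutoscaler manifest. This is a minimal sketch that assumes a Deployment named api already exists and that the metrics server is running:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                      # the Deployment to scale (assumed to exist)
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add Pods when average CPU rises above 70%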
Kubernetes Controls Security & Access
We also learned that access control is NOT done inside the container. It is done inside Kubernetes:
- NetworkPolicies decide which microservices can talk to which microservices
- Ingress is the only component exposed to the public internet
- IAM Roles connect a specific Pod to specific cloud permissions
If you need to restrict Postgres to only be accessed by 1 app, you write a Kubernetes Network Policy like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: only-api-can-access-postgres
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
This policy says: only Pods labeled app=api can talk to the Pod labeled app=postgres. Everything else is blocked.
The Host OS Role
The OS (Windows Server, RedHat, Ubuntu, AlmaLinux, etc.) is just the foundation. Its purpose is not to run logic or security rules for each app. The OS only needs to:
- allow outbound internet to install/pull container images
- open ports 80/443 for the Ingress to be reachable from the public internet
Everything else (routing, traffic, SSL certs, autoscaling) is managed INSIDE Kubernetes. Not in Windows. Not in Linux.
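On a RedHat-family host, that usually just means letting firewalld pass HTTP/HTTPS to the node where the Ingress is published. A minimal sketch, assuming firewalld is the host firewall:

# Allow HTTP and HTTPS through the host firewall (RHEL / Rocky / AlmaLinux with firewalld)
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload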
Modern App Hosting Architecture
- The host OS runs Kubernetes
- Kubernetes runs your apps as containers
- Only the Ingress Controller is reachable from internet
- NetworkPolicies lock down the internal communication
- Autoscaling makes the system grow under traffic
Internet
↓
Load Balancer (Cloud or Metal)
↓
Ingress Controller (Ports 80 / 443)
↓
-------------------------------
|     Kubernetes Cluster      |
|                             |
|   Frontend  →  Backend      |
|                   ↓         |
|               PostgreSQL    |
|                   ↓         |
|               RabbitMQ      |
-------------------------------
Only the Ingress is public.
Everything else is private.
NetworkPolicies decide who can talk to whom.
PostgreSQL Database and Access Control Policies
PostgreSQL should NOT be exposed openly. The database should live inside the cluster as a Deployment or StatefulSet, and you use NetworkPolicies to control which Pods can connect to it (and therefore read or write data).
For example, if I want only the backend to be able to access PostgreSQL on port 5432, Kubernetes makes this easy using the NetworkPolicy I showed above. Kubernetes becomes a “firewall inside the cluster” so your sensitive data layer is always protected.
You can even separate by direction:
- Backend → DB: allowed
- Frontend → DB: blocked
- RabbitMQ → DB: blocked
- Outside tools → DB: blocked by default
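These rules can be expressed by tightening the earlier NetworkPolicy so that only backend Pods may reach PostgreSQL, and only on port 5432. A sketch, assuming the backend Pods are labeled app=backend:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: only-backend-can-access-postgres
spec:
  podSelector:
    matchLabels:
      app: postgres              # the policy protects the PostgreSQL Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend           # assumed label on the backend Pods
    ports:
    - protocol: TCP
      port: 5432                 # PostgreSQL only; all other traffic is denied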
How To Configure or Open Specific Ports for Specific Containers
In Kubernetes you NEVER open ports at the OS level for each container. Instead:
- You expose a port inside the cluster using a Service
- You expose something publicly ONLY through Ingress
Example use case:
- Frontend → exposed externally (port 80/443) through Ingress
- Backend → internal only (ClusterIP) not exposed to internet
- Database (PostgreSQL) → internal only (ClusterIP) only backend can reach
- RabbitMQ → internal only (ClusterIP) only backend workers can reach
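As a sketch of that use case (names, ports, and the hostname are placeholders, and an NGINX ingress controller is assumed to be installed), the backend gets an internal ClusterIP Service while only the frontend is published through an Ingress:

# Internal-only Service for the backend (ClusterIP is the default type)
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend                 # assumes the backend Pods carry this label
  ports:
  - port: 8080                   # port other Pods use inside the cluster
    targetPort: 8080             # container port (example value)
---
# Public entry point: only the frontend is routed through the Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is installed
  rules:
  - host: app.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend       # assumes a ClusterIP Service named "frontend" exists
            port:
              number: 80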
For something running OUTSIDE Kubernetes like Ollama (local LLM), you treat Ollama like an external service. You would:
- Expose Ollama on a known port (example: 11434)
- Create a Kubernetes Service or ExternalName to reference it
- Apply NetworkPolicies so only specific pods can call Ollama
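A sketch of the Service step, assuming Ollama listens on a host at the placeholder address 192.0.2.10 on its default port 11434: a selector-less Service plus a matching Endpoints object lets in-cluster Pods call it as ollama:11434:

apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  ports:
  - port: 11434                  # port Pods use inside the cluster
    targetPort: 11434
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ollama                   # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.10               # placeholder IP of the host running Ollama
  ports:
  - port: 11434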
So again, we learned that all traffic flow between services is controlled by Kubernetes, NOT the OS. The OS only exposes 80 and 443 for public access to the Ingress. Everything else stays private and locked down.
This is the modern clean way.
Diagram for the Architecture
flowchart TD
subgraph Comparison_and_Technology
A[Windows Server + Hyper-V] -->|Best For| B(Legacy, .NET Framework, Active Directory)
B --> B1[Uses Heavy Virtual Machines VMs]
B1 --> B2(Each VM has its own Full OS Kernel)
C[RedHat/Linux + Kubernetes] -->|Best For| D(Containers, Microservices)
D --> D1[Uses Lightweight Containers]
D1 --> D2(Containers Share Host OS Kernel)
D1 --> D3(Fast Start, Low Overhead, Easy Scaling)
end
subgraph Modern_Application_Flow_in_Kubernetes
direction TB
M0[Internet] --> M1(Load Balancer)
M1 --> M2{Ingress Controller Exposed on Ports 80/443}
M2 --> M3[Kubernetes Cluster Host OS Linux/RedHat]
M3 --> M4[Application Pods / Containers]
M4 --> M5[Internal Services e.g. PostgreSQL RabbitMQ]
M3 --> M6[NetworkPolicies Internal Firewall/Access Control]
M4 --> M6
M3 --> M7[Horizontal Pod Autoscaling]
M4 --> M7
end
C --> Modern_Application_Flow_in_Kubernetes
style M2 fill:#cceeff,stroke:#333
style M7 fill:#ddffdd,stroke:#333
Example: Running RedHat on a VM, Using Cockpit/Podman, with RabbitMQ, PostgreSQL, and NodeJS Containers
As we discussed, Windows handles the VM side while RedHat manages the containers. I know this is not OpenShift, but for the purposes of the example I use Cockpit, which is not mandatory since you can also manage everything with Podman alone.
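A minimal sketch of that setup on a RHEL-family VM (package names, image tags, the password, and the one-line NodeJS app are examples):

# Optional web console: Cockpit plus its Podman plugin
sudo dnf install -y cockpit cockpit-podman
sudo systemctl enable --now cockpit.socket        # UI reachable at https://<vm-ip>:9090

# Run the three containers with Podman
podman run -d --name postgres -e POSTGRES_PASSWORD=changeme -p 5432:5432 docker.io/library/postgres:16
podman run -d --name rabbitmq -p 5672:5672 -p 15672:15672 docker.io/library/rabbitmq:3-management
podman run -d --name nodeapp -p 3000:3000 docker.io/library/node:20 \
  node -e "require('http').createServer((req,res)=>res.end('hello from NodeJS')).listen(3000)"

Cockpit's Containers page then shows the same containers that podman ps lists on the command line.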
💬 Let's Connect!
Passionate about cloud technologies, AWS, Azure, Docker, DevOps, C++, JS / NodeJS, Python, and computer science? I'd love to hear from you!
Whether you want to discuss architecture patterns, share ideas, or collaborate on projects, feel free to reach out.
Connect on LinkedIn

