Building a Complete Kubernetes Development Environment with Podman Desktop
A Comprehensive Guide to Setting Up PostgreSQL, RabbitMQ, MinIO, and Mail Server on Windows 11
1. Prerequisites & Initial Setup
⚠️ Important: Before You Begin
You must complete these steps before following the deployment guides below.
Step 1: Install Podman Desktop
Podman Desktop is a container management tool that provides a Docker-compatible interface with enhanced security features. It's the foundation of our local Kubernetes environment.
- Download Podman Desktop:
- Visit: https://podman-desktop.io/
- Download the Windows installer (usually podman-desktop-setup.exe)
- Install Podman Desktop:
- Run the installer with administrator privileges
- Follow the installation wizard
- Accept the default settings unless you have specific requirements
- Initialize Podman Machine:
# Open PowerShell and run:
podman machine init --cpus 4 --memory 8192 --disk-size 50

# Start the machine
podman machine start

# Verify installation
podman --version
podman machine list
- Enable Kubernetes in Podman Desktop:
- Open Podman Desktop application
- Go to Settings → Resources
- Find Kubernetes and click Enable
- Wait for Kubernetes to initialize (this may take a few minutes)
Step 2: Verify Kubernetes Installation
# Check kubectl is installed and working
kubectl version --client
# Verify cluster is running
kubectl cluster-info
# Check nodes
kubectl get nodes
# Expected output: One node in "Ready" status
Step 3: Pull Required Container Images
Before deploying services, we need to pull all the container images. This ensures faster deployments and allows us to work offline if needed.
# Pull PostgreSQL image
podman pull postgres:16.1
# Pull RabbitMQ with Management UI
podman pull rabbitmq:3.13-management
# Pull MinIO
podman pull minio/minio:RELEASE.2024-11-07T00-52-20Z
# Pull docker-mailserver
podman pull mailserver/docker-mailserver:13.3.1
# Verify all images are downloaded
podman images
✅ Verification Checklist
Before proceeding, ensure:
- Podman Desktop is installed and running
- Podman machine is started and healthy
- Kubernetes is enabled and the cluster is ready
- kubectl commands work correctly
- All four container images are pulled successfully
Step 4: Prepare Storage Directories
We'll store all persistent data on the D: drive to keep it separate from the system drive and easily accessible.
# SSH into Podman machine
podman machine ssh
# Create all required directories
sudo mkdir -p /mnt/d/K8s_Data/postgres
sudo mkdir -p /mnt/d/K8s_Data/rabbitmq
sudo mkdir -p /mnt/d/K8s_Data/minio
sudo mkdir -p /mnt/d/K8s_Data/mailserver-data
sudo mkdir -p /mnt/d/K8s_Data/mailserver-config
# Set permissions (required due to NTFS limitations)
sudo chmod -R 777 /mnt/d/K8s_Data
# Verify directories exist
ls -la /mnt/d/K8s_Data/
# Exit Podman machine
exit
ℹ️ Why These Preparations Matter
Pre-pulling images and creating directories prevents deployment failures and speeds up the process. It also helps identify connectivity or permission issues early, before we start creating Kubernetes resources.
2. Understanding the Environment
What is Podman Desktop?
Podman Desktop is an open-source container management tool that serves as a Docker alternative. Unlike Docker, Podman runs containers without requiring a daemon running as root, making it more secure. It provides full Docker CLI compatibility, meaning most Docker commands work with Podman by simply replacing docker with podman.
Why Podman Instead of Docker Desktop?
- Security: Rootless containers by default
- No Daemon Required: Each container runs as a child process
- Docker Compatibility: Drop-in replacement for Docker CLI
- Kubernetes Integration: Built-in Kubernetes support
- Free for Commercial Use: No licensing restrictions
Project Overview
Today, we built a complete local Kubernetes development environment on Windows 11 using Podman Desktop. This setup includes four critical services that form the backbone of modern cloud-native applications:
| Service | Purpose | Use Case | Port(s) |
|---|---|---|---|
| PostgreSQL 16.1 | Relational Database | Primary data storage, ACID transactions | 30432 |
| RabbitMQ 3.13 | Message Broker | Async communication, task queues | 30672 (AMQP) 31672 (Management) |
| MinIO | Object Storage | File storage, S3-compatible API | 30900 (API) 30901 (Console) |
| Mail Server | Email Infrastructure | SMTP/IMAP email services | 30025, 30143, 30587, 30993 |
Storage Architecture
All services store their data on the D: drive at D:\K8s_Data\, mapped to /mnt/d/K8s_Data/ inside the Podman WSL2 virtual machine. This approach provides:
- ✅ Data persistence across pod restarts
- ✅ Easy access from Windows Explorer
- ✅ Simple backup and restore procedures
- ✅ Separation from system drive
3. PostgreSQL Deployment
Challenge: PGDATA Directory Configuration
The first service we deployed was PostgreSQL. We encountered an important issue: PostgreSQL requires a specific data directory structure. When mounting volumes, PostgreSQL creates a subdirectory based on the hostname, which can cause permission issues.
💡 Solution: Use PGDATA Environment Variable
By setting PGDATA=/var/lib/postgresql/data/pgdata, we explicitly tell PostgreSQL where to store its data files, avoiding conflicts with the mount point.
Final PostgreSQL Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
securityContext:
fsGroup: 999 # PostgreSQL user group
containers:
- name: postgres
image: postgres:16.1
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: "mydatabase"
- name: POSTGRES_USER
value: "myuser"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-storage
hostPath:
path: /mnt/d/K8s_Data/postgres
type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
type: NodePort
ports:
- port: 5432
targetPort: 5432
nodePort: 30432
selector:
app: postgres
Deployment Steps
# Create secret for password
kubectl create secret generic postgres-secret \
--from-literal=password='supersecretpassword'
# Apply the deployment
kubectl apply -f postgres.yaml
# Verify deployment
kubectl get pods -l app=postgres
kubectl get svc postgres-service
# Test connection
kubectl exec -it deployment/postgres-deployment -- psql -U myuser -d mydatabase
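Note that Kubernetes Secrets are base64-encoded, not encrypted; anyone with read access to the Secret can recover the password. A small stdlib sketch of what that encoding looks like (using the example password from above):

```python
import base64

# Kubernetes stores Secret values base64-encoded, NOT encrypted; this is
# what `kubectl get secret postgres-secret -o yaml` exposes in its data field.
password = "supersecretpassword"
encoded = base64.b64encode(password.encode()).decode()
decoded = base64.b64decode(encoded).decode()

print(encoded)   # the value as it would appear in the Secret manifest
print(decoded)   # trivially reversible; treat manifests as sensitive
```

This is one reason the secret management approach in section 7 keeps credentials out of YAML files and shell history entirely.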
PostgreSQL Access Information
Host: localhost
Port: 30432
Database: mydatabase
Username: myuser
Connection String: postgresql://myuser:supersecretpassword@localhost:30432/mydatabase
Data Location: D:\K8s_Data\postgres\pgdata\
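Most client libraries (psycopg2, SQLAlchemy, JDBC drivers) accept this URL form. A quick stdlib check that the pieces parse as expected, using the values from the box above:

```python
from urllib.parse import urlsplit

# Connection string from the access information above.
url = "postgresql://myuser:supersecretpassword@localhost:30432/mydatabase"
parts = urlsplit(url)

print(parts.hostname)  # localhost
print(parts.port)      # 30432
print(parts.username)  # myuser
print(parts.path)      # /mydatabase
```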
4. RabbitMQ Deployment
Understanding RabbitMQ in Kubernetes
RabbitMQ is a message broker that enables asynchronous communication between services. It's essential for building scalable, decoupled microservices architectures. Our deployment includes the management plugin, providing a web UI for monitoring and administration.
Security Configuration
We implemented proper security practices by storing credentials in Kubernetes secrets rather than hardcoding them in YAML files. This follows the principle of separation of concerns and makes credential rotation easier.
Final RabbitMQ Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: rabbitmq-deployment
labels:
app: rabbitmq
spec:
replicas: 1
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
securityContext:
fsGroup: 999
containers:
- name: rabbitmq
image: rabbitmq:3.13-management
ports:
- containerPort: 5672 # AMQP
- containerPort: 15672 # Management UI
env:
- name: RABBITMQ_DEFAULT_USER
valueFrom:
secretKeyRef:
name: rabbitmq-secret
key: username
- name: RABBITMQ_DEFAULT_PASS
valueFrom:
secretKeyRef:
name: rabbitmq-secret
key: password
- name: RABBITMQ_MNESIA_BASE
value: /var/lib/rabbitmq/mnesia
volumeMounts:
- name: rabbitmq-storage
mountPath: /var/lib/rabbitmq
volumes:
- name: rabbitmq-storage
hostPath:
path: /mnt/d/K8s_Data/rabbitmq
type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq-service
spec:
type: NodePort
ports:
- name: amqp
protocol: TCP
port: 5672
targetPort: 5672
nodePort: 30672
- name: management
protocol: TCP
port: 15672
targetPort: 15672
nodePort: 31672
selector:
app: rabbitmq
Deployment Steps
# Create directory
podman machine ssh
sudo mkdir -p /mnt/d/K8s_Data/rabbitmq
sudo chmod 777 /mnt/d/K8s_Data/rabbitmq
exit
# Create secret
kubectl create secret generic rabbitmq-secret \
--from-literal=username='rabbituser' \
--from-literal=password='rabbitpass'
# Apply deployment
kubectl apply -f rabbitmq.yaml
# Verify
kubectl get pods -l app=rabbitmq
kubectl logs -l app=rabbitmq
🐰 RabbitMQ Access Information
AMQP Port: localhost:30672
Management UI: http://localhost:31672
Username: rabbituser
Connection String: amqp://rabbituser:rabbitpass@localhost:30672
Data Location: D:\K8s_Data\rabbitmq\mnesia\
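The credentials above are URL-safe, but passwords containing characters like @ or / must be percent-encoded before being placed in an AMQP URL, or clients will misparse the host. A stdlib sketch (the @-containing password is a hypothetical example, not one used in this guide):

```python
from urllib.parse import quote, unquote, urlsplit

# Passwords with URL-special characters must be percent-encoded in AMQP URLs.
user, password = "rabbituser", "p@ss/word"   # hypothetical password
url = f"amqp://{quote(user, safe='')}:{quote(password, safe='')}@localhost:30672"
print(url)  # '@' and '/' become %40 and %2F inside the URL

parts = urlsplit(url)
restored = unquote(parts.password)  # decode to recover the original value
print(restored)
```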
5. MinIO Deployment
Challenge: NTFS Filesystem Limitations
MinIO presented the most significant challenge of our deployment. The issue stemmed from Windows NTFS filesystems not fully supporting Linux filesystem operations that MinIO requires, specifically:
- Atomic file renames
- Proper file locking mechanisms
- Linux user/group permissions
⚠️ The Permission Problem
Even after setting chmod 777 permissions, MinIO failed with "file access denied" errors. This is because the POSIX permissions shown on an NTFS mount are cosmetic; they don't map onto the actual NTFS ACLs.
Solution: Run MinIO as Root
For local development environments, the pragmatic solution is to run MinIO as root user (UID 0). This bypasses the permission issues while being acceptable for non-production use.
💡 Production Alternative
In production, use proper Linux filesystems (ext4, xfs) with PersistentVolumes from cloud providers. Never run containers as root in production.
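The write-then-rename pattern object stores depend on can be sketched with the stdlib. This is a simplified illustration, not MinIO's actual code: os.replace() is atomic on real Linux filesystems, which is exactly the guarantee NTFS-backed mounts inside the Podman VM cannot provide.

```python
import os
import tempfile

# Simplified illustration of the write-then-rename pattern: either the old
# object or the complete new one is visible, never a half-written file.
with tempfile.TemporaryDirectory() as d:
    tmp = os.path.join(d, "object.tmp")
    final = os.path.join(d, "object")
    with open(tmp, "w") as f:
        f.write("payload")        # stage the data under a temporary name
    os.replace(tmp, final)        # atomically promote it to its final name
    tmp_gone = not os.path.exists(tmp)
    with open(final) as f:
        content = f.read()
print(content)
```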
Final MinIO Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: minio-deployment
labels:
app: minio
spec:
replicas: 1
selector:
matchLabels:
app: minio
strategy:
type: Recreate
template:
metadata:
labels:
app: minio
spec:
containers:
- name: minio
image: minio/minio:RELEASE.2024-11-07T00-52-20Z
args: ["server", "/data", "--console-address", ":9001"]
ports:
- containerPort: 9000 # API
- containerPort: 9001 # Console
env:
- name: MINIO_ROOT_USER
valueFrom:
secretKeyRef:
name: minio-secret
key: root-user
- name: MINIO_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: minio-secret
key: root-password
volumeMounts:
- name: minio-storage
mountPath: /data
securityContext:
runAsUser: 0 # Run as root for NTFS compatibility
runAsGroup: 0
livenessProbe:
httpGet:
path: /minio/health/live
port: 9000
initialDelaySeconds: 30
periodSeconds: 30
readinessProbe:
httpGet:
path: /minio/health/ready
port: 9000
initialDelaySeconds: 10
periodSeconds: 10
volumes:
- name: minio-storage
hostPath:
path: /mnt/d/K8s_Data/minio
type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: NodePort
ports:
- name: api
protocol: TCP
port: 9000
targetPort: 9000
nodePort: 30900
- name: console
protocol: TCP
port: 9001
targetPort: 9001
nodePort: 30901
selector:
app: minio
Deployment Steps
# Create directory with full permissions
podman machine ssh
sudo mkdir -p /mnt/d/K8s_Data/minio
sudo chmod 777 /mnt/d/K8s_Data/minio
exit
# Create secret
kubectl create secret generic minio-secret \
--from-literal=root-user='minioadmin' \
--from-literal=root-password='minioadmin123'
# Apply deployment
kubectl apply -f minio.yaml
# Watch logs (should see successful startup)
kubectl logs -f deployment/minio-deployment
MinIO Access Information
Console UI: http://localhost:30901
API Endpoint: http://localhost:30900
Access Key: minioadmin
Secret Key: minioadmin123
Data Location: D:\K8s_Data\minio\.minio.sys\
S3 Compatible: Yes (use AWS SDK with custom endpoint)
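A quick way to confirm the deployment from the Windows side is to hit the same liveness endpoint the Kubernetes probes use. This sketch assumes the NodePort mapping above and degrades gracefully when MinIO isn't running:

```python
import urllib.error
import urllib.request

# Probe the liveness endpoint defined in the livenessProbe above,
# via the NodePort mapping (30900 -> 9000).
url = "http://localhost:30900/minio/health/live"
try:
    with urllib.request.urlopen(url, timeout=3) as resp:
        status = resp.status
except (urllib.error.URLError, OSError):
    status = None

print("MinIO is live" if status == 200 else "MinIO is not reachable")
```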
6. Mail Server Deployment
Understanding docker-mailserver
Docker-mailserver is a production-ready, full-featured mail server solution that includes Postfix, Dovecot, SpamAssassin, and more. It's designed to run in containers and provides a complete email infrastructure.
Challenge: SSL Certificate Generation
The initial deployment failed because docker-mailserver expected self-signed SSL certificates that didn't exist. The error indicated missing certificate files at specific paths.
π‘ Solution: Certificate Auto-Generation
We configured the mail server to generate self-signed certificates automatically on first startup by setting SSL_TYPE=self-signed, with PERMIT_DOCKER=network permitting connections from the container network.
Final Mail Server Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: mailserver-deployment
labels:
app: mailserver
spec:
replicas: 1
selector:
matchLabels:
app: mailserver
strategy:
type: Recreate
template:
metadata:
labels:
app: mailserver
spec:
hostname: mail
securityContext:
runAsUser: 0
runAsGroup: 0
fsGroup: 0
containers:
- name: mailserver
image: mailserver/docker-mailserver:13.3.1
ports:
- containerPort: 25 # SMTP
- containerPort: 143 # IMAP
- containerPort: 465 # SMTPS
- containerPort: 587 # Submission
- containerPort: 993 # IMAPS
env:
- name: OVERRIDE_HOSTNAME
value: "mail.example.com"
- name: ENABLE_SPAMASSASSIN
value: "1"
- name: ENABLE_CLAMAV
value: "0"
- name: ENABLE_FAIL2BAN
value: "1"
- name: ENABLE_POSTGREY
value: "0"
- name: SSL_TYPE
value: "self-signed"
- name: PERMIT_DOCKER
value: "network"
- name: ONE_DIR
value: "1"
- name: TZ
value: "UTC"
- name: LOG_LEVEL
value: "info"
volumeMounts:
- name: mail-data
mountPath: /var/mail-state
- name: mail-config
mountPath: /tmp/docker-mailserver/
livenessProbe:
exec:
command:
- /bin/bash
- -c
- "supervisorctl status | grep -E 'postfix|dovecot' | grep RUNNING"
initialDelaySeconds: 180
periodSeconds: 30
timeoutSeconds: 10
readinessProbe:
exec:
command:
- /bin/bash
- -c
- "supervisorctl status | grep -E 'postfix|dovecot' | grep RUNNING"
initialDelaySeconds: 120
periodSeconds: 10
volumes:
- name: mail-data
hostPath:
path: /mnt/d/K8s_Data/mailserver-data
type: DirectoryOrCreate
- name: mail-config
hostPath:
path: /mnt/d/K8s_Data/mailserver-config
type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
name: mailserver-service
spec:
type: NodePort
ports:
- name: smtp
port: 25
targetPort: 25
nodePort: 30025
- name: imap
port: 143
targetPort: 143
nodePort: 30143
- name: smtps
port: 465
targetPort: 465
nodePort: 30465
- name: submission
port: 587
targetPort: 587
nodePort: 30587
- name: imaps
port: 993
targetPort: 993
nodePort: 30993
selector:
app: mailserver
Deployment and Email Account Management
# Create directories
podman machine ssh
sudo mkdir -p /mnt/d/K8s_Data/mailserver-data
sudo mkdir -p /mnt/d/K8s_Data/mailserver-config
sudo chmod -R 777 /mnt/d/K8s_Data/mailserver-data
sudo chmod -R 777 /mnt/d/K8s_Data/mailserver-config
exit
# Apply deployment
kubectl apply -f mailserver.yaml
# Wait for pod to be ready (this takes a few minutes)
kubectl wait --for=condition=ready pod -l app=mailserver --timeout=300s
# Create email account
POD_NAME=$(kubectl get pod -l app=mailserver -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- setup email add user@example.com SecurePassword123
# List all email accounts
kubectl exec -it $POD_NAME -- setup email list
# Check services are running
kubectl exec -it $POD_NAME -- supervisorctl status
📧 Mail Server Access Information
SMTP (Plain): localhost:30025
SMTP (Submission): localhost:30587 (STARTTLS) ✅ Recommended
SMTP (SSL): localhost:30465
IMAP (Plain): localhost:30143
IMAP (SSL): localhost:30993 ✅ Recommended
Data Location: D:\K8s_Data\mailserver-data\
Config Location: D:\K8s_Data\mailserver-config\
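To smoke-test the account created above, a client would connect to the submission port with STARTTLS. This sketch only builds and inspects the message so it runs without the server; the actual send (commented out) would use smtplib.SMTP("localhost", 30587):

```python
from email.message import EmailMessage

# Compose a test message for the account created in the deployment steps.
msg = EmailMessage()
msg["From"] = "user@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Mail server smoke test"
msg.set_content("Hello from the local Kubernetes mail server.")

raw = msg.as_string()
print(raw)

# To actually send (requires the mail server to be running):
# import smtplib
# with smtplib.SMTP("localhost", 30587) as smtp:
#     smtp.starttls()
#     smtp.login("user@example.com", "SecurePassword123")
#     smtp.send_message(msg)
```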
7. Secret Management with PowerShell
The Problem with Hardcoded Credentials
Initially, we created Kubernetes secrets using command-line arguments, which meant passwords were stored in shell history and potentially visible in scripts. This is a security anti-pattern.
Solution: Windows Credential Manager Integration
We developed a comprehensive PowerShell-based secret management solution that integrates with Windows Credential Manager. This provides:
- Encrypted storage using Windows DPAPI (Data Protection API)
- GUI management through native Windows Credential Manager
- PowerShell automation for Kubernetes integration
- Per-user encryption (secrets are user-specific)
- No plaintext files on disk
PowerShell SecretManagement Setup
# Install required PowerShell modules
Install-Module -Name Microsoft.PowerShell.SecretManagement -Repository PSGallery -Force -Scope CurrentUser
Install-Module -Name SecretManagement.JustinGrote.CredMan -Repository PSGallery -Force -Scope CurrentUser
# Register Windows Credential Manager as vault
Register-SecretVault -Name "WindowsCredentialManager" -ModuleName "SecretManagement.JustinGrote.CredMan" -DefaultVault
# Verify registration
Get-SecretVault
Secret Manager Utility Features
We created SecretManagerUtility.ps1, a comprehensive tool that provides:
| Feature | Description | Benefit |
|---|---|---|
| Initialize Secrets | Interactive prompts to create all secrets | Secure input (passwords hidden) |
| Show Secrets | Display all stored Kubernetes secrets | Quick verification and auditing |
| Apply to Kubernetes | Sync secrets to Kubernetes cluster | One-command deployment |
| Verify Secrets | Check which secrets exist in cluster | Troubleshooting and validation |
| Update Secret | Change individual secret values | Easy credential rotation |
| GUI Access | Open Windows Credential Manager | Visual management interface |
| Export/Backup | Save secrets to encrypted file | Disaster recovery |
| Remove Secrets | Delete all Kubernetes secrets | Clean environment reset |
Using the Secret Manager
# Run the secret manager
.\SecretManagerUtility.ps1
# Interactive menu will appear:
# 1. Initialize Secrets (first time setup)
# 2. Show All Secrets
# 3. Apply Secrets to Kubernetes
# 4. Verify Kubernetes Secrets
# 5. Update a Secret
# 6. Open Credential Manager GUI
# 7. Export Secrets (Backup)
# 8. Remove All Secrets
# 9. Exit
# Example workflow:
# First time: Select option 1 to create secrets
# Enter passwords when prompted (input is masked)
# Then: Select option 3 to apply to Kubernetes
# Done! All services now have their credentials
Managing Secrets via GUI
- Run the utility and select option 6, or manually open Credential Manager:
  - Press Win + R
  - Type: control /name Microsoft.CredentialManager
  - Click Windows Credentials
- Find secrets prefixed with k8s-:
  - k8s-postgres-password
  - k8s-rabbitmq-username
  - k8s-rabbitmq-password
  - k8s-minio-root-user
  - k8s-minio-root-password
- Click any secret to expand, then click Edit
- Update the password and click Save
- Re-run the utility and select option 3 to sync changes to Kubernetes
✅ Security Best Practices Implemented
- Secrets never stored in plaintext files
- Windows DPAPI encryption (AES-256)
- User-specific encryption (can't be accessed by other users)
- No credentials in version control
- Easy credential rotation workflow
- Kubernetes secrets created dynamically
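A small, hypothetical rotation helper (the PowerShell utility above normally handles this): generate a strong random value, store it in Credential Manager, then re-sync to Kubernetes via option 3.

```python
import secrets
import string

# Generate a strong random password for credential rotation using the
# cryptographically secure stdlib 'secrets' module.
alphabet = string.ascii_letters + string.digits
new_password = "".join(secrets.choice(alphabet) for _ in range(24))
print(new_password)
```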
8. Final Architecture
System Architecture Diagram
┌─────────────────────────────────────────────────────────────────────┐
│ Windows 11 Host │
│ ┌───────────────────────────────────────────────────────────────┐ │
│ │ Podman Desktop (WSL2 Backend) │ │
│ │ ┌─────────────────────────────────────────────────────────┐ │ │
│ │ │ Kubernetes Cluster (Single Node) │ │ │
│ │ │ │ │ │
│ │ │ ┌──────────────┐ ┌──────────────┐ ┌─────────────┐ │ │ │
│ │ │ │ PostgreSQL │ │ RabbitMQ │ │ MinIO │ │ │ │
│ │ │ │ Port: 30432 │ │ Port: 30672 │ │Port: 30900 │ │ │ │
│ │ │ │ │ │ Port: 31672 │ │Port: 30901 │ │ │ │
│ │ │ └──────┬───────┘ └──────┬───────┘ └──────┬──────┘ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ ┌──────────────────────────────────────────────────┐ │ │ │
│ │ │ │ Mail Server │ │ │ │
│ │ │ │ SMTP: 30025, 30587, 30465 │ │ │ │
│ │ │ │ IMAP: 30143, 30993 │ │ │ │
│ │ │ └──────────────────┬───────────────────────────────┘ │ │ │
│ │ │ │ │ │ │
│ │ └─────────────────────┼───────────────────────────────────┘ │ │
│ │ │ │ │
│ │ ┌─────────────────────▼───────────────────────────────────┐ │ │
│ │ │ Volume Mounts (/mnt/d/K8s_Data/) │ │ │
│ │ └─────────────────────┬───────────────────────────────────┘ │ │
│ └────────────────────────┼──────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────────▼──────────────────────────────────────┐ │
│ │ D:\K8s_Data\ (NTFS) │ │
│ │ ├── postgres/ │ │
│ │ ├── rabbitmq/ │ │
│ │ ├── minio/ │ │
│ │ ├── mailserver-data/ │ │
│ │ └── mailserver-config/ │ │
│ └───────────────────────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────────┐ │
│ │ Windows Credential Manager (DPAPI Encrypted) │ │
│ │ ├── k8s-postgres-password │ │
│ │ ├── k8s-rabbitmq-username │ │
│ │ ├── k8s-rabbitmq-password │ │
│ │ ├── k8s-minio-root-user │ │
│ │ └── k8s-minio-root-password │ │
│ └───────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
Complete Service Overview
| Service | Image | Ports | Data Path | Security Context |
|---|---|---|---|---|
| PostgreSQL | postgres:16.1 | 30432 | D:\K8s_Data\postgres\ | fsGroup: 999 |
| RabbitMQ | rabbitmq:3.13-management | 30672 (AMQP), 31672 (UI) | D:\K8s_Data\rabbitmq\ | fsGroup: 999 |
| MinIO | minio/minio:RELEASE.2024-11-07T00-52-20Z | 30900 (API), 30901 (Console) | D:\K8s_Data\minio\ | runAsUser: 0 (root) |
| Mail Server | mailserver/docker-mailserver:13.3.1 | 30025, 30143, 30465, 30587, 30993 | D:\K8s_Data\mailserver-data\, D:\K8s_Data\mailserver-config\ | runAsUser: 0 (root) |
9. Troubleshooting & Lessons Learned
Challenge 1: PostgreSQL Data Directory Issues
Problem: PostgreSQL failed to start or corrupted data when using direct volume mounts.
Root Cause: PostgreSQL expects a specific directory structure and creates subdirectories based on the hostname.
Solution: Set PGDATA environment variable to /var/lib/postgresql/data/pgdata to explicitly define the data directory within the mount point.
Challenge 2: NTFS Permission Limitations with MinIO
Problem: MinIO failed with "file access denied" errors despite chmod 777.
Root Cause: NTFS doesn't support Linux user/group permissions. The displayed permissions are cosmetic and don't affect actual NTFS ACLs. MinIO requires atomic file operations that NTFS can't guarantee.
Solution: Run MinIO as root (runAsUser: 0) in development environments. In production, use proper Linux filesystems (ext4, xfs) with cloud provider PersistentVolumes.
Challenge 3: Mail Server SSL Certificate Generation
Problem: Mail server shut down immediately with missing SSL certificate errors.
Root Cause: docker-mailserver expected self-signed certificates to already exist at specific paths.
Solution: Configure SSL_TYPE=self-signed and PERMIT_DOCKER=network to allow automatic certificate generation on first startup. Certificates are then stored in the config volume for subsequent restarts.
Challenge 4: Secret Management Security
Problem: Credentials stored in shell history and potentially visible in scripts.
Root Cause: Using kubectl create secret with --from-literal directly on command line.
Solution: Developed comprehensive PowerShell secret manager integrating with Windows Credential Manager. Uses DPAPI encryption and provides GUI management.
Why chmod 777?
We used chmod -R 777 extensively. While this seems like a security concern, it's acceptable here because:
- ✅ Local development environment (not production)
- ✅ Personal machine with single user
- ✅ Podman VM is isolated from external network
- ✅ NTFS permissions don't map cleanly to Linux anyway
- ✅ Services are only accessible via localhost
- ❌ Never use 777 in production with proper Linux filesystems
Key Takeaways
- Environment-Specific Solutions: What works in development (running as root, 777 permissions) is unacceptable in production. Always plan the migration path.
- Filesystem Matters: NTFS limitations affected MinIO significantly. Understand your storage layer before deploying stateful services.
- Security from the Start: Implementing proper secret management early prevents technical debt and security issues later.
- Documentation is Critical: Each challenge taught us something new. Document issues and solutions for future reference.
- Test, Then Test Again: Verify each deployment step before moving to the next service. Troubleshooting multiple failed services simultaneously is much harder.
10. Conclusion
What We Accomplished
Today, we successfully built a complete, production-like Kubernetes development environment on Windows 11 using Podman Desktop. This achievement includes:
- ✅ Four critical services deployed and running: PostgreSQL, RabbitMQ, MinIO, and Mail Server
- ✅ All data persisting on D: drive with easy Windows Explorer access
- ✅ Secure credential management using Windows Credential Manager
- ✅ PowerShell automation for secret management and deployment
- ✅ Production-like configurations with liveness and readiness health checks
- ✅ Comprehensive documentation of challenges and solutions
Skills Developed
| Category | Skills |
|---|---|
| Kubernetes | Deployments, Services, ConfigMaps, Secrets, Volume Mounts, NodePort, Health Checks |
| Containers | Podman Desktop, Image Management, Container Security Contexts, WSL2 Integration |
| Databases | PostgreSQL configuration, data directory management, connection string formatting |
| Message Brokers | RabbitMQ setup, AMQP protocol, management UI, queue configuration |
| Object Storage | MinIO deployment, S3-compatible API, bucket management, filesystem challenges |
| Email Infrastructure | SMTP/IMAP configuration, SSL certificates, email account management, Postfix/Dovecot |
| Security | Kubernetes Secrets, Windows Credential Manager, DPAPI encryption, credential rotation |
| PowerShell | Module management, SecretManagement API, interactive scripts, automation |
| Troubleshooting | Log analysis, permission debugging, filesystem limitations, container inspection |
Next Steps
Now that you have a fully functional development environment, consider these enhancements:
- Add Monitoring:
- Deploy Prometheus for metrics collection
- Add Grafana for visualization
- Set up AlertManager for notifications
- Implement Ingress:
- Install Nginx Ingress Controller
- Configure domain-based routing
- Add SSL/TLS termination
- Database Backups:
- Create automated PostgreSQL backup jobs
- Implement point-in-time recovery
- Test restore procedures
- CI/CD Pipeline:
- Integrate with GitHub Actions or GitLab CI
- Automate deployment testing
- Implement rolling updates
- Service Mesh:
- Explore Istio or Linkerd
- Add distributed tracing
- Implement advanced traffic management
Production Migration Checklist
When moving this setup to production, remember to:
- ❌ Never run containers as root (except where absolutely required)
- ❌ Never use chmod 777 permissions
- ❌ Never use NodePort in production (use LoadBalancer or Ingress)
- ❌ Never store secrets in YAML files or version control
- ✅ Use managed Kubernetes services (EKS, GKE, AKS)
- ✅ Use cloud provider PersistentVolumes with proper storage classes
- ✅ Implement proper RBAC (Role-Based Access Control)
- ✅ Use real SSL certificates (Let's Encrypt, cloud provider certificates)
- ✅ Set resource limits and requests for all containers
- ✅ Implement network policies for pod-to-pod communication
- ✅ Use external secret managers (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault)
- ✅ Set up multi-zone deployments for high availability
- ✅ Implement automated backups and disaster recovery
Useful Commands Reference
# Check all running services
kubectl get all
# View logs for all services
kubectl logs -l app=postgres
kubectl logs -l app=rabbitmq
kubectl logs -l app=minio
kubectl logs -l app=mailserver
# Get pod details
kubectl describe pod -l app=postgres
# Execute commands in pods
kubectl exec -it deployment/postgres-deployment -- psql -U myuser -d mydatabase
kubectl exec -it deployment/minio-deployment -- mc admin info local
# Check persistent data on Windows
dir D:\K8s_Data\postgres
dir D:\K8s_Data\rabbitmq
dir D:\K8s_Data\minio
dir D:\K8s_Data\mailserver-data
# Restart a service
kubectl rollout restart deployment/postgres-deployment
# Scale a deployment
kubectl scale deployment/postgres-deployment --replicas=0
kubectl scale deployment/postgres-deployment --replicas=1
# Delete and recreate everything
kubectl delete -f postgres.yaml