Installing BadgerOrchestrator
BadgerOrchestrator is the Kubernetes agent that manages game server workloads as native Kubernetes resources. It runs as a Deployment inside your Kubernetes cluster and connects back to your BadgerPanel instance over WebSocket.
Prerequisites
Kubernetes Cluster
| Requirement | Minimum | Recommended |
|---|---|---|
| Kubernetes version | 1.25+ | 1.28+ |
| Cluster type | Any (K3s, K8s, EKS, GKE, AKS) | K3s or managed K8s |
| Worker nodes | 1 | 3+ for high availability |
| kubectl | Configured with cluster admin access | -- |
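A quick way to confirm these prerequisites is to check the API server version and your cluster access with kubectl:

```shell
# Confirm the server version meets the 1.25+ requirement
kubectl version

# Confirm your kubeconfig has cluster-level access by listing nodes
kubectl get nodes
```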
K3s
K3s is a lightweight, production-ready Kubernetes distribution that works well with BadgerOrchestrator. It is easy to install and requires fewer resources than full Kubernetes.
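For reference, a single-node K3s install is one command via the official install script (review the script before piping it to a shell):

```shell
# Install K3s (server, kubelet, and containerd) on the current host
curl -sfL https://get.k3s.io | sh -

# K3s writes its kubeconfig here; point kubectl at it
sudo kubectl get nodes --kubeconfig /etc/rancher/k3s/k3s.yaml
```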
Network Requirements
- Outbound HTTPS (443) from the cluster to your panel URL (for the WebSocket connection)
- NodePort range (default 30000-32767) accessible from the internet for game server traffic
- Container images must be pullable from public registries (Docker Hub)
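You can verify outbound HTTPS reachability from inside the cluster with a throwaway pod (replace `panel.your-domain.com` with your actual panel URL):

```shell
# Launch a temporary curl pod and attempt a TLS connection to the panel
kubectl run net-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv https://panel.your-domain.com/ -o /dev/null
```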
Panel Requirements
- BadgerPanel must be running and accessible over HTTPS
- You need admin access to register the cluster
Step 1: Register the Cluster in the Panel
- Log into your panel as an administrator
- Navigate to Admin > Clusters (or Admin > Kubernetes)
- Click Create Cluster
- Fill in the cluster details:
| Field | Description | Example |
|---|---|---|
| Name | Display name for this cluster | US-East-K8s |
| Description | Optional description | Production K8s cluster on AWS |
- After creating the cluster, you will receive:
- A cluster ID
- An authentication token
- A Kubernetes manifest for deploying the orchestrator
Step 2: Deploy the Orchestrator
Option A: Using the Generated Manifest (Recommended)
The panel generates a ready-to-apply Kubernetes manifest. Copy it from the cluster's detail page in the admin panel.
```shell
# Apply the generated manifest
kubectl apply -f orchestrator-manifest.yaml
```

The manifest creates:
- A Namespace (`badgerpanel-system`)
- A ServiceAccount with RBAC permissions
- A ClusterRole and ClusterRoleBinding for managing pods, deployments, services, and PVCs
- A ConfigMap with the orchestrator configuration
- A Deployment running the orchestrator binary
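After applying, you can confirm each of these resources exists:

```shell
# Namespaced resources created by the manifest
kubectl -n badgerpanel-system get serviceaccount,configmap,deployment

# Cluster-scoped RBAC resources
kubectl get clusterrole,clusterrolebinding | grep badgerpanel
```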
Option B: Manual Deployment
If you prefer to customize the deployment, use the following manifests as a starting point.
Namespace

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: badgerpanel-system
  labels:
    app.kubernetes.io/name: badgerpanel
    app.kubernetes.io/component: orchestrator
```

RBAC
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: badgerpanel-orchestrator
  namespace: badgerpanel-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: badgerpanel-orchestrator
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/exec", "services", "namespaces",
                "persistentvolumeclaims", "events"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: badgerpanel-orchestrator
subjects:
  - kind: ServiceAccount
    name: badgerpanel-orchestrator
    namespace: badgerpanel-system
roleRef:
  kind: ClusterRole
  name: badgerpanel-orchestrator
  apiGroup: rbac.authorization.k8s.io
```

ConfigMap
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: badgerpanel-orchestrator-config
  namespace: badgerpanel-system
data:
  config.yml: |
    panel_url: "https://panel.your-domain.com"
    token: "your-cluster-authentication-token"
    namespace_prefix: "bp-"
    log:
      level: "info"
    monitoring:
      stats_interval: 15
      event_buffer_size: 100
```

Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: badgerpanel-orchestrator
  namespace: badgerpanel-system
  labels:
    app.kubernetes.io/name: badgerpanel
    app.kubernetes.io/component: orchestrator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: badgerpanel
      app.kubernetes.io/component: orchestrator
  template:
    metadata:
      labels:
        app.kubernetes.io/name: badgerpanel
        app.kubernetes.io/component: orchestrator
    spec:
      serviceAccountName: badgerpanel-orchestrator
      containers:
        - name: orchestrator
          image: badgerpanel/orchestrator:latest
          # Or download the binary from your panel and build a custom image
          args:
            - "--config"
            - "/etc/badgerpanel/config.yml"
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
          volumeMounts:
            - name: config
              mountPath: /etc/badgerpanel
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: badgerpanel-orchestrator-config
```

Apply all manifests:
```shell
kubectl apply -f namespace.yaml
kubectl apply -f rbac.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml
```

Step 3: Verify the Deployment
Check Pod Status

```shell
kubectl -n badgerpanel-system get pods
```

You should see a running pod:

```
NAME                                       READY   STATUS    RESTARTS   AGE
badgerpanel-orchestrator-7d8f9c6b4-x2k9p   1/1     Running   0          30s
```

Check Orchestrator Logs
```shell
kubectl -n badgerpanel-system logs -f deployment/badgerpanel-orchestrator
```

Look for messages indicating:
- Configuration loaded successfully
- Connected to the panel via WebSocket
- Cluster information reported
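Instead of watching logs, you can also block until the Deployment reports ready:

```shell
# Wait for the orchestrator Deployment to become Ready (or time out after 2 minutes)
kubectl -n badgerpanel-system rollout status deployment/badgerpanel-orchestrator --timeout=120s
```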
Verify in the Panel
In the admin dashboard, navigate to Admin > Clusters. Your cluster should show:
- Status: Online (green indicator)
- Orchestrator version
- Node count and resource summary
- Connected timestamp
Step 4: Configure Storage
Game servers need persistent storage for their data files. Ensure your cluster has a StorageClass configured.
Check Available StorageClasses
```shell
kubectl get storageclass
```

Common options:
| Provider | StorageClass | Notes |
|---|---|---|
| K3s (default) | local-path | Local node storage, no replication |
| AWS EKS | gp3 / gp2 | EBS volumes |
| GKE | standard / premium-rwo | Persistent Disks |
| AKS | managed-premium | Azure Managed Disks |
| Longhorn | longhorn | Distributed storage for K3s/K8s |
| Rook-Ceph | rook-ceph-block | Distributed block storage |
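If your chosen StorageClass is not the cluster default, you can mark it as default so PVCs bind without an explicit `storageClassName`. The example below uses `longhorn`; substitute the class name from your cluster:

```shell
# Mark a StorageClass as the cluster default (here: longhorn, as an example)
kubectl patch storageclass longhorn -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```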
Local Storage
If using local-path or hostPath storage, game server data is tied to a specific node. Pod rescheduling to a different node will result in data loss. Use a distributed storage solution (Longhorn, Rook-Ceph) for production environments.
Step 5: Configure Networking
Game servers are exposed via NodePort services. Ensure the NodePort range is accessible from the internet.
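Once game servers are running, you can list their services and allocated NodePorts across all namespaces (the orchestrator creates game server namespaces using the `namespace_prefix` from its config, `bp-` by default):

```shell
# Show all NodePort services and their assigned ports
kubectl get svc -A | grep NodePort
```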
Default NodePort Range
Kubernetes uses ports 30000-32767 by default. Players connect to game servers using:

```
<any-node-ip>:<nodeport>
```

Firewall Rules
Ensure NodePort range is open on all worker nodes:
```shell
# Example for cloud security groups:
# Allow inbound TCP 30000-32767
# Allow inbound UDP 30000-32767
# Equivalent ufw rules on self-managed nodes:
sudo ufw allow 30000:32767/tcp
sudo ufw allow 30000:32767/udp
```

Custom Port Ranges
If you need specific port numbers (e.g., 25565 for Minecraft), you can configure the Kubernetes API server's --service-node-port-range flag. Consult your Kubernetes distribution's documentation.
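As an illustration, on K3s the flag can be passed through to the embedded API server; the exact mechanism varies by distribution, so treat this as a sketch rather than BadgerPanel-specific guidance:

```shell
# K3s example: widen the NodePort range to include common game ports
k3s server --kube-apiserver-arg=service-node-port-range=25565-32767
```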
Troubleshooting Installation
Pod Not Starting
```shell
# Describe the pod for events and errors
kubectl -n badgerpanel-system describe pod -l app.kubernetes.io/component=orchestrator

# Check for image pull errors
kubectl -n badgerpanel-system get events --sort-by='.lastTimestamp'
```

Orchestrator Can't Connect to Panel
Check the logs for connection errors:

```shell
kubectl -n badgerpanel-system logs deployment/badgerpanel-orchestrator | grep -i error
```

Common causes:
- `panel_url` is incorrect or unreachable from inside the cluster
- DNS resolution issues -- ensure the cluster can resolve your panel's domain
- Authentication token mismatch -- verify the token in the ConfigMap matches the panel
- TLS certificate issues -- if using self-signed certificates, the orchestrator may reject the connection
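To rule out DNS issues from inside the cluster, you can run a quick lookup from a temporary pod (replace the hostname with your panel's domain):

```shell
# Resolve the panel's domain from inside the cluster
kubectl -n badgerpanel-system run dns-test --rm -it --restart=Never \
  --image=busybox -- nslookup panel.your-domain.com
```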
RBAC Permission Errors
If the orchestrator logs show "forbidden" errors:
```shell
# Verify the ClusterRoleBinding exists
kubectl get clusterrolebinding badgerpanel-orchestrator

# Verify the ServiceAccount is used by the pod
kubectl -n badgerpanel-system get pod -l app.kubernetes.io/component=orchestrator -o jsonpath='{.items[0].spec.serviceAccountName}'

# Check a specific permission directly
kubectl auth can-i create pods --as=system:serviceaccount:badgerpanel-system:badgerpanel-orchestrator
```

Next Steps
- Orchestrator Configuration -- Advanced configuration options
- Upgrading the Orchestrator -- How to update to newer versions
- Architecture -- Understand how the orchestrator fits into the platform