Installing BadgerOrchestrator

BadgerOrchestrator is the Kubernetes agent that manages game server workloads as native Kubernetes resources. It runs as a Deployment inside your Kubernetes cluster and connects back to your BadgerPanel instance over WebSocket.

Prerequisites

Kubernetes Cluster

| Requirement | Minimum | Recommended |
|---|---|---|
| Kubernetes version | 1.25+ | 1.28+ |
| Cluster type | Any (K3s, K8s, EKS, GKE, AKS) | K3s or managed K8s |
| Worker nodes | 1 | 3+ for high availability |
| kubectl | Configured with cluster admin access | -- |

K3s

K3s is a lightweight, production-ready Kubernetes distribution that works well with BadgerOrchestrator. It is easy to install and requires fewer resources than full Kubernetes.
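
If you don't already have a cluster, K3s can be installed with its official script. This is the standard single-node setup from the K3s docs; as always, review a script before piping it to a shell:

```bash
# Install K3s as a single-node server
curl -sfL https://get.k3s.io | sh -

# K3s writes its kubeconfig here; point kubectl at it
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```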

Network Requirements

  • Outbound HTTPS (443) from the cluster to your panel URL (for the WebSocket connection)
  • NodePort range (default 30000-32767) accessible from the internet for game server traffic
  • Container images must be pullable from public registries (Docker Hub)
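
A quick way to confirm the outbound HTTPS requirement is to make a request to your panel from a throwaway pod (replace the URL with your own; `curlimages/curl` is simply a convenient public image):

```bash
# One-off pod that attempts an HTTPS request to the panel, then is removed
kubectl run net-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sv https://panel.your-domain.com/ -o /dev/null
```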

Panel Requirements

  • BadgerPanel must be running and accessible over HTTPS
  • You need admin access to register the cluster

Step 1: Register the Cluster in the Panel

  1. Log into your panel as an administrator
  2. Navigate to Admin > Clusters (or Admin > Kubernetes)
  3. Click Create Cluster
  4. Fill in the cluster details:
| Field | Description | Example |
|---|---|---|
| Name | Display name for this cluster | US-East-K8s |
| Description | Optional description | Production K8s cluster on AWS |
  5. After creating the cluster, you will receive:
    • A cluster ID
    • An authentication token
    • A Kubernetes manifest for deploying the orchestrator

Step 2: Deploy the Orchestrator

Option A: Apply the Generated Manifest

The panel generates a ready-to-apply Kubernetes manifest. Copy it from the cluster's detail page in the admin panel.

bash
# Apply the generated manifest
kubectl apply -f orchestrator-manifest.yaml

The manifest creates:

  • A Namespace (badgerpanel-system)
  • A ServiceAccount with RBAC permissions
  • A ClusterRole and ClusterRoleBinding for managing pods, deployments, services, and PVCs
  • A ConfigMap with the orchestrator configuration
  • A Deployment running the orchestrator binary
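
Once applied, you can list everything the manifest created in one pass:

```bash
# Inspect the resources created by the generated manifest
kubectl get namespace badgerpanel-system
kubectl -n badgerpanel-system get serviceaccount,configmap,deployment
kubectl get clusterrole,clusterrolebinding badgerpanel-orchestrator
```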

Option B: Manual Deployment

If you prefer to customize the deployment, use the following manifests as a starting point.

Namespace

yaml
apiVersion: v1
kind: Namespace
metadata:
  name: badgerpanel-system
  labels:
    app.kubernetes.io/name: badgerpanel
    app.kubernetes.io/component: orchestrator

RBAC

yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: badgerpanel-orchestrator
  namespace: badgerpanel-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: badgerpanel-orchestrator
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/exec", "services", "namespaces",
                "nodes", "persistentvolumeclaims", "events"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: badgerpanel-orchestrator
subjects:
  - kind: ServiceAccount
    name: badgerpanel-orchestrator
    namespace: badgerpanel-system
roleRef:
  kind: ClusterRole
  name: badgerpanel-orchestrator
  apiGroup: rbac.authorization.k8s.io

ConfigMap

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: badgerpanel-orchestrator-config
  namespace: badgerpanel-system
data:
  config.yml: |
    panel_url: "https://panel.your-domain.com"
    token: "your-cluster-authentication-token"
    namespace_prefix: "bp-"
    log:
      level: "info"
    monitoring:
      stats_interval: 15
      event_buffer_size: 100
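
The authentication token is a credential, so you may prefer to store `config.yml` in a Secret rather than a ConfigMap. The orchestrator only reads the file from the mounted path, so the volume source does not matter; one way to do this:

```bash
# Store the whole config file in a Secret instead of a ConfigMap
kubectl -n badgerpanel-system create secret generic \
  badgerpanel-orchestrator-config --from-file=config.yml
```

Then swap the `configMap:` volume source in the Deployment for `secret: { secretName: badgerpanel-orchestrator-config }`.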

Deployment

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: badgerpanel-orchestrator
  namespace: badgerpanel-system
  labels:
    app.kubernetes.io/name: badgerpanel
    app.kubernetes.io/component: orchestrator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: badgerpanel
      app.kubernetes.io/component: orchestrator
  template:
    metadata:
      labels:
        app.kubernetes.io/name: badgerpanel
        app.kubernetes.io/component: orchestrator
    spec:
      serviceAccountName: badgerpanel-orchestrator
      containers:
        - name: orchestrator
          image: badgerpanel/orchestrator:latest  # pin a specific version tag in production
          # Or download the binary from your panel and build a custom image
          args:
            - "--config"
            - "/etc/badgerpanel/config.yml"
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
          volumeMounts:
            - name: config
              mountPath: /etc/badgerpanel
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: badgerpanel-orchestrator-config

Apply all manifests:

bash
kubectl apply -f namespace.yaml
kubectl apply -f rbac.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml

Step 3: Verify the Deployment

Check Pod Status

bash
kubectl -n badgerpanel-system get pods

You should see a running pod:

NAME                                          READY   STATUS    RESTARTS   AGE
badgerpanel-orchestrator-7d8f9c6b4-x2k9p     1/1     Running   0          30s

Check Orchestrator Logs

bash
kubectl -n badgerpanel-system logs -f deployment/badgerpanel-orchestrator

Look for messages indicating:

  • Configuration loaded successfully
  • Connected to the panel via WebSocket
  • Cluster information reported

Verify in the Panel

In the admin dashboard, navigate to Admin > Clusters. Your cluster should show:

  • Status: Online (green indicator)
  • Orchestrator version
  • Node count and resource summary
  • Connected timestamp

Step 4: Configure Storage

Game servers need persistent storage for their data files. Ensure your cluster has a StorageClass configured.

Check Available StorageClasses

bash
kubectl get storageclass

Common options:

| Provider | StorageClass | Notes |
|---|---|---|
| K3s (default) | local-path | Local node storage, no replication |
| AWS EKS | gp3 / gp2 | EBS volumes |
| GKE | standard / premium-rwo | Persistent Disks |
| AKS | managed-premium | Azure Managed Disks |
| Longhorn | longhorn | Distributed storage for K3s/K8s |
| Rook-Ceph | rook-ceph-block | Distributed block storage |
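
If a suitable StorageClass exists but is not marked as the cluster default, you can set the default annotation so PVCs without an explicit class bind to it (shown here for `local-path`; substitute your own class):

```bash
# Mark a StorageClass as the cluster default
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```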

Local Storage

If using local-path or hostPath storage, game server data is tied to a specific node. Pod rescheduling to a different node will result in data loss. Use a distributed storage solution (Longhorn, Rook-Ceph) for production environments.
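
To confirm dynamic provisioning works end to end, you can create and delete a small test claim (the claim name is illustrative):

```bash
# Create a 1Gi test claim using the default StorageClass
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bp-storage-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# Classes with WaitForFirstConsumer binding stay Pending until a pod uses them
kubectl get pvc bp-storage-test
kubectl delete pvc bp-storage-test
```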

Step 5: Configure Networking

Game servers are exposed via NodePort services. Ensure the NodePort range is accessible from the internet.

Default NodePort Range

Kubernetes uses ports 30000-32767 by default. Players connect to game servers using:

<any-node-ip>:<nodeport>
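
For illustration, a NodePort Service of the kind the orchestrator provisions might look like the following; the names and port numbers here are hypothetical, and the manifests the orchestrator actually generates may differ:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bp-example-server
  namespace: bp-example        # namespaces use the configured namespace_prefix
spec:
  type: NodePort
  selector:
    app: bp-example-server
  ports:
    - name: game
      protocol: TCP
      port: 25565
      targetPort: 25565
      nodePort: 30565          # must fall inside the NodePort range
```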

Firewall Rules

Ensure NodePort range is open on all worker nodes:

bash
# UFW example (run on each worker node)
sudo ufw allow 30000:32767/tcp
sudo ufw allow 30000:32767/udp
# On cloud providers, create equivalent security-group rules:
# allow inbound TCP and UDP 30000-32767 from your players' source ranges

Custom Port Ranges

If you need specific port numbers (e.g., 25565 for Minecraft), you can configure the Kubernetes API server's --service-node-port-range flag. Consult your Kubernetes distribution's documentation.
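
For example, on K3s the flag can be passed through to the embedded API server (the range shown is illustrative):

```bash
# K3s: widen the NodePort range to include common game ports
k3s server --kube-apiserver-arg=service-node-port-range=25000-32767
```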

Troubleshooting Installation

Pod Not Starting

bash
# Describe the pod for events and errors
kubectl -n badgerpanel-system describe pod -l app.kubernetes.io/component=orchestrator

# Check for image pull errors
kubectl -n badgerpanel-system get events --sort-by='.lastTimestamp'

Orchestrator Can't Connect to Panel

Check the logs for connection errors:

bash
kubectl -n badgerpanel-system logs deployment/badgerpanel-orchestrator | grep -i error

Common causes:

  • panel_url is incorrect or unreachable from inside the cluster
  • DNS resolution issues -- ensure the cluster can resolve your panel's domain
  • Authentication token mismatch -- verify the token in the ConfigMap matches the panel
  • TLS certificate issues -- if using self-signed certificates, the orchestrator may reject the connection
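
The first three causes can be checked directly; for example (replace the domain with your panel's):

```bash
# DNS resolution from inside the cluster
kubectl -n badgerpanel-system run dns-test --rm -it --restart=Never \
  --image=busybox -- nslookup panel.your-domain.com

# Inspect the panel URL and token currently in the ConfigMap
kubectl -n badgerpanel-system get configmap \
  badgerpanel-orchestrator-config -o jsonpath='{.data.config\.yml}'
```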

RBAC Permission Errors

If the orchestrator logs show "forbidden" errors:

bash
# Verify the ClusterRoleBinding exists
kubectl get clusterrolebinding badgerpanel-orchestrator

# Verify the ServiceAccount is used by the pod
kubectl -n badgerpanel-system get pod -l app.kubernetes.io/component=orchestrator -o jsonpath='{.items[0].spec.serviceAccountName}'
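
You can also ask the API server directly whether the ServiceAccount holds a representative permission; this should print "yes" when the ClusterRoleBinding is in effect:

```bash
# Check one of the orchestrator's required permissions
kubectl auth can-i create pods --all-namespaces \
  --as=system:serviceaccount:badgerpanel-system:badgerpanel-orchestrator
```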

Next Steps

BadgerPanel Documentation