Orchestrator Configuration

BadgerOrchestrator is configured through a YAML file mounted as a Kubernetes ConfigMap. The configuration defines how the orchestrator connects to the panel, manages game server resources, and monitors the cluster.

Configuration File

The configuration is stored in a ConfigMap and mounted into the orchestrator pod at /etc/badgerpanel/config.yml.

To update the configuration:

bash
# Edit the ConfigMap directly
kubectl -n badgerpanel-system edit configmap badgerpanel-orchestrator-config

# Then restart the orchestrator to pick up changes
kubectl -n badgerpanel-system rollout restart deployment/badgerpanel-orchestrator

Full Configuration Reference

yaml
# ===========================================
# BadgerOrchestrator Configuration
# ===========================================

# Panel connection settings
panel_url: "https://panel.your-domain.com"
token: "your-cluster-authentication-token"

# ===========================================
# Namespace Management
# ===========================================

# Prefix for namespaces created for game servers
# Each server gets its own namespace: <prefix><server-uuid>
namespace_prefix: "bp-"

# ===========================================
# Storage Configuration
# ===========================================
storage:
  # Kubernetes StorageClass to use for game server PVCs
  # Leave empty to use the cluster's default StorageClass
  storage_class: ""

  # Default access mode for PVCs
  # ReadWriteOnce (single node), ReadWriteMany (multi-node)
  access_mode: "ReadWriteOnce"

# ===========================================
# Networking
# ===========================================
networking:
  # Service type for game server access
  # NodePort is the default and most common for game servers
  service_type: "NodePort"

  # NodePort range constraints (within Kubernetes API server range)
  # Leave as 0 to let Kubernetes assign automatically
  nodeport_range_start: 0
  nodeport_range_end: 0

# ===========================================
# Resource Defaults
# ===========================================
resources:
  # Default CPU request for game server pods (millicores)
  default_cpu_request: "100m"

  # Default memory request for game server pods
  default_memory_request: "256Mi"

  # Whether to set resource limits (in addition to requests)
  # Limits prevent pods from exceeding allocated resources
  enforce_limits: true

# ===========================================
# Pod Configuration
# ===========================================
pods:
  # Grace period for pod termination (seconds)
  # The game server's stop command is sent first, then this timeout applies
  termination_grace_period: 60

  # Restart policy for game server pods
  # Always = auto-restart on crash, Never = stay stopped
  restart_policy: "Always"

  # Image pull policy
  # Always, IfNotPresent, Never
  image_pull_policy: "IfNotPresent"

  # Optional: image pull secrets for private registries
  image_pull_secrets: []
  # - name: "my-registry-secret"

  # Node selector for scheduling game server pods
  # Use this to restrict game servers to specific nodes
  node_selector: {}
  # kubernetes.io/role: "gameserver"
  # node-type: "high-memory"

  # Tolerations for scheduling on tainted nodes
  tolerations: []
  # - key: "dedicated"
  #   operator: "Equal"
  #   value: "gameserver"
  #   effect: "NoSchedule"

# ===========================================
# Monitoring
# ===========================================
monitoring:
  # Interval for reporting cluster stats to the panel (seconds)
  stats_interval: 15

  # Buffer size for Kubernetes events before forwarding to the panel
  event_buffer_size: 100

  # Watch timeout for Kubernetes API watch connections (seconds)
  watch_timeout: 300

# ===========================================
# Logging
# ===========================================
log:
  # Log level: debug, info, warn, error
  level: "info"

Configuration Options

Panel Connection

Option      Description                                   Default
panel_url   Full URL of your BadgerPanel instance         Required
token       Authentication token generated by the panel   Required

The panel_url must be resolvable and reachable from within the Kubernetes cluster. If the panel is outside the cluster, ensure DNS resolution works for the domain.

Internal DNS

If your panel and cluster are in the same network, you can use an internal hostname. Otherwise, use the public domain name with valid TLS certificates.
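For example, if the panel runs as a Kubernetes Service in the same cluster, the connection settings might point at its cluster DNS name. A sketch (the Service name, namespace, and port below are hypothetical placeholders):

yaml
# In-cluster panel reached over the cluster network (hypothetical names)
panel_url: "http://badgerpanel.badgerpanel.svc.cluster.local:8080"
token: "your-cluster-authentication-token"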

Namespace Management

Option             Description                    Default
namespace_prefix   Prefix for server namespaces   bp-

Each game server is deployed in its own Kubernetes namespace named <prefix><server-uuid>. This provides:

  • Resource isolation between servers
  • Easy cleanup (deleting the namespace removes all resources)
  • Clear separation in kubectl output
bash
# Example: list all game server namespaces
kubectl get namespaces | grep "^bp-"

Storage Configuration

Option                  Description                         Default
storage.storage_class   StorageClass for game server PVCs   Cluster default
storage.access_mode     PVC access mode                     ReadWriteOnce

Storage Class Selection

Choose a StorageClass appropriate for your environment:

  • Local path -- Fast but data is node-local (not portable)
  • Network storage (Longhorn, Ceph, EBS) -- Portable across nodes but higher latency
  • NFS -- Shared access but performance varies

For game servers, prioritize I/O performance. SSD-backed storage classes are strongly recommended.
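As a sketch, selecting an SSD-backed network StorageClass might look like the following. The class name longhorn-ssd is a placeholder; list what your cluster actually offers with kubectl get storageclass.

yaml
storage:
  # Placeholder class name -- check available classes first:
  #   kubectl get storageclass
  storage_class: "longhorn-ssd"
  access_mode: "ReadWriteOnce"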

Networking

Option                            Description                                Default
networking.service_type           Kubernetes service type for game servers   NodePort
networking.nodeport_range_start   Minimum NodePort to assign (0 = auto)      0
networking.nodeport_range_end     Maximum NodePort to assign (0 = auto)      0

Service Types:

Type           Description                          Use Case
NodePort       Exposes on a port across all nodes   Most game servers (default)
LoadBalancer   Provisions a cloud load balancer     Cloud environments with LB support

NodePort Access

With NodePort services, players connect using any node's IP and the assigned port. For a better experience, point a DNS record at your nodes and provide players with play.your-domain.com:30001.
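If your firewall only opens part of the NodePort range, you can constrain assignments to that window. A hedged example, assuming the Kubernetes API server's default range of 30000-32767:

yaml
networking:
  service_type: "NodePort"
  # Only assign ports your firewall allows; the window must fall inside
  # the API server's NodePort range (30000-32767 by default)
  nodeport_range_start: 30000
  nodeport_range_end: 30100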

Resource Defaults

Option                             Description                                     Default
resources.default_cpu_request      Default CPU request for pods                    100m
resources.default_memory_request   Default memory request for pods                 256Mi
resources.enforce_limits           Set CPU/memory limits in addition to requests   true

Resource requests and limits are set per server based on the server's allocation in the panel. These defaults are fallbacks when values are not specified.

Requests vs Limits

  • Requests -- Guaranteed resources reserved for the pod; the scheduler uses them for placement
  • Limits -- Maximum resources the pod may use; CPU beyond the limit is throttled, and exceeding the memory limit gets the container OOM-killed

When enforce_limits is true, game servers cannot exceed their allocated memory and CPU, preventing noisy-neighbor issues.
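As an illustrative sketch (not the orchestrator's exact output), a pod created with the defaults above and enforce_limits: true would carry a container resources section along these lines, assuming limits are set equal to the allocation:

yaml
# Sketch of the rendered container resources (assumed: limits == requests)
resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
  limits:
    cpu: "100m"      # CPU use beyond this is throttled
    memory: "256Mi"  # exceeding this triggers an OOM kill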

Pod Configuration

Option                          Description                           Default
pods.termination_grace_period   Seconds before forceful termination   60
pods.restart_policy             Pod restart behavior                  Always
pods.image_pull_policy          When to pull container images         IfNotPresent
pods.image_pull_secrets         Secrets for private registries        []
pods.node_selector              Labels to constrain pod scheduling    {}
pods.tolerations                Tolerations for tainted nodes         []

Node Selectors

Use node selectors to restrict game servers to specific nodes:

yaml
# Label your game server nodes
# kubectl label node worker-1 node-type=gameserver
# kubectl label node worker-2 node-type=gameserver

pods:
  node_selector:
    node-type: "gameserver"

Tolerations

If your game server nodes have taints, add matching tolerations:

yaml
# Taint your nodes
# kubectl taint nodes worker-1 dedicated=gameserver:NoSchedule

pods:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gameserver"
      effect: "NoSchedule"

Private Registries

If your game server Docker images are in a private registry:

bash
# Create the secret
kubectl -n badgerpanel-system create secret docker-registry my-registry-secret \
  --docker-server=registry.your-domain.com \
  --docker-username=your-username \
  --docker-password=your-password
yaml
pods:
  image_pull_secrets:
    - name: "my-registry-secret"

Monitoring

Option                         Description                                  Default
monitoring.stats_interval      Cluster stats reporting interval (seconds)   15
monitoring.event_buffer_size   Event buffer before forwarding               100
monitoring.watch_timeout       API watch connection timeout (seconds)       300

The orchestrator watches Kubernetes events (pod scheduling, failures, node changes) and forwards them to the panel in real time. The event_buffer_size controls how many events are buffered if the panel connection is temporarily interrupted; events beyond the buffer may be dropped.
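If the link to the panel is unreliable, you can trade a little memory for resilience by enlarging the buffer and reporting stats less often. A hedged tuning example:

yaml
monitoring:
  stats_interval: 30      # report half as often as the default
  event_buffer_size: 500  # ride out longer panel outages without dropping events
  watch_timeout: 300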

Logging

Option      Description         Default
log.level   Minimum log level   info

Logs are written to stdout and captured by Kubernetes. View them with:

bash
# Follow logs in real-time
kubectl -n badgerpanel-system logs -f deployment/badgerpanel-orchestrator

# View last 100 lines
kubectl -n badgerpanel-system logs deployment/badgerpanel-orchestrator --tail 100

Debug Logging

Debug logging produces verbose output including every Kubernetes API call and event. Only enable it temporarily for troubleshooting.
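To turn it on temporarily, set the level in the ConfigMap and restart the orchestrator as described under Applying Configuration Changes:

yaml
log:
  level: "debug"  # revert to "info" once troubleshooting is done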

Applying Configuration Changes

After editing the ConfigMap:

bash
# Edit the ConfigMap
kubectl -n badgerpanel-system edit configmap badgerpanel-orchestrator-config

# Restart the orchestrator to load the new configuration
kubectl -n badgerpanel-system rollout restart deployment/badgerpanel-orchestrator

# Watch the rollout
kubectl -n badgerpanel-system rollout status deployment/badgerpanel-orchestrator

Next Steps

BadgerPanel Documentation