# Orchestrator Configuration
BadgerOrchestrator is configured through a YAML file mounted as a Kubernetes ConfigMap. The configuration defines how the orchestrator connects to the panel, manages game server resources, and monitors the cluster.
## Configuration File
The configuration is stored in a ConfigMap and mounted into the orchestrator pod at /etc/badgerpanel/config.yml.
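To confirm what the running orchestrator actually sees, you can print the mounted file from inside the pod (this requires a working cluster connection and uses the `badgerpanel-orchestrator` deployment name from this guide):

```bash
# Print the config file as mounted inside the running orchestrator pod
kubectl -n badgerpanel-system exec deploy/badgerpanel-orchestrator -- \
  cat /etc/badgerpanel/config.yml
```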
To update the configuration:
```bash
# Edit the ConfigMap directly
kubectl -n badgerpanel-system edit configmap badgerpanel-orchestrator-config

# Then restart the orchestrator to pick up changes
kubectl -n badgerpanel-system rollout restart deployment/badgerpanel-orchestrator
```

## Full Configuration Reference
```yaml
# ===========================================
# BadgerOrchestrator Configuration
# ===========================================

# Panel connection settings
panel_url: "https://panel.your-domain.com"
token: "your-cluster-authentication-token"

# ===========================================
# Namespace Management
# ===========================================

# Prefix for namespaces created for game servers
# Each server gets its own namespace: <prefix><server-uuid>
namespace_prefix: "bp-"

# ===========================================
# Storage Configuration
# ===========================================
storage:
  # Kubernetes StorageClass to use for game server PVCs
  # Leave empty to use the cluster's default StorageClass
  storage_class: ""

  # Default access mode for PVCs
  # ReadWriteOnce (single node), ReadWriteMany (multi-node)
  access_mode: "ReadWriteOnce"

# ===========================================
# Networking
# ===========================================
networking:
  # Service type for game server access
  # NodePort is the default and most common for game servers
  service_type: "NodePort"

  # NodePort range constraints (within the Kubernetes API server's range)
  # Leave as 0 to let Kubernetes assign ports automatically
  nodeport_range_start: 0
  nodeport_range_end: 0

# ===========================================
# Resource Defaults
# ===========================================
resources:
  # Default CPU request for game server pods (millicores)
  default_cpu_request: "100m"

  # Default memory request for game server pods
  default_memory_request: "256Mi"

  # Whether to set resource limits (in addition to requests)
  # Limits prevent pods from exceeding allocated resources
  enforce_limits: true

# ===========================================
# Pod Configuration
# ===========================================
pods:
  # Grace period for pod termination (seconds)
  # The game server's stop command is sent first, then this timeout applies
  termination_grace_period: 60

  # Restart policy for game server pods
  # Always = auto-restart on crash, Never = stay stopped
  restart_policy: "Always"

  # Image pull policy: Always, IfNotPresent, Never
  image_pull_policy: "IfNotPresent"

  # Optional: image pull secrets for private registries
  image_pull_secrets: []
  # - name: "my-registry-secret"

  # Node selector for scheduling game server pods
  # Use this to restrict game servers to specific nodes
  node_selector: {}
  # kubernetes.io/role: "gameserver"
  # node-type: "high-memory"

  # Tolerations for scheduling on tainted nodes
  tolerations: []
  # - key: "dedicated"
  #   operator: "Equal"
  #   value: "gameserver"
  #   effect: "NoSchedule"

# ===========================================
# Monitoring
# ===========================================
monitoring:
  # Interval for reporting cluster stats to the panel (seconds)
  stats_interval: 15

  # Buffer size for Kubernetes events before forwarding to the panel
  event_buffer_size: 100

  # Watch timeout for Kubernetes API watch connections (seconds)
  watch_timeout: 300

# ===========================================
# Logging
# ===========================================
log:
  # Log level: debug, info, warn, error
  level: "info"
```

## Configuration Options
### Panel Connection
| Option | Description | Default |
|---|---|---|
| `panel_url` | Full URL of your BadgerPanel instance | Required |
| `token` | Authentication token generated by the panel | Required |
The `panel_url` must be resolvable and reachable from within the Kubernetes cluster. If the panel runs outside the cluster, make sure its domain resolves from inside the cluster.
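A quick way to verify reachability from inside the cluster is a throwaway curl pod. The pod name `panel-check` and the `curlimages/curl` image are illustrative choices, not requirements:

```bash
# Run a one-off pod that fetches the panel URL and prints the HTTP status code
kubectl run panel-check --rm -i --restart=Never \
  --image=curlimages/curl -- \
  curl -sS -o /dev/null -w "%{http_code}\n" https://panel.your-domain.com
```

A `200` (or any response at all) confirms DNS resolution and network reachability; a timeout points to DNS or firewall issues.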
#### Internal DNS

If your panel and cluster are on the same network, you can use an internal hostname. Otherwise, use the public domain name with valid TLS certificates.
### Namespace Management
| Option | Description | Default |
|---|---|---|
| `namespace_prefix` | Prefix for server namespaces | `bp-` |
Each game server is deployed in its own Kubernetes namespace named <prefix><server-uuid>. This provides:
- Resource isolation between servers
- Easy cleanup (deleting the namespace removes all resources)
- Clear separation in `kubectl` output

```bash
# Example: list all game server namespaces
kubectl get namespaces | grep "^bp-"
```

### Storage Configuration
| Option | Description | Default |
|---|---|---|
| `storage.storage_class` | StorageClass for game server PVCs | Cluster default |
| `storage.access_mode` | PVC access mode | ReadWriteOnce |
#### Storage Class Selection
Choose a StorageClass appropriate for your environment:
- Local path -- Fast but data is node-local (not portable)
- Network storage (Longhorn, Ceph, EBS) -- Portable across nodes but higher latency
- NFS -- Shared access but performance varies
For game servers, prioritize I/O performance. SSD-backed storage classes are strongly recommended.
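To see what your cluster offers before setting `storage.storage_class`:

```bash
# List available StorageClasses; the one marked "(default)" is what
# the orchestrator uses when storage.storage_class is left empty
kubectl get storageclass
```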
### Networking
| Option | Description | Default |
|---|---|---|
| `networking.service_type` | Kubernetes service type for game servers | NodePort |
| `networking.nodeport_range_start` | Minimum NodePort to assign (0 = auto) | 0 |
| `networking.nodeport_range_end` | Maximum NodePort to assign (0 = auto) | 0 |
**Service Types:**

| Type | Description | Use Case |
|---|---|---|
| `NodePort` | Exposes the server on the same port across all nodes | Most game servers (default) |
| `LoadBalancer` | Provisions a cloud load balancer | Cloud environments with LB support |
#### NodePort Access
With NodePort services, players connect using any node's IP address and the assigned port. For a better experience, point a DNS record at your nodes and give players an address like `play.your-domain.com:30001`.
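To look up the port Kubernetes assigned to a particular server, query its service. This sketch assumes the default `bp-` namespace prefix and a single service with one port per server namespace; adjust the indices if your layout differs:

```bash
# Print the NodePort assigned to the server's (assumed single) service
kubectl -n bp-<server-uuid> get svc \
  -o jsonpath='{.items[0].spec.ports[0].nodePort}'
```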
### Resource Defaults
| Option | Description | Default |
|---|---|---|
| `resources.default_cpu_request` | Default CPU request for pods | 100m |
| `resources.default_memory_request` | Default memory request for pods | 256Mi |
| `resources.enforce_limits` | Set CPU/memory limits in addition to requests | true |
Resource requests and limits are set per server based on the server's allocation in the panel. These defaults are fallbacks when values are not specified.
#### Requests vs Limits
- Requests -- Guaranteed resources reserved for the pod (used for scheduling)
- Limits -- Maximum resources the pod can use (enforced by the kubelet)
When `enforce_limits` is true, game servers cannot exceed their allocated memory and CPU, which prevents noisy-neighbor issues.
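With the defaults above and `enforce_limits: true`, a game server container's spec ends up along these lines. The limit values shown here are illustrative; in practice they come from the server's allocation in the panel:

```yaml
resources:
  requests:
    cpu: "100m"      # reserved for the pod; used by the scheduler
    memory: "256Mi"
  limits:
    cpu: "2000m"     # CPU usage is throttled above this
    memory: "2Gi"    # the container is OOM-killed above this
```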
### Pod Configuration
| Option | Description | Default |
|---|---|---|
| `pods.termination_grace_period` | Seconds before forceful termination | 60 |
| `pods.restart_policy` | Pod restart behavior | Always |
| `pods.image_pull_policy` | When to pull container images | IfNotPresent |
| `pods.image_pull_secrets` | Secrets for private registries | `[]` |
| `pods.node_selector` | Labels to constrain pod scheduling | `{}` |
| `pods.tolerations` | Tolerations for tainted nodes | `[]` |
#### Node Selectors
Use node selectors to restrict game servers to specific nodes:
```bash
# Label your game server nodes
kubectl label node worker-1 node-type=gameserver
kubectl label node worker-2 node-type=gameserver
```

```yaml
pods:
  node_selector:
    node-type: "gameserver"
```

#### Tolerations
If your game server nodes have taints, add matching tolerations:
```bash
# Taint your nodes
kubectl taint nodes worker-1 dedicated=gameserver:NoSchedule
```

```yaml
pods:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gameserver"
      effect: "NoSchedule"
```

#### Private Registries
If your game server Docker images are in a private registry:
```bash
# Create the secret
kubectl -n badgerpanel-system create secret docker-registry my-registry-secret \
  --docker-server=registry.your-domain.com \
  --docker-username=your-username \
  --docker-password=your-password
```

```yaml
pods:
  image_pull_secrets:
    - name: "my-registry-secret"
```

### Monitoring
| Option | Description | Default |
|---|---|---|
| `monitoring.stats_interval` | Cluster stats reporting interval (seconds) | 15 |
| `monitoring.event_buffer_size` | Event buffer before forwarding | 100 |
| `monitoring.watch_timeout` | API watch connection timeout (seconds) | 300 |
The orchestrator watches Kubernetes events (pod scheduling, failures, node changes) and forwards them to the panel in real time. `event_buffer_size` controls how many events are held if the panel connection is temporarily interrupted.
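The same events are visible directly with `kubectl`, which is useful for cross-checking what the panel shows (assuming the default `bp-` namespace prefix):

```bash
# Recent events for one game server's namespace, oldest first
kubectl -n bp-<server-uuid> get events --sort-by=.lastTimestamp
```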
### Logging
| Option | Description | Default |
|---|---|---|
| `log.level` | Minimum log level | info |
Logs are written to stdout and captured by Kubernetes. View them with:
```bash
# Follow logs in real-time
kubectl -n badgerpanel-system logs -f deployment/badgerpanel-orchestrator

# View the last 100 lines
kubectl -n badgerpanel-system logs deployment/badgerpanel-orchestrator --tail 100
```

#### Debug Logging
Debug logging produces verbose output including every Kubernetes API call and event. Only enable it temporarily for troubleshooting.
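To enable it, change the level in the ConfigMap and restart the orchestrator as described under Applying Configuration Changes:

```yaml
log:
  # Revert to "info" once you are done troubleshooting
  level: "debug"
```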
## Applying Configuration Changes
After editing the ConfigMap:
```bash
# Edit the ConfigMap
kubectl -n badgerpanel-system edit configmap badgerpanel-orchestrator-config

# Restart the orchestrator to load the new configuration
kubectl -n badgerpanel-system rollout restart deployment/badgerpanel-orchestrator

# Watch the rollout
kubectl -n badgerpanel-system rollout status deployment/badgerpanel-orchestrator
```

## Next Steps
- Installing the Orchestrator -- Installation guide
- Upgrading the Orchestrator -- How to update
- Architecture -- System architecture overview