Architecture
BadgerPanel is built as a distributed system with three primary components connected in a hub-and-spoke topology. The Panel serves as the central management plane, while BadgerDaemon and BadgerOrchestrator agents run on your infrastructure to manage game servers.
System Overview
```
                      +-----------------------+
                      |      BadgerPanel      |
                      | (Go API + Next.js UI) |
                      |                       |
                      |  MySQL  Redis  MinIO  |
                      |         Nginx         |
                      +-----------+-----------+
                                  |
                         WebSocket + HTTPS
                                  |
          +-----------------------+-----------------------+
          |                       |                       |
+---------+--------+    +---------+--------+    +---------+----------+
|   BadgerDaemon   |    |   BadgerDaemon   |    | BadgerOrchestrator |
| (Docker Node 1)  |    | (Docker Node 2)  |    |  (K8s Cluster 1)   |
|                  |    |                  |    |                    |
|   [Container]    |    |   [Container]    |    | [Pod] [Pod] [Pod]  |
|   [Container]    |    |   [Container]    |    |                    |
+------------------+    +------------------+    +--------------------+
```
Components
Panel (API + Web Frontend)
The Panel is the central hub of the BadgerPanel platform. It consists of two services:
API Server (Go + Gin)
- Handles all business logic: authentication, server management, billing, user management, and administration
- Exposes a REST API with WebSocket support for real-time features
- Manages database migrations automatically on startup
- Communicates with Daemon and Orchestrator agents over authenticated WebSocket connections
Web Frontend (Next.js 15 + React 19)
- TypeScript-based single-page application with 100+ pages
- Client portal for end users to manage their servers
- Full admin dashboard for operators
- Real-time console, resource monitoring, and cluster event streaming
BadgerDaemon (Docker Node Agent)
BadgerDaemon is a single Go binary deployed on each bare-metal or VPS node that runs game servers using Docker.
Responsibilities:
- Creates and manages Docker containers with resource limits (memory, CPU, disk, I/O, swap)
- Manages port allocations (IP:port pairs assigned from the panel)
- Streams container stdout/stderr to the panel console via WebSocket
- Handles file operations within container bind-mounted volumes
- Runs a built-in SFTP server for direct file transfer
- Monitors container health and resource usage
- Implements crash recovery watchdog with auto-restart
- Runs egg installation scripts during server setup
Communication:
- Connects to the panel over an authenticated WebSocket connection
- Uses HMAC-SHA256 request authentication with nonce-based replay protection
- Reports system statistics (CPU, memory, disk) at regular intervals
BadgerOrchestrator (Kubernetes Agent)
BadgerOrchestrator is a single Go binary deployed inside Kubernetes clusters to manage game server workloads as native Kubernetes resources.
Responsibilities:
- Creates game servers as Kubernetes Deployments with resource requests and limits
- Allocates ports via NodePort services
- Manages persistent storage via PersistentVolumeClaims
- Streams pod logs to the panel console
- Supports pod exec for direct command execution
- Handles deployment scaling, rolling restarts, and updates
- Monitors node status and manages drain/cordon operations
- Reports real-time cluster events (scheduling, failures, scaling)
Communication:
- Connects to the panel over an authenticated WebSocket connection
- Forwards cluster events and pod status changes in real-time
- Receives server management commands from the panel
Supporting Infrastructure
All supporting services are deployed alongside the Panel via Docker Compose.
| Service | Purpose | Details |
|---|---|---|
| MySQL 8.0 | Primary data store | 50+ tables with automatic migrations. Stores users, servers, billing, configuration, and audit data. |
| Redis 7 | Cache & sessions | Session storage, API rate limiting, response caching, daemon nonce persistence, and pub/sub messaging. |
| MinIO | Object storage | S3-compatible storage for server backups and file uploads. Self-hosted alternative to AWS S3. |
| Nginx | Reverse proxy | TLS termination, HTTP/2, WebSocket proxying, and static file serving. Routes requests to API and Web services. |
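A minimal Compose sketch of this stack; the image tags match the table above, but the service names, build paths, and environment values are placeholders rather than BadgerPanel's actual compose file.

```yaml
# Illustrative only; not the shipped docker-compose.yml.
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: badgerpanel
  redis:
    image: redis:7
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
  api:
    build: ./api        # Go API server
    depends_on: [mysql, redis, minio]
  web:
    build: ./web        # Next.js frontend
  nginx:
    image: nginx:stable
    ports: ["80:80", "443:443"]
    depends_on: [api, web]
```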
Communication Flow
Panel to Daemon/Orchestrator
All communication between the panel and its agents uses WebSocket connections initiated by the agents:
1. Admin registers a new node/cluster in the panel
2. Panel generates a unique authentication token
3. Admin installs the Daemon/Orchestrator with the token
4. Agent connects to the panel's WebSocket endpoint
5. Bidirectional communication is established:
- Panel sends commands (start, stop, install, etc.)
   - Agent sends events (status changes, stats, console output)
Request Authentication
Daemon-panel communication is secured with multiple layers:
- HMAC-SHA256 signatures on all requests
- Nonce-based replay protection stored in Redis (survives restarts, works across API instances)
- Timestamp validation to prevent stale request replay
- TLS encryption for all traffic
User to Panel
```
Browser <--HTTPS--> Nginx <--HTTP--> Next.js (Web)
Browser <--HTTPS--> Nginx <--HTTP--> Go API (REST + WebSocket)
```

Users interact with the panel through Nginx, which terminates TLS and proxies requests to the appropriate backend service. WebSocket connections for the console and other real-time features are proxied through Nginx as well.
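A sketch of the proxy rules Nginx needs for this layout; the upstream names `api`/`web` and the `/api/` path prefix are assumptions, not BadgerPanel's shipped configuration.

```nginx
# Illustrative fragment: API traffic, including WebSocket upgrades.
location /api/ {
    proxy_pass http://api:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # required for console
    proxy_set_header Connection "upgrade";    # and event streaming
}

# Everything else goes to the Next.js frontend.
location / {
    proxy_pass http://web:3000;
}
```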
Data Flow Examples
Starting a Game Server
```
User clicks "Start"
        |
        v
Browser --> Nginx --> Go API
        |
        API validates permissions
        API updates server state in MySQL
        API sends "start" command via WebSocket
        |
        v
Daemon/Orchestrator receives command
        |
        Docker: docker start <container>
        K8s:    kubectl scale deployment --replicas=1
        |
Agent streams startup logs via WebSocket
        |
        v
Go API --> WebSocket --> Browser (live console)
```
Console Output Streaming
```
Game Server process writes to stdout
        |
        v
Daemon captures container logs (docker attach)
Orchestrator captures pod logs (kubectl logs -f)
        |
Agent forwards logs via WebSocket to Panel
        |
        v
Panel API relays via WebSocket to connected browsers
        |
        v
Browser renders in real-time terminal UI
```
Deployment Architecture
Single-Server Deployment
For small deployments, the entire Panel stack runs on a single server:
```
+-----------------------------------------------+
|                 Panel Server                  |
|                                               |
|  Docker Compose:                              |
|  - API (Go)                                   |
|  - Web (Next.js)                              |
|  - Nginx (reverse proxy + TLS)                |
|  - MySQL 8.0                                  |
|  - Redis 7                                    |
|  - MinIO                                      |
|  - Daemon Builder (compiles daemon binaries)  |
|  - Orchestrator Builder                       |
+-----------------------------------------------+
```

Daemons and Orchestrators are then installed on separate servers or clusters that connect back to this panel.
Multi-Region Deployment
For larger deployments, daemons and orchestrators can be distributed globally:
```
           US-East Panel
                 |
       +---------+---------+
       |         |         |
    US-East   EU-West    APAC
    Daemon    Daemon  K8s Cluster
     Nodes     Nodes (Orchestrator)
```
Scaling the Panel
The Panel API is stateless (sessions are stored in Redis), so it can be scaled horizontally behind a load balancer. Multiple API instances share the same MySQL and Redis backends.
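Assuming Nginx is the load balancer, adding API replicas is a matter of listing them in one upstream block; the hostnames here are illustrative.

```nginx
# Illustrative only: two stateless API replicas behind one upstream.
upstream panel_api {
    server api-1:8080;
    server api-2:8080;
}

# Sessions and rate-limit counters live in Redis, so either replica
# can serve any request; no sticky sessions are needed.
```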
Port Usage
| Service | Default Port | Protocol |
|---|---|---|
| Nginx (HTTPS) | 443 | TCP |
| Nginx (HTTP redirect) | 80 | TCP |
| Panel API (internal) | 8080 | TCP |
| Panel Web (internal) | 3000 | TCP |
| MySQL (internal) | 3306 | TCP |
| Redis (internal) | 6379 | TCP |
| MinIO API (internal) | 9000 | TCP |
| MinIO Console (internal) | 9001 | TCP |
| Daemon SFTP | 2022 | TCP |
| Game Servers | Varies | TCP/UDP |
Internal Ports
MySQL, Redis, MinIO, and the internal API/Web ports should not be exposed to the public internet. Only Nginx (80/443), Daemon SFTP (2022), and game server ports need to be publicly accessible.
Next Steps
- Getting Started -- Deploy the Panel with Docker Compose
- Installing the Daemon -- Set up Docker nodes
- Installing the Orchestrator -- Set up Kubernetes clusters