ServiceRadar Docker Setup Guide
This guide walks you through setting up ServiceRadar using Docker Compose, including initial configuration, device setup, and troubleshooting.
Prerequisites
- Docker Engine 20.10+
- Docker Compose 2.0+
- 8GB+ RAM recommended
- 50GB+ available disk space
Quick Start
1. Initial Setup
Clone the repository and navigate to the ServiceRadar directory:
git clone https://github.com/carverauto/serviceradar.git
cd serviceradar
2. First-Time Startup
Start the ServiceRadar stack for the first time:
SERVICERADAR_VERSION=latest docker-compose up -d
Important: On first startup, ServiceRadar will:
- Generate mTLS certificates for secure communication
- Create random passwords and API keys
- Generate a bcrypt hash for the admin user
- Display the admin credentials in the `config-updater` service logs
3. Retrieve Admin Credentials
To see your admin credentials, check the config-updater logs:
docker-compose logs config-updater
Look for output like:
🔐 IMPORTANT: ServiceRadar Admin Credentials
=============================================
Username: admin
Password: AbC123xYz789
Save this password immediately! You'll need it to log into the ServiceRadar web interface.
4. Access ServiceRadar
Once all services are running, access ServiceRadar at:
- Web Interface: http://localhost
- API Endpoint: http://localhost/api
- Direct Core API: http://localhost:8090
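You can smoke-test the API before logging in; this is the same status endpoint used later in Troubleshooting:

```bash
curl http://localhost:8090/api/status
```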
Login with:
- Username: `admin`
- Password: (from step 3)
Architecture Overview
ServiceRadar consists of these main components:
Core Services
- Core: Main API and business logic service
- Web: Next.js web interface
- Nginx: Reverse proxy and load balancer
- CNPG: PostgreSQL database (CloudNativePG) used for time-series storage
Data Collection Services
- Poller: Device polling and monitoring service
- Agent: Network discovery and ICMP monitoring
- Flowgger: Syslog message collector
- Trapd: SNMP trap collector
- Mapper: Network discovery via SNMP
Supporting Services
- NATS: Message bus and event streaming
- KV: Key-value store for configuration
- Sync: Device discovery synchronization
- DB Event Writer: NATS to database bridge
Monitoring Services
- OTEL: OpenTelemetry metrics collector
- Zen: Event processing and alerting engine
Configuration
Environment Variables
Create a .env file in the ServiceRadar directory:
# ServiceRadar Version
SERVICERADAR_VERSION=latest
# Logging Level
LOG_LEVEL=info
RUST_LOG=info
# Database Settings
PROTON_LOG_LEVEL=error
Volume Mounts
ServiceRadar uses the following Docker volumes:
- `cert-data`: mTLS certificates and API keys
- `credentials`: Database passwords and secrets
- `generated-config`: Generated configuration files
- `*-data`: Service-specific data storage
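To inspect these volumes on the host, the standard Docker CLI works; exact names are prefixed with the Compose project name (typically serviceradar):

```bash
# List ServiceRadar-related volumes (prefix depends on your Compose project name)
docker volume ls --filter name=serviceradar
```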
Ports
Default exposed ports:
| Service | Port | Protocol | Purpose |
|---|---|---|---|
| Nginx | 80 | HTTP | Web interface and API |
| Core | 8090 | HTTP | Direct API access |
| NATS | 4222, 8222 | TCP | Message bus |
| Flowgger | 514 | UDP | Syslog collection |
| Trapd | 162 | UDP | SNMP trap collection |
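Before the first start, it is worth confirming that none of these ports are already taken on the host. A quick check with ss from iproute2:

```bash
# Show listeners on the ports ServiceRadar expects to bind
sudo ss -tulpn | grep -E ':(80|8090|4222|8222|514|162)\b'
```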
Mapper Service Configuration
Docker Compose mounts docker/compose/mapper.docker.json into /etc/serviceradar/mapper.json for the serviceradar-mapper container. Update this file when you need to adjust SNMP discovery:
- Set `workers`, `max_active_jobs`, and the timeouts to match how many concurrent SNMP sessions your network can handle.
- Populate `default_credentials` for blanket SNMP access, then add `credentials[]` entries for per-CIDR overrides (v2c or v3). Place the most specific subnets first.
- Extend the `oids` blocks if you want Mapper to gather vendor-specific identifiers during `basic`, `interfaces`, or `topology` runs.
- Use `stream_config` to tag events and, if needed, rename the CNPG streams used for devices (`device_stream`), interfaces, and topology discovery. The defaults align with the pipelines described in the Discovery guide.
- Configure `scheduled_jobs[]` with `seeds`, discovery `type`, `interval`, `concurrency`, and `retries`. Jobs start immediately on boot and then honor their interval.
- Add optional `unifi_apis[]` entries to poll UniFi Network controllers as part of discovery. Provide `base_url` and `api_key`, and only set `insecure_skip_verify` for lab testing. A trimmed example follows this list.
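As a rough orientation, a trimmed mapper.json using the top-level fields above might look like the following. The nested key names and value formats are illustrative assumptions; docker/compose/mapper.docker.json in your checkout is the authoritative reference:

```json
{
  "workers": 8,
  "max_active_jobs": 4,
  "default_credentials": { "version": "v2c", "community": "public" },
  "credentials": [
    { "cidr": "10.10.0.0/24", "version": "v3", "username": "netops" }
  ],
  "stream_config": { "device_stream": "devices" },
  "scheduled_jobs": [
    { "seeds": ["192.168.1.0/24"], "type": "basic", "interval": "6h", "concurrency": 4, "retries": 2 }
  ],
  "unifi_apis": [
    { "base_url": "https://unifi.example.org", "api_key": "<key>", "insecure_skip_verify": false }
  ]
}
```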
After saving changes, redeploy Mapper so it reloads the file:
docker-compose up -d mapper
SPIFFE In Docker Compose
docker compose up -d now launches a local SPIRE control plane automatically:
- The `serviceradar-spire-server` container runs the standalone SPIRE server defined in `docker/compose/spire/server.conf` (SQLite datastore, exposed on port `8081` inside the Compose network).
- `serviceradar-spire-bootstrap` waits for the server socket, generates a join token for the compose agent, and ensures that every ServiceRadar workload has a registration entry (`spiffe://carverauto.dev/services/<name>`, keyed by a binary-path selector like `unix:path:/usr/local/bin/serviceradar-core`).
- `serviceradar-spire-agent` reads the generated token, dials the server, and exposes the Workload API at `/run/spire/sockets/agent.sock`. Each ServiceRadar container mounts this socket read-only and now speaks SPIFFE when dialing Core, datasvc, or the other internal gRPC endpoints.
There is no additional override file to apply; the default stack ships with SPIFFE enabled, and docker-compose.spiffe.yml is kept only for backwards compatibility.
To verify the SPIRE components:
docker compose ps spire-server spire-agent
docker logs serviceradar-spire-bootstrap
docker exec serviceradar-spire-server /opt/spire/bin/spire-server entry show
docker exec serviceradar-poller ls /run/spire/sockets/agent.sock
Need the full edge onboarding workflow (where a poller runs outside Docker and
bootstraps against the demo namespace)? See the dedicated
Secure Edge Onboarding runbook, which still relies on
the helper scripts under docker/compose/edge-*.
Edge Poller Against the Kubernetes Core
When you need to run the poller away from the cluster (for example on an edge host) but still connect back to the demo namespace:
Need everything reset in one shot? Run `docker/compose/edge-poller-restart.sh`. The helper stops the stack, clears the nested SPIRE runtime volume, regenerates configs, refreshes the upstream join token/bundle, and restarts the poller/agent pair. Use `--skip-refresh` if you want to reuse an existing join token, or `--dry-run` to preview the steps.
1. Collect upstream credentials from the cluster.

   docker/compose/refresh-upstream-credentials.sh \
     --namespace demo \
     --spiffe-id spiffe://carverauto.dev/ns/edge/poller-nested-spire

   The helper contacts the demo SPIRE server via `kubectl`, generates a fresh downstream join token, recreates the registration entry with the standard `unix:*` selectors, and writes the artifacts to `docker/compose/spire/upstream-join-token` and `docker/compose/spire/upstream-bundle.pem`. Use `--ttl`, `--selectors`, or `--no-bundle` if you need different defaults.

2. Prime the Docker volumes with certs/config.

   docker compose --env-file edge-poller.env \
     -f docker/compose/poller-stack.compose.yml up -d config-updater
   docker compose -f docker/compose/poller-stack.compose.yml stop config-updater

3. Rewrite the generated poller configuration for the edge environment.

   CORE_ADDRESS=23.138.124.18:50052 \
   POLLERS_AGENT_ADDRESS=agent:50051 \
   docker/compose/setup-edge-poller.sh

   The helper copies the SPIFFE template into the config volume, updates `core_address`, and stages a nested SPIRE config from `docker/compose/edge/poller-spire/`. When `KV_ADDRESS` is omitted, it also removes the sample sweep/sysmon checker configs so the Docker agent only talks to the local services required for edge validation.

4. Start the poller (and optional agent) without re-running the config-updater.

   docker compose --env-file edge-poller.env \
     -f docker/compose/poller-stack.compose.yml up -d --no-deps poller agent
Provisioning Packages from the Core API
Edge installers no longer need direct cluster access once an operator mints an onboarding package through Core:
- Create packages from the admin UI at `/admin/edge-packages` or via the API (`POST /api/admin/edge-packages`).
- Download bundles securely: `serviceradar-cli edge-package-download --core-url https://core.example.org --id <package> --download-token <token> --output edge-package.tar.gz` writes a tarball containing `edge-poller.env`, the SPIRE artifacts, and a README. Extract it on the edge host and run `docker/compose/edge-poller-restart.sh --env-file edge-poller.env` to apply the configuration.
- Revoke compromised or unused packages: `serviceradar-cli edge-package-revoke --core-url https://core.example.org --id <package> --reason "Rotated edge host"` deletes the downstream SPIRE entry and blocks future agent attestations.
All /api/admin/edge-packages routes and the /admin/edge-packages UI are
RBAC-protected (admin role).
All generated nested SPIRE configuration lives under generated-config/poller-spire/. Re-run the config-updater container whenever you change trust domains or upstream addresses so the HCL files stay in sync.
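For example, to re-run it with the poller-stack files used in step 2 above:

```bash
docker compose --env-file edge-poller.env \
  -f docker/compose/poller-stack.compose.yml up -d config-updater
docker compose -f docker/compose/poller-stack.compose.yml stop config-updater
```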
Device Configuration
SNMP Device Setup
To monitor devices via SNMP, configure your network devices to:
- Enable SNMP v2c/v3 on the device
- Set a community string (default: `public` for v2c)
- Allow SNMP access from ServiceRadar's IP address
Example Cisco configuration:
snmp-server community public RO
snmp-server location "Data Center 1"
snmp-server contact "[email protected]"
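Before adding the device to ServiceRadar, you can confirm SNMP reachability from the Docker host with the Net-SNMP tools (assumes snmpwalk is installed and the community string matches):

```bash
# Walk the standard system MIB subtree (sysDescr, sysName, ...)
snmpwalk -v2c -c public <device-ip> 1.3.6.1.2.1.1
```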
Syslog Configuration
Configure devices to send syslog messages to ServiceRadar:
- Point syslog to ServiceRadar IP on port 514/UDP
- Set appropriate log levels (info, warning, error)
Example Cisco configuration:
logging host <serviceradar-ip>
logging facility local0
logging source-interface Loopback0
Example Linux rsyslog configuration:
# /etc/rsyslog.conf
*.* @<serviceradar-ip>:514
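To confirm that messages reach Flowgger, you can emit a test message from any Linux host with logger from util-linux:

```bash
# -n/-P pick the remote server and port, -d sends via UDP datagrams
logger -n <serviceradar-ip> -P 514 -d "ServiceRadar syslog test"
```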
SNMP Trap Configuration
Configure devices to send SNMP traps:
Example Cisco configuration:
snmp-server enable traps
snmp-server host <serviceradar-ip> public
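A test trap can be sent from any host with Net-SNMP installed, for example a generic coldStart:

```bash
# '' lets snmptrap fill in the current uptime; the OID is the standard coldStart trap
snmptrap -v2c -c public <serviceradar-ip>:162 '' 1.3.6.1.6.3.1.1.5.1
```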
Adding Devices to Monitoring
Via Web Interface
- Login to the ServiceRadar web interface
- Navigate to "Devices" → "Add Device"
- Enter device details:
- IP address or hostname
- SNMP community string
- Device type/vendor
- Save the configuration
Via API
Use the ServiceRadar API to add devices programmatically:
curl -X POST http://localhost/api/devices \
-H "Content-Type: application/json" \
-H "X-API-Key: <your-api-key>" \
-d '{
"ip": "192.168.1.1",
"hostname": "router-01",
"snmp_community": "public",
"device_type": "cisco_ios"
}'
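To confirm the device was registered, list devices through the same API (GET /api/devices; see the API Reference in the appendix):

```bash
curl -H "X-API-Key: <your-api-key>" http://localhost/api/devices
```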
Bulk Import
For bulk device import, use the ServiceRadar CLI:
# Create a CSV file with device information
echo "ip,hostname,snmp_community,device_type" > devices.csv
echo "192.168.1.1,router-01,public,cisco_ios" >> devices.csv
echo "192.168.1.2,switch-01,public,cisco_ios" >> devices.csv
# Import devices
docker-compose exec core serviceradar-cli import-devices --file=/path/to/devices.csv
Monitoring and Maintenance
Service Health
Check service status:
docker-compose ps
Check service logs:
# View all logs
docker-compose logs
# View specific service logs
docker-compose logs core
docker-compose logs web
Database Maintenance
Use Postgres tooling (psql/pg_dump) against CNPG (default host cnpg-rw:5432, database serviceradar). Example health check:
psql "postgres://serviceradar:<password>@cnpg-rw:5432/serviceradar?sslmode=verify-full" -c "SELECT 1;"
Security Considerations
Default Security Features
ServiceRadar implements several security features by default:
- mTLS Communication: All inter-service communication uses mutual TLS
- API Authentication: JWT-based authentication for API access
- Network Isolation: Services run in isolated Docker networks
- Credential Rotation: Automatic generation of secure passwords and keys
Post-Installation Security
After initial setup:
- Change the admin password immediately
- Remove the password file: `docker-compose exec core rm /etc/serviceradar/certs/password.txt`
- Restrict network access to ServiceRadar ports
- Enable HTTPS for production deployments
- Take regular backups of configuration and data
Changing Admin Password
# Generate new bcrypt hash
echo 'your-new-secure-password' | docker-compose exec -T core serviceradar-cli
# Update configuration (replace <new-hash> with output from above)
docker-compose exec core serviceradar-cli update-config \
-file=/etc/serviceradar/config/core.json \
-admin-hash='<new-hash>'
# Restart core service
docker-compose restart core
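If you prefer to generate the bcrypt hash outside the container, htpasswd from apache2-utils is one option; note it emits the $2y$ bcrypt variant, so confirm your deployment accepts it before relying on this:

```bash
# -B selects bcrypt, -C 12 the cost; tr strips the leading colon and newline
htpasswd -bnBC 12 "" 'your-new-secure-password' | tr -d ':\n'
```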
Troubleshooting
Common Issues
Services Won't Start
- Check logs: `docker-compose logs <service-name>`
- Verify prerequisites: Docker version, available resources
- Check port conflicts: Ensure required ports are available
Can't Access Web Interface
- Check nginx status: `docker-compose ps nginx`
- Check nginx logs: `docker-compose logs nginx`
- Verify the core service: `docker-compose ps core`
- Test direct access: `curl http://localhost:8090/api/status`
Database Connection Issues
- Check CNPG status (cluster or managed DB)
- Test database connection:
psql "postgres://serviceradar:<password>@cnpg-rw:5432/serviceradar?sslmode=verify-full" -c "SELECT 1;"
Certificate Issues
If you see certificate-related errors:
- Regenerate certificates:
  docker-compose down
  docker volume rm serviceradar_cert-data
  docker-compose up cert-generator
- Check certificate validity:
docker-compose exec core openssl x509 -in /etc/serviceradar/certs/core.pem -text -noout
Log Analysis
Service-Specific Logs
# Core service logs
docker-compose logs core | grep ERROR
# Web interface logs
docker-compose logs web | grep -E "(error|Error)"
# Network logs
docker-compose logs agent poller mapper
Real-Time Monitoring
# Follow all logs
docker-compose logs -f
# Follow specific service
docker-compose logs -f core
# Follow multiple services
docker-compose logs -f core web
Performance Tuning
Resource Allocation
For production deployments, consider:
# docker-compose.override.yml
version: '3.8'
services:
core:
deploy:
resources:
limits:
memory: 2G
cpus: '1.0'
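Once the override is applied, you can verify that the limits are enforced (the container name is an assumption here; check docker-compose ps for the actual one):

```bash
# One-shot snapshot of CPU/memory usage against the configured limits
docker stats --no-stream serviceradar-core
```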
Database Optimization
-- Optimize CNPG (PostgreSQL) settings; values are illustrative, adjust to your hardware
ALTER SYSTEM SET shared_buffers = '4GB';    -- takes effect after a server restart
ALTER SYSTEM SET max_parallel_workers = 8;
SELECT pg_reload_conf();
Scaling and High Availability
Horizontal Scaling
ServiceRadar supports horizontal scaling of certain components:
- Multiple Pollers: Deploy additional poller instances for load distribution
- Multiple Agents: Deploy agents across different network segments
- Database Clustering: Configure CNPG in cluster mode (Enterprise feature)
Load Balancing
For high availability, deploy multiple ServiceRadar instances behind a load balancer:
# docker-compose.ha.yml
version: '3.8'
services:
nginx-lb:
image: nginx:alpine
ports:
- "443:443"
volumes:
- ./nginx-lb.conf:/etc/nginx/nginx.conf
depends_on:
- serviceradar-1
- serviceradar-2
Migration and Upgrades
Version Upgrades
- Backup the current installation
- Update the version in your environment: `export SERVICERADAR_VERSION=v1.1.0`
- Pull new images: `docker-compose pull`
- Restart services: `docker-compose up -d`
Data Migration
When migrating between major versions:
- Export existing data
- Update configuration format if needed
- Import data to new installation
- Verify data integrity
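With the CNPG backend, the export/import steps can be done with standard Postgres tooling; a sketch, assuming the same database name on both sides (hosts and credentials are placeholders):

```bash
# Export from the old installation (custom format)
pg_dump "postgres://serviceradar:<old-password>@<old-host>:5432/serviceradar" -Fc -f serviceradar.dump
# Import into the new installation
pg_restore --clean -d "postgres://serviceradar:<new-password>@<new-host>:5432/serviceradar" serviceradar.dump
```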
Support and Resources
- Documentation: ServiceRadar Docs
- GitHub Issues: Report bugs and feature requests
- Community: [Discord/Slack community links]
- Enterprise Support: [Contact information for enterprise customers]
Appendix
Default Configuration Files
Default configuration files are available in the docker/compose/ directory:
- `core.docker.json`: Core service configuration
- `poller.docker.json`: Poller service configuration
- `web.docker.json`: Web interface configuration
- `nats.docker.conf`: NATS message bus configuration
Service Dependencies
API Reference
Key API endpoints:
- `GET /api/status`: Service health status
- `GET /api/devices`: List monitored devices
- `POST /api/devices`: Add a new device
- `GET /api/events`: Query system events
- `POST /api/query`: Execute SRQL queries
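For example, an SRQL query can be issued with curl; the request body shape and the query text here are illustrative assumptions, so check /swagger for the exact schema:

```bash
curl -X POST http://localhost/api/query \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <your-api-key>" \
  -d '{"query": "show devices"}'
```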
For complete API documentation, visit /swagger when ServiceRadar is running.