ServiceRadar Docker Setup Guide

This guide walks you through setting up ServiceRadar using Docker Compose, including initial configuration, device setup, and troubleshooting.

Prerequisites

  • Docker Engine 20.10+
  • Docker Compose 2.0+
  • 8GB+ RAM recommended
  • 50GB+ available disk space
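
A quick preflight check for these prerequisites, using standard commands:

docker --version          # expect 20.10 or newer
docker compose version    # expect 2.0 or newer
free -h                   # confirm 8GB+ RAM is available
df -h .                   # confirm 50GB+ free disk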

Quick Start

1. Initial Setup

Clone the repository and navigate to the ServiceRadar directory:

git clone https://github.com/carverauto/serviceradar.git
cd serviceradar

2. First-Time Startup

Start the ServiceRadar stack for the first time:

SERVICERADAR_VERSION=latest docker-compose up -d

Important: On first startup, ServiceRadar will:

  • Generate mTLS certificates for secure communication
  • Create random passwords and API keys
  • Generate a bcrypt hash for the admin user
  • Display the admin credentials in the config-updater service logs

3. Retrieve Admin Credentials

To see your admin credentials, check the config-updater logs:

docker-compose logs config-updater

Look for output like:

🔐 IMPORTANT: ServiceRadar Admin Credentials
=============================================
Username: admin
Password: AbC123xYz789

Save this password immediately! You'll need it to log into the ServiceRadar web interface.

4. Access ServiceRadar

Once all services are running, access ServiceRadar at http://localhost (the Nginx proxy listens on port 80 by default).

Login with:

  • Username: admin
  • Password: (from step 3)

Architecture Overview

ServiceRadar consists of these main components:

Core Services

  • Core: Main API and business logic service
  • Web: Next.js web interface
  • Nginx: Reverse proxy and load balancer
  • CNPG: PostgreSQL database (managed via CloudNativePG)

Data Collection Services

  • Poller: Device polling and monitoring service
  • Agent: Network discovery and ICMP monitoring
  • Flowgger: Syslog message collector
  • Trapd: SNMP trap collector
  • Mapper: Network discovery via SNMP

Supporting Services

  • NATS: Message bus and event streaming
  • KV: Key-value store for configuration
  • Sync: Device discovery synchronization
  • DB Event Writer: NATS to database bridge

Monitoring Services

  • OTEL: OpenTelemetry metrics collector
  • Zen: Event processing and alerting engine

Configuration

Environment Variables

Create a .env file in the ServiceRadar directory:

# ServiceRadar Version
SERVICERADAR_VERSION=latest

# Logging Level
LOG_LEVEL=info
RUST_LOG=info

# Database Settings
PROTON_LOG_LEVEL=error

Volume Mounts

ServiceRadar uses the following Docker volumes:

  • cert-data: mTLS certificates and API keys
  • credentials: Database passwords and secrets
  • generated-config: Generated configuration files
  • *-data: Service-specific data storage
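
To inspect these volumes (the serviceradar_ prefix assumes the default Compose project name, matching the volume removal command used later in this guide):

docker volume ls --filter name=serviceradar
docker volume inspect serviceradar_cert-data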

Ports

Default exposed ports:

Service    Port        Protocol  Purpose
Nginx      80          HTTP      Web interface and API
Core       8090        HTTP      Direct API access
NATS       4222, 8222  TCP       Message bus
Flowgger   514         UDP       Syslog collection
Trapd      162         UDP       SNMP trap collection

Mapper Service Configuration

Docker Compose mounts docker/compose/mapper.docker.json into /etc/serviceradar/mapper.json for the serviceradar-mapper container. Update this file when you need to adjust SNMP discovery (a configuration sketch follows the list):

  • Set workers, max_active_jobs, and timeouts to match how many concurrent SNMP sessions your network can handle.
  • Populate default_credentials for blanket SNMP access, then add credentials[] entries for per-CIDR overrides (v2c or v3). Place the most specific subnets first.
  • Extend the oids blocks if you want Mapper to gather vendor-specific identifiers during basic, interfaces, or topology runs.
  • Use stream_config to tag events and, if needed, rename the CNPG streams used for devices (device_stream), interfaces, and topology discovery. The defaults align with the pipelines described in the Discovery guide.
  • Configure scheduled_jobs[] with seeds, discovery type, interval, concurrency, and retries. Jobs start immediately on boot and then honor their interval.
  • Add optional unifi_apis[] entries to poll UniFi Network controllers as part of discovery. Provide base_url, api_key, and only set insecure_skip_verify for lab testing.
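
A minimal illustrative mapper.json sketch covering the knobs above. The authoritative schema is the shipped docker/compose/mapper.docker.json; the values, and the nested key names inside each block, are examples rather than defaults:

{
  "workers": 4,
  "max_active_jobs": 2,
  "default_credentials": { "version": "v2c", "community": "public" },
  "credentials": [
    { "target": "10.0.10.0/24", "version": "v2c", "community": "core-ro" }
  ],
  "scheduled_jobs": [
    {
      "seeds": ["10.0.10.1"],
      "type": "topology",
      "interval": "1h",
      "concurrency": 2,
      "retries": 1
    }
  ]
}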

After saving changes, redeploy Mapper so it reloads the file:

docker-compose up -d mapper

SPIFFE In Docker Compose

docker compose up -d now launches a local SPIRE control plane automatically:

  1. The serviceradar-spire-server container runs the standalone SPIRE server defined in docker/compose/spire/server.conf (SQLite datastore, exposed on port 8081 inside the Compose network).
  2. serviceradar-spire-bootstrap waits for the server socket, generates a join token for the compose agent, and ensures that every ServiceRadar workload has a registration entry (spiffe://carverauto.dev/services/<name> keyed by the binary path selector like unix:path:/usr/local/bin/serviceradar-core).
  3. serviceradar-spire-agent reads the generated token, dials the server, and exposes the Workload API at /run/spire/sockets/agent.sock. Each ServiceRadar container mounts this socket read-only and now speaks SPIFFE when dialing Core, datasvc, or the other internal gRPC endpoints.

There is no additional override file to apply—the default stack ships with SPIFFE enabled and docker-compose.spiffe.yml is kept only for backwards compatibility. To verify the SPIRE components:

docker compose ps spire-server spire-agent
docker logs serviceradar-spire-bootstrap
docker exec serviceradar-spire-server /opt/spire/bin/spire-server entry show
docker exec serviceradar-poller ls /run/spire/sockets/agent.sock

Need the full edge onboarding workflow (where a poller runs outside Docker and bootstraps against the demo namespace)? See the dedicated Secure Edge Onboarding runbook, which still relies on the helper scripts under docker/compose/edge-*.

Edge Poller Against the Kubernetes Core

When you need to run the poller away from the cluster (for example on an edge host) but still connect back to the demo namespace:

Need everything reset in one shot? Run:

docker/compose/edge-poller-restart.sh

The helper stops the stack, clears the nested SPIRE runtime volume, regenerates configs, refreshes the upstream join token/bundle, and restarts the poller/agent pair. Use --skip-refresh if you want to reuse an existing join token or --dry-run to preview the steps.

  1. Collect upstream credentials from the cluster.

    docker/compose/refresh-upstream-credentials.sh \
    --namespace demo \
    --spiffe-id spiffe://carverauto.dev/ns/edge/poller-nested-spire

    The helper contacts the demo SPIRE server via kubectl, generates a fresh downstream join token, recreates the registration entry with the standard unix:* selectors, and writes the artifacts to docker/compose/spire/upstream-join-token and docker/compose/spire/upstream-bundle.pem. Use --ttl, --selectors, or --no-bundle if you need different defaults.

  2. Prime the Docker volumes with certs/config.

    docker compose --env-file edge-poller.env \
    -f docker/compose/poller-stack.compose.yml up -d config-updater
    docker compose -f docker/compose/poller-stack.compose.yml stop config-updater
  3. Rewrite the generated poller configuration for the edge environment.

    CORE_ADDRESS=23.138.124.18:50052 \
    POLLERS_AGENT_ADDRESS=agent:50051 \
    docker/compose/setup-edge-poller.sh

    The helper copies the SPIFFE template into the config volume, updates core_address, and stages a nested SPIRE config from docker/compose/edge/poller-spire/. When KV_ADDRESS is omitted, it also removes the sample sweep/sysmon checker configs so the Docker agent only talks to the local services required for edge validation.

  4. Start the poller (and optional agent) without re-running the config-updater.

    docker compose --env-file edge-poller.env \
    -f docker/compose/poller-stack.compose.yml up -d --no-deps poller agent

Provisioning Packages from the Core API

Edge installers no longer need direct cluster access once an operator mints an onboarding package through Core:

  • Create packages from the admin UI at /admin/edge-packages or via the API (POST /api/admin/edge-packages).
  • Download bundles securely:
    serviceradar-cli edge-package-download --core-url https://core.example.org --id <package> --download-token <token> --output edge-package.tar.gz
    This writes a tarball containing edge-poller.env, the SPIRE artifacts, and a README. Extract it on the edge host and run docker/compose/edge-poller-restart.sh --env-file edge-poller.env to apply the configuration.
  • Revoke compromised or unused packages:
    serviceradar-cli edge-package-revoke --core-url https://core.example.org --id <package> --reason "Rotated edge host" deletes the downstream SPIRE entry and blocks future agent attestations.

All /api/admin/edge-packages routes and the /admin/edge-packages UI are RBAC-protected (admin role).

All generated nested SPIRE configuration lives under generated-config/poller-spire/. Re-run the config-updater container whenever you change trust domains or upstream addresses so the HCL files stay in sync.

Device Configuration

SNMP Device Setup

To monitor devices via SNMP, configure your network devices to:

  1. Enable SNMP v2c/v3 on the device
  2. Set community string (default: public for v2c)
  3. Allow SNMP access from ServiceRadar's IP address

Example Cisco configuration:

snmp-server community public RO
snmp-server location "Data Center 1"
snmp-server contact "<contact-email>"

Syslog Configuration

Configure devices to send syslog messages to ServiceRadar:

  1. Point syslog to ServiceRadar IP on port 514/UDP
  2. Set appropriate log levels (info, warning, error)

Example Cisco configuration:

logging host <serviceradar-ip>
logging facility local0
logging source-interface Loopback0

Example Linux rsyslog configuration:

# /etc/rsyslog.conf
*.* @<serviceradar-ip>:514
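
To verify delivery from a Linux host, send a test message with util-linux logger (assuming UDP/514 is reachable from that host):

logger -n <serviceradar-ip> -P 514 -d "serviceradar syslog test"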

SNMP Trap Configuration

Configure devices to send SNMP traps:

Example Cisco configuration:

snmp-server enable traps
snmp-server host <serviceradar-ip> public
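
To send a test trap from a Linux host with net-snmp's snmptrap (the linkUp trap OID here is just a convenient example):

snmptrap -v 2c -c public <serviceradar-ip>:162 '' 1.3.6.1.6.3.1.1.5.4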

Adding Devices to Monitoring

Via Web Interface

  1. Login to the ServiceRadar web interface
  2. Navigate to "Devices" → "Add Device"
  3. Enter device details:
    • IP address or hostname
    • SNMP community string
    • Device type/vendor
  4. Save the configuration

Via API

Use the ServiceRadar API to add devices programmatically:

curl -X POST http://localhost/api/devices \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <your-api-key>" \
  -d '{
    "ip": "192.168.1.1",
    "hostname": "router-01",
    "snmp_community": "public",
    "device_type": "cisco_ios"
  }'

Bulk Import

For bulk device import, use the ServiceRadar CLI:

# Create a CSV file with device information
echo "ip,hostname,snmp_community,device_type" > devices.csv
echo "192.168.1.1,router-01,public,cisco_ios" >> devices.csv
echo "192.168.1.2,switch-01,public,cisco_ios" >> devices.csv

# Import devices
docker-compose exec core serviceradar-cli import-devices --file=/path/to/devices.csv

Monitoring and Maintenance

Service Health

Check service status:

docker-compose ps

Check service logs:

# View all logs
docker-compose logs

# View specific service logs
docker-compose logs core
docker-compose logs web

Database Maintenance

Use Postgres tooling (psql/pg_dump) against CNPG (default host cnpg-rw:5432, database serviceradar). Example health check:

psql "postgres://serviceradar:<password>@cnpg-rw:5432/serviceradar?sslmode=verify-full" -c "SELECT 1;"

Security Considerations

Default Security Features

ServiceRadar implements several security features by default:

  1. mTLS Communication: All inter-service communication uses mutual TLS
  2. API Authentication: JWT-based authentication for API access (see the example after this list)
  3. Network Isolation: Services run in isolated Docker networks
  4. Credential Rotation: Automatic generation of secure passwords and keys
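
For example, an authenticated status call; how you obtain the JWT is deployment-specific, so the $TOKEN value here is assumed to come from your login flow:

curl -H "Authorization: Bearer $TOKEN" http://localhost/api/status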

Post-Installation Security

After initial setup:

  1. Change the admin password immediately
  2. Remove the password file: docker-compose exec core rm /etc/serviceradar/certs/password.txt
  3. Restrict network access to ServiceRadar ports
  4. Enable HTTPS for production deployments
  5. Regular backups of configuration and data

Changing Admin Password

# Generate new bcrypt hash
echo 'your-new-secure-password' | docker-compose exec -T core serviceradar-cli

# Update configuration (replace <new-hash> with output from above)
docker-compose exec core serviceradar-cli update-config \
-file=/etc/serviceradar/config/core.json \
-admin-hash='<new-hash>'

# Restart core service
docker-compose restart core

Troubleshooting

Common Issues

Services Won't Start

  1. Check logs: docker-compose logs <service-name>
  2. Verify prerequisites: Docker version, available resources
  3. Check port conflicts: Ensure required ports are available (see the check after this list)
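
A quick conflict check for the default ports from the Ports table (ss from iproute2; run as root to see owning processes):

ss -tulpn | grep -E ':(80|8090|4222|8222|514|162)([^0-9]|$)'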

Can't Access Web Interface

  1. Check nginx status: docker-compose ps nginx
  2. Check nginx logs: docker-compose logs nginx
  3. Verify core service: docker-compose ps core
  4. Test direct access: curl http://localhost:8090/api/status

Database Connection Issues

  1. Check CNPG status (cluster or managed DB)
  2. Test database connection:
    psql "postgres://serviceradar:<password>@cnpg-rw:5432/serviceradar?sslmode=verify-full" -c "SELECT 1;"

Certificate Issues

If you see certificate-related errors:

  1. Regenerate certificates:

    docker-compose down
    docker volume rm serviceradar_cert-data
    docker-compose up cert-generator
  2. Check certificate validity:

    docker-compose exec core openssl x509 -in /etc/serviceradar/certs/core.pem -text -noout

Log Analysis

Service-Specific Logs

# Core service logs
docker-compose logs core | grep ERROR

# Web interface logs
docker-compose logs web | grep -E "(error|Error)"

# Network logs
docker-compose logs agent poller mapper

Real-Time Monitoring

# Follow all logs
docker-compose logs -f

# Follow specific service
docker-compose logs -f core

# Follow multiple services
docker-compose logs -f core web

Performance Tuning

Resource Allocation

For production deployments, consider:

# docker-compose.override.yml
version: '3.8'
services:
  core:
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '1.0'
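
Compose reads docker-compose.override.yml automatically, so the limits take effect on the next docker-compose up -d.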

Database Optimization

# postgresql.conf (illustrative values; size to your host and apply via your CNPG cluster configuration)
shared_buffers = 2GB
work_mem = 64MB
max_parallel_workers = 8

Scaling and High Availability

Horizontal Scaling

ServiceRadar supports horizontal scaling of certain components:

  1. Multiple Pollers: Deploy additional poller instances for load distribution (see the sketch after this list)
  2. Multiple Agents: Deploy agents across different network segments
  3. Database Clustering: Configure CNPG in cluster mode (Enterprise feature)
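
A minimal scaling sketch with Compose; this assumes the poller service does not pin a container_name, since Compose cannot scale services with fixed container names:

docker-compose up -d --scale poller=3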

Load Balancing

For high availability, deploy multiple ServiceRadar instances behind a load balancer:

# docker-compose.ha.yml
version: '3.8'
services:
  nginx-lb:
    image: nginx:alpine
    ports:
      - "443:443"
    volumes:
      - ./nginx-lb.conf:/etc/nginx/nginx.conf
    depends_on:
      - serviceradar-1
      - serviceradar-2

Migration and Upgrades

Version Upgrades

  1. Backup current installation
  2. Update version in environment:
    export SERVICERADAR_VERSION=v1.1.0
  3. Pull new images:
    docker-compose pull
  4. Restart services:
    docker-compose up -d

Data Migration

When migrating between major versions:

  1. Export existing data
  2. Update configuration format if needed
  3. Import data to new installation
  4. Verify data integrity

Appendix

Default Configuration Files

Default configuration files are available in the docker/compose/ directory:

  • core.docker.json: Core service configuration
  • poller.docker.json: Poller service configuration
  • web.docker.json: Web interface configuration
  • nats.docker.conf: NATS message bus configuration

API Reference

Key API endpoints:

  • GET /api/status: Service health status
  • GET /api/devices: List monitored devices
  • POST /api/devices: Add new device
  • GET /api/events: Query system events
  • POST /api/query: Execute SRQL queries
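
For example, listing devices with the API key generated at first startup:

curl -H "X-API-Key: <your-api-key>" http://localhost/api/devices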

For complete API documentation, visit /swagger when ServiceRadar is running.