# ServiceRadar Cluster
The ServiceRadar demo cluster bundles the core platform services into a single Kubernetes namespace so you can explore the full data path end to end. Use this page when you need to understand which workloads are running, how they communicate, and where to look during incident response.
## Core Services
| Component | Purpose | Default Deployment |
|---|---|---|
| Core API | Accepts poller reports, exposes the public API, and fans out notifications. | `deploy/serviceradar-core` |
| Poller | Coordinates health checks against agents and external targets. | `deploy/serviceradar-poller` |
| Sync | Ingests metadata from external systems (e.g., NetBox, Armis) and keeps the registry current. | `deploy/serviceradar-sync` |
| Registry | Stores canonical device inventory and service relationships. | `statefulset/serviceradar-registry` |
| Data Service | Provides dynamic configuration (KV) and Object Store via NATS JetStream. | `statefulset/serviceradar-datasvc` |
| Web UI | Serves dashboards and embeds SRQL explorers. | `deploy/serviceradar-web` |
Each deployment surfaces the `serviceradar.io/component` label; use it to filter logs and metrics when debugging issues across the cluster.
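To see exactly which pods back a component, the label above works as a selector. The snippet below is a minimal sketch that assumes the `demo` namespace used in this page's logging example; the component value `poller` is purely illustrative.

```bash
# List the pods behind one component (namespace "demo" and component value
# "poller" are illustrative, taken from this page's examples).
kubectl get pods -n demo -l serviceradar.io/component=poller

# Tail recent logs from all replicas of that component.
kubectl logs -n demo -l serviceradar.io/component=poller --tail=50
```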
## Supporting Data Plane
- CNPG / Timescale: CloudNativePG cluster that stores registry state plus every telemetry hypertable (events, logs, OTEL metrics/traces). The demo namespace creates it via `cnpg-cluster.yaml` and exposes the RW service at `cnpg-rw.<namespace>.svc`.
- Faker: Generates synthetic Armis datasets for demos and developer testing. Deployed as `deploy/serviceradar-faker` and backed by `pvc/serviceradar-faker-data`.
- Ingress: The `serviceradar-gateway` service exposes HTTPS endpoints for the web UI and API; mutual TLS is enforced between internal components via `serviceradar-ca`.
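A quick sanity check that the supporting data plane is wired up is to look for the objects named above. This is a sketch that assumes the `demo` namespace from the logging example; the resource names come straight from this page.

```bash
# Confirm the CNPG read/write service, the Faker volume claim, and the
# Faker deployment exist (names taken from this page; namespace assumed "demo").
kubectl get svc cnpg-rw -n demo
kubectl get pvc serviceradar-faker-data -n demo
kubectl get deploy serviceradar-faker -n demo
```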
## Observability Hooks
- Logs: All pods write to STDOUT/STDERR; aggregate with `kubectl logs -n demo -l serviceradar.io/component=<name>` (see the example after this list).
- Metrics: Pollers scrape Sysmon VM exporters every 60 seconds; ensure the jobs stay within the five-minute hostfreq retention window.
- Tracing: Distributed traces flow through the OTLP gateway (`service/serviceradar-otel`) and land in CNPG/Timescale for correlation with SRQL queries.
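When a component runs several replicas, interleaved log output is easier to read with pod-name prefixes. A minimal sketch, again assuming the `demo` namespace; the component value `core` and the `serviceradar-otel` service name are taken from this page.

```bash
# Follow logs from every replica of one component; --prefix labels each
# line with its pod name so interleaved output stays readable.
kubectl logs -n demo -l serviceradar.io/component=core -f --prefix

# Verify the OTLP gateway service referenced above is present in-cluster.
kubectl get svc serviceradar-otel -n demo
```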
## Operational Tips
- Use `kubectl get pods -n demo` to verify rollouts (see the sketch after this list). Most deployments support at least two replicas; scale `serviceradar-sync` during heavy reconciliation.
- Persistent stores (`registry`, `kv`, `cnpg`, `faker`) rely on PVCs; confirm volume mounts before recycling pods.
- The demo namespace is designed for experimentation. When you need a clean slate, follow the runbooks in `agents.md` to reset Faker, truncate the CNPG hypertables, and rebuild materialized views.
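The checks below tie these tips together: verify rollouts, confirm PVCs are bound before recycling pods, and add sync capacity during a heavy reconciliation run. This is a sketch rather than a prescribed procedure; the replica count is illustrative and the namespace is assumed to be `demo`.

```bash
# Verify pods and rollout status before and after changes.
kubectl get pods -n demo
kubectl rollout status deploy/serviceradar-core -n demo

# Confirm persistent volume claims are Bound before recycling pods.
kubectl get pvc -n demo

# Temporarily scale the sync workload during heavy reconciliation
# (the replica count here is illustrative).
kubectl scale deploy/serviceradar-sync -n demo --replicas=3
```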
For component-specific configuration, see the guides under Deployment and Get Data In. SRQL-specific authentication and rate limit settings live in the SRQL Service Configuration guide.