Collector Resource Utilization

The Speedscale eBPF collector (nettap) runs as a Kubernetes DaemonSet with two containers per pod: capture and ingest. The capture container runs eBPF programs in kernel space to observe network traffic, while the ingest container buffers captured data and forwards it to the Speedscale cloud.

Because the collector runs on every node in the cluster, understanding its resource footprint is important for capacity planning and node sizing.

Observed Resource Usage

The table below shows peak resource usage per container at various request rates. Measurements were taken while capturing a single workload: a Go HTTP application making outbound TLS calls.

| QPS  | Capture CPU | Capture Mem | Ingest CPU | Ingest Mem | Total CPU | Total Mem |
|------|-------------|-------------|------------|------------|-----------|-----------|
| idle | 1m          | 408Mi       | 1m         | 26Mi       | 2m        | 434Mi     |
| 1    | 2m          | 408Mi       | 2m         | 26Mi       | 4m        | 434Mi     |
| 10   | 6m          | 408Mi       | 6m         | 29Mi       | 12m       | 437Mi     |
| 50   | 18m         | 410Mi       | 9m         | 33Mi       | 27m       | 443Mi     |
| 100  | 68m         | 417Mi       | 16m        | 40Mi       | 84m       | 457Mi     |
| 200  | 91m         | 421Mi       | 38m        | 60Mi       | 129m      | 481Mi     |
| 500  | 208m        | 429Mi       | 87m        | 153Mi      | 295m      | 582Mi     |
| 1000 | 414m        | 487Mi       | 116m       | 340Mi      | 530m      | 827Mi     |
Note: These observations were made while capturing a single workload with outbound TLS traffic. Your results will vary based on the number of captured workloads, traffic patterns, payload sizes, and protocol mix.

Key Observations

Memory

The capture container has a relatively high baseline memory footprint at idle due to eBPF maps and kernel BTF (BPF Type Format) data that must be loaded at startup. Memory usage grows modestly with traffic volume. The ingest container starts small but grows under load as it buffers captured data for forwarding.

CPU

CPU usage scales roughly linearly with request volume: total CPU climbs from about 295m at 500 QPS to about 530m at 1000 QPS. Of the two containers, capture is the heavier consumer.

Post-Load Drain

After a high-throughput period ends, the collector may continue consuming elevated resources while it drains buffered data. Plan for this lag when evaluating resource headroom.

Log Level Impact

The observations above were taken at the default info log level. Running the collector at debug level significantly increases CPU usage in the ingest container. Under debug logging the ingest container can become CPU-throttled at moderate traffic rates, which limits its ability to forward captured data. The capture container is largely unaffected by log level.

Avoid debug logging under production load. If debug logging is needed for troubleshooting, use it briefly during low-traffic periods or temporarily increase the ingest container's CPU limit.
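
If debug logging is needed for a short troubleshooting window, one way to keep the ingest container from being throttled is to temporarily raise the collector's CPU limit. Below is a minimal values sketch, assuming the ebpf.resources structure described under Default Configuration; Helm merges this with the chart's defaults, so only the CPU limit changes. Revert it once troubleshooting is done.

ebpf:
  enabled: true
  resources:
    limits:
      # temporary extra CPU headroom so the ingest container can keep
      # forwarding captured data while debug logging is enabled
      cpu: "1000m"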

Default Configuration

Collector resource requests and limits are configured through the Helm chart. The defaults are applied per container (capture and ingest), so each collector pod requests a total of 200m CPU and 512M of memory and is limited to 1 CPU and 2G:

| Setting        | Default |
|----------------|---------|
| CPU request    | 100m    |
| Memory request | 256M    |
| CPU limit      | 500m    |
| Memory limit   | 1G      |

To override these defaults, set the appropriate values in your Helm values file. For example:

ebpf:
  enabled: true
  resources:
    requests:
      cpu: "200m"
      memory: "512M"
    limits:
      cpu: "1000m"
      memory: "2G"

Monitoring

Monitor collector pod resource usage through your cluster monitoring solution. Watch for:

  • CPU throttling on the capture container, which can cause dropped traffic.
  • Memory pressure on the ingest container under sustained high throughput.
  • OOMKill events if limits are set too low for the observed traffic volume.
  • Scheduling issues caused by the default memory request (256M) sitting below the capture container's actual idle footprint (~408Mi).

The collector exposes a /stats endpoint that reports probe call counts, bytes processed, and chunks published. Use this for application-level monitoring of the capture pipeline.

If you observe any of these issues, increase the corresponding resource requests or limits in your Helm values.
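
As one starting point, the sketch below (values assumed for illustration, using the same ebpf.resources structure as above) sizes the memory request above the capture container's observed ~408Mi idle footprint and leaves limit headroom for ingest buffering and the post-load drain:

ebpf:
  enabled: true
  resources:
    requests:
      cpu: "100m"
      # above the capture container's ~408Mi idle footprint, so the scheduler
      # accounts for the memory the pod actually uses
      memory: "512M"
    limits:
      cpu: "1000m"
      # headroom for ingest buffering under sustained load and post-load drain
      memory: "2G"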