Logger System 0.1.3
High-performance C++20 thread-safe logging system with asynchronous capabilities
Tutorial: Production Configuration

This tutorial covers the configuration choices that matter when running logger_system in production: high-throughput async pipelines, log rotation and retention, OpenTelemetry export, and the operational hygiene that keeps your logging stack reliable under load.

Production Goals

A well-tuned logger should:

  • Never block the request path. Logging must be effectively zero-cost to the calling thread.
  • Survive spikes. Bursts of 10x normal load should not lose messages.
  • Bound disk usage. Files must rotate and old segments must expire.
  • Be observable. Operators need to query, correlate, and alert on logs.
  • Fail loudly but safely. Backpressure and disk-full conditions should surface metrics, not crash the process.

High-Throughput Configuration

logger_system targets 4M+ messages/sec with sub-microsecond enqueue latency. Reaching that ceiling requires the right combination of async, buffered, and batched writers.

#include <kcenon/logger/builders/writer_builder.h>

using namespace kcenon::logger;

auto bulk = writer_builder()
    .rotating_file("logs/app.log",
                   100 * 1024 * 1024,  // 100 MB per segment
                   20)                 // keep 20 segments (~2 GB)
    .buffered(8192)                    // 8k entries before flushing
    .batch(2048)                       // group syscalls into 2k-entry batches
    .async(131072)                     // 128k async queue
    .build();

auto log = logger_builder()
    .with_async(true)
    .with_queue_size(131072)
    .add_writer(std::move(bulk))
    .build();

Tuning checklist:

  • Queue size. The default is 8k entries; raise it for bursty workloads. Each slot is roughly 256 B, so a 128k queue costs ~32 MB of RSS (see the arithmetic sketch after this list).
  • Buffer size. Larger buffers mean fewer syscalls and better throughput, at the cost of higher per-entry latency. 4k-16k is a sensible range for file-backed writers.
  • Batch size. The batch decorator groups physical writes; pair it with a buffered layer for optimal cache and syscall behaviour.
  • Backend. Build with LOGGER_USE_THREAD_SYSTEM=ON to share a thread pool across multiple subsystems. The default standalone backend uses std::jthread and works without external dependencies.
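
The RSS estimate in the first bullet is easy to sanity-check at compile time. A minimal sketch, using the checklist's approximate 256 B/slot figure (an estimate, not an exact constant of the library):

#include <cstddef>

// Back-of-envelope queue sizing from the checklist above.
constexpr std::size_t slot_bytes  = 256;          // approximate per-entry cost
constexpr std::size_t queue_slots = 128 * 1024;   // 128k-entry queue
constexpr std::size_t queue_rss   = slot_bytes * queue_slots;
static_assert(queue_rss == 32 * 1024 * 1024);     // ~32 MB of RSS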

Async Logging

Asynchronous mode is the default and almost always the right choice. The calling thread enqueues an entry into a lock-free MPSC queue; a dedicated worker thread drains the queue and forwards entries to the inner chain.

auto async_chain = writer_builder()
    .file("audit.log")
    .buffered(2048)
    .async(32768)
    .build();

Operational notes:

  • Always call logger::start() before logging and logger::stop() during shutdown. stop() flushes the queue, so do not skip it on the happy path.
  • The worker thread is interruptible (std::jthread); calling stop() from a signal handler is safe as long as you do not also delete the logger from the same handler.
  • If enqueue returns an error result, the queue is full. Decide between drop (best-effort logs) and block (critical audit) based on the log type; use critical_writer for the latter. The sketch below shows the drop side.
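
A minimal sketch of the drop policy. The log() call below is an illustrative stand-in for whichever result-returning entry point your build exposes; only the pattern matters: check the result, count the drop, never block the caller.

#include <atomic>
#include <cstdint>
#include <string_view>

std::atomic<std::uint64_t> dropped_logs{0};

// Best-effort logging: if the async queue is full, count the drop and
// move on instead of blocking. Critical audit records should instead go
// through a chain built with critical_writer, which blocks when full.
template <typename Logger, typename Level>
void log_best_effort(Logger& log, Level level, std::string_view msg) {
    if (auto r = log.log(level, msg); !r) {
        ++dropped_logs;  // surfaced via the metrics described below
    }
}

The dropped_logs counter pairs naturally with the dropped_total alerting rule in the metrics section at the end of this tutorial.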

Log Rotation

rotating_file_writer rotates by file size and keeps a configurable number of historical segments. Pair it with a retention policy that matches your storage budget.

auto rotating = writer_builder()
    .rotating_file(
        "logs/app.log",    // base path; segments are app.1.log, app.2.log, ...
        50 * 1024 * 1024,  // 50 MB per segment
        14)                // keep 14 segments (~700 MB)
    .buffered(4096)
    .async()
    .build();

Tips:

  • Place log files on a dedicated volume so a rogue logger cannot exhaust the system root partition.
  • Compress rotated segments out-of-band (logrotate, cronolog, or a systemd timer running gzip). logger_system intentionally avoids in-process compression to keep the hot path predictable.
  • Combine size-based rotation with daily timestamped paths (logs/app-%Y-%m-%d.log) using a wrapper script if you need calendar-aligned shipping; the equivalent path computation in C++ is sketched below.
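
If you would rather compute the dated path in-process than in a wrapper script, the logic is a few lines of C++20. A sketch with a hypothetical dated_log_path() helper, assuming your toolchain supports <format> with chrono types:

#include <chrono>
#include <format>
#include <string>

// Hypothetical helper: today's base path for the rotating writer,
// e.g. "logs/app-2025-01-31.log". Size-based rotation still applies
// within each day's file.
std::string dated_log_path() {
    const auto today = std::chrono::floor<std::chrono::days>(
        std::chrono::system_clock::now());
    return std::format("logs/app-{:%Y-%m-%d}.log", today);
}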

OpenTelemetry Export

When LOGGER_ENABLE_OTLP=ON, the otlp_writer exports log records to any OTLP-compatible collector via HTTP/Protobuf or gRPC. It batches records, retries with exponential backoff, and propagates trace/span IDs that the structured logger has placed in the entry's context.

#include <kcenon/logger/builders/writer_builder.h>
#include <kcenon/logger/otlp/otlp_context.h>

using namespace kcenon::logger;

otlp::otlp_endpoint endpoint;
endpoint.url = "https://otel-collector.internal:4318/v1/logs";
endpoint.protocol = otlp::otlp_protocol::http_protobuf;
endpoint.headers["x-tenant"] = "checkout";

auto otlp = writer_builder()
    .otlp(endpoint)
    .buffered(1024)
    .async(16384)
    .build();

Operational guidance:

  • Run a local collector (the OpenTelemetry Collector, Vector, or Fluent Bit) on the same host to absorb network hiccups and batch upstream traffic.
  • Set service.name and service.version resource attributes via the otlp_context so logs join distributed traces correctly (sketched after this list).
  • Combine OTLP with a local file writer in a composite_writer so you keep a durable copy when the collector is unreachable.
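
A sketch of the resource-attribute step from the second bullet. The resource_attributes member is an assumption about otlp_context's shape, so check otlp_context.h in your version for the exact API:

#include <kcenon/logger/otlp/otlp_context.h>

using namespace kcenon::logger;

// The field name below is assumed, not confirmed by the headers.
otlp::otlp_context ctx;
ctx.resource_attributes["service.name"]    = "checkout";
ctx.resource_attributes["service.version"] = "1.4.2";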

Three Production Examples

Example 1: Stateless Web Service

Console for human operators, rotating file for forensic analysis, OTLP for SRE dashboards.

auto console = writer_builder().console().build();

auto rotating = writer_builder()
    .rotating_file("logs/web.log", 100 * 1024 * 1024, 10)
    .buffered(4096)
    .async(32768)
    .build();

auto otlp = writer_builder()
    .otlp({.url = "http://localhost:4318/v1/logs",
           .protocol = otlp::otlp_protocol::http_protobuf})
    .buffered(1024)
    .async(16384)
    .build();

auto log = logger_builder()
    .add_writer(std::move(console))
    .add_writer(std::move(rotating))
    .add_writer(std::move(otlp))
    .build();

Example 2: Compliance / Audit Pipeline

Encrypted at rest, durable on flush, never lossy.

auto key = security::secure_key_storage::generate_key(32).value();

auto audit = writer_builder()
    .file("logs/audit.log.enc")
    .encrypted(std::move(key))
    .buffered(1024)
    .critical()            // never drops: blocks the caller when full
    .build();

auto log = logger_builder()
    .with_async(false)     // sync to guarantee durability per call
    .add_writer(std::move(audit))
    .build();

Example 3: Batch Job with Bulk Throughput

A nightly ETL job that processes millions of records and needs maximum throughput. Keep the console for the operator and dump everything to a high-throughput file pipeline.

auto console = writer_builder().console().build();

auto bulk = writer_builder()
    .rotating_file("logs/etl.log", 256 * 1024 * 1024, 5)
    .buffered(16384)
    .batch(4096)
    .async(262144)         // 256k async queue
    .build();

auto log = logger_builder()
    .with_async(true)
    .with_queue_size(262144)
    .add_writer(std::move(console))
    .add_writer(std::move(bulk))
    .build();

log->start();
run_etl(*log);
log->stop();               // drains the queue (up to 256k entries) on shutdown

Metrics and Self-Observability

Build with the monitoring backend (LOGGER_USE_MONITORING=ON) or attach the metrics collector exposed by logger::metrics() to gain visibility into:

  • Queue depth and high-water mark
  • Drop counts (per queue and per filter)
  • Bytes written, syscalls issued, retries triggered
  • Per-writer latency histograms

Export those counters to your monitoring system (Prometheus, OpenTelemetry, or monitoring_system) and alert on dropped_total > 0 and queue_high_water > 0.8 * capacity.
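
A polling sketch under stated assumptions: the member names on the object returned by logger::metrics() (queue_depth, dropped_total) and the sink interface are illustrative, so adapt them to the headers you build against.

#include <chrono>
#include <stop_token>
#include <thread>

// Hypothetical exporter loop: scrape the logger's self-metrics on a
// fixed interval and forward them to a metrics sink of your choosing.
template <typename Logger, typename Sink>
void export_logger_metrics(std::stop_token st, Logger& log, Sink& sink) {
    while (!st.stop_requested()) {
        const auto m = log.metrics();
        sink.gauge("logger_queue_depth", m.queue_depth);
        sink.counter("logger_dropped_total", m.dropped_total);
        std::this_thread::sleep_for(std::chrono::seconds(15));
    }
}

Run the loop on a std::jthread so it stops cleanly alongside the logger's own worker during shutdown.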

Next Steps