doc_id: "LOG-INTR-002"
doc_title: "OpenTelemetry Integration Guide"
doc_version: "1.0.0"
doc_date: "2026-04-04"
doc_status: "Released"
project: "logger_system"
category: "INTR"
OpenTelemetry Integration Guide
SSOT: This document is the single source of truth for the OpenTelemetry Integration Guide.
Version: 0.3.0.0+ Status: Stable
This guide covers the OpenTelemetry Protocol (OTLP) integration in logger_system, enabling seamless correlation between logs, traces, and metrics for cloud-native observability.
Overview
OpenTelemetry is the CNCF standard for observability, providing unified APIs for traces, metrics, and logs. Logger System's OTLP integration enables:
| Feature | Description |
|---------|-------------|
| Trace Correlation | Logs linked to distributed traces via trace_id/span_id |
| Unified Export | Single protocol for exporting to observability platforms |
| Ecosystem Support | Works with Jaeger, Zipkin, Prometheus, Grafana, Datadog |
| Context Propagation | Automatic thread-local context handling |
Installation
With vcpkg
```bash
# Install with OTLP support
vcpkg install kcenon-logger-system[otlp]
```
With CMake
```bash
cmake -B build -DLOGGER_ENABLE_OTLP=ON
cmake --build build
```
Dependencies
The OTLP feature requires:
- opentelemetry-cpp
- protobuf
- grpc (for gRPC transport)
Quick Start
Basic OTLP Export
```cpp
#include <memory>
// Project headers omitted; names follow the logger_builder / otlp_writer API.

int main() {
    otlp_writer::config otlp_cfg{  // config type name assumed
        .endpoint = "http://otel-collector:4318/v1/logs",
        .protocol = otlp_writer::protocol_type::http,
        .service_name = "my-service",
        .service_version = "1.0.0",
        .resource_attributes = {
            {"environment", "production"},
            {"region", "us-east-1"}
        }
    };

    auto result = logger_builder{}
        .add_writer("otlp", std::make_unique<otlp_writer>(otlp_cfg))
        .build();
    if (!result) {
        return -1;
    }
    auto logger = std::move(result.value());  // accessor name may differ

    logger->log(log_level::info, "Application started");
    return 0;
}
```
With Trace Correlation
```cpp
void handle_request(const Request& req, logger* log) {
    // Extract trace context from the incoming W3C traceparent header
    otel_context ctx{
        .trace_id = req.headers["traceparent"].substr(3, 32),
        .span_id = req.headers["traceparent"].substr(36, 16),
        .trace_flags = "01"
    };
    log->set_otel_context(ctx);

    log->log(log_level::info, "Processing request");
    process_request(req);
    log->log(log_level::info, "Request completed");

    log->clear_otel_context();
}
```
Using RAII Scope Guard
```cpp
void handle_request(const Request& req) {
    // Guard sets the context now and clears it when the scope exits
    // (guard type name illustrative)
    otel_scoped_context guard{otel_context{
        .trace_id = extract_trace_id(req),
        .span_id = extract_span_id(req)
    }};

    process_request(req);
}  // context cleared automatically here
```
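Under the hood, such a guard is just constructor/destructor symmetry over the thread-local context storage. A minimal self-contained sketch of the idea (type names here are illustrative stand-ins, not the library's actual types):

```cpp
#include <optional>
#include <string>
#include <utility>

// Stand-in for the library's thread-local context slot (illustrative).
struct ctx { std::string trace_id; };
thread_local std::optional<ctx> current;

// Set the context on construction, clear it on destruction, so the
// context cannot leak past the scope even on early return or exception.
class scoped_ctx {
public:
    explicit scoped_ctx(ctx c) { current = std::move(c); }
    ~scoped_ctx() { current.reset(); }
    scoped_ctx(const scoped_ctx&) = delete;
    scoped_ctx& operator=(const scoped_ctx&) = delete;
};

bool cleared_after_scope() {
    {
        scoped_ctx guard{ctx{"abc123"}};
        if (!current) return false;  // context visible inside the scope
    }
    return !current.has_value();     // cleared automatically at scope exit
}
```

Because cleanup is tied to scope exit, the context is released on every path out of the handler, which is the reason to prefer the guard over manual set/clear pairs.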
Configuration
OTLP Writer Configuration
```cpp
otlp_writer::config cfg{  // config type name assumed
    // Transport
    .endpoint = "http://localhost:4318/v1/logs",
    .protocol = otlp_writer::protocol_type::http,
    .timeout = std::chrono::milliseconds{5000},
    .use_tls = false,

    // Service identity
    .service_name = "my-service",
    .service_version = "1.0.0",
    .service_namespace = "production",
    .service_instance_id = "pod-abc123",
    .resource_attributes = {
        {"deployment.environment", "production"},
        {"cloud.region", "us-east-1"},
        {"host.name", "server-01"}
    },

    // Batching and reliability
    .max_batch_size = 512,
    .flush_interval = std::chrono::milliseconds{5000},
    .max_queue_size = 10000,
    .max_retries = 3,
    .retry_delay = std::chrono::milliseconds{100},

    // Extra HTTP headers (e.g. authentication)
    .headers = {
        {"Authorization", "Bearer token123"}
    }
};
```
Protocol Selection
| Protocol | Port | Use Case |
|----------|------|----------|
| HTTP | 4318 | Simpler setup, firewall-friendly |
| gRPC | 4317 | Better performance, streaming support |
```cpp
// HTTP (port 4318)
cfg.endpoint = "http://collector:4318/v1/logs";
cfg.protocol = otlp_writer::protocol_type::http;

// gRPC (port 4317)
cfg.endpoint = "collector:4317";
cfg.protocol = otlp_writer::protocol_type::grpc;
```
Trace Context
otel_context Structure
```cpp
struct otel_context {
    std::string trace_id;     // 32-char lowercase hex (W3C trace-id)
    std::string span_id;      // 16-char lowercase hex (W3C parent-id)
    std::string trace_flags;  // 2-char hex; "01" = sampled
    std::string trace_state;  // optional vendor-specific tracestate

    bool is_valid() const;    // well-formed, non-zero IDs
    bool is_sampled() const;  // sampled bit set in trace_flags
};
```
Thread-Local Storage
```cpp
// Set the context for the current thread
otlp::otel_context_storage::set(ctx);

// Read it back (returns an optional)
auto ctx_opt = otlp::otel_context_storage::get();
if (ctx_opt) {
    std::cout << "Trace ID: " << ctx_opt->trace_id << "\n";
}

// Check presence without copying
if (otlp::otel_context_storage::has_context()) {
    // ...
}

// Clear when the unit of work completes
otlp::otel_context_storage::clear();
```
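Because the storage is thread-local, a context set on one thread is invisible to every other thread; work dispatched to a thread pool must carry the context along and set it again in the worker. A self-contained demonstration of that isolation, using a local stand-in for the storage:

```cpp
#include <optional>
#include <string>
#include <thread>

// Local stand-in for otel_context_storage's thread-local slot (illustrative).
struct ctx { std::string trace_id; };
thread_local std::optional<ctx> tl_ctx;

bool context_visible_in_new_thread() {
    tl_ctx = ctx{"0af7651916cd43dd8448eb211c80319c"};  // set on this thread
    bool seen = true;
    std::thread worker([&] { seen = tl_ctx.has_value(); });
    worker.join();
    return seen;  // false: the worker's thread-local slot starts empty
}
```

So when handing a request off to a worker, copy the context into the task object and call set() inside the worker before any logging happens there.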
Context in Log Entries
```cpp
// While a context like this is active, every entry the writer exports
// carries its trace_id and span_id
otel_context ctx{
    .trace_id = "...",
    .span_id = "..."
};
```
OTLP Writer
Statistics Monitoring
```cpp
auto stats = writer.get_stats();
std::cout << "Logs exported: " << stats.logs_exported << "\n";
std::cout << "Logs dropped: " << stats.logs_dropped << "\n";
std::cout << "Export successes: " << stats.export_success << "\n";
std::cout << "Export failures: " << stats.export_failures << "\n";
std::cout << "Retries: " << stats.retries << "\n";
```
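A useful derived signal is the drop rate: logs_dropped over everything handed to the writer. A sustained non-zero value means the queue or batch settings need tuning. A small helper (field names follow the stats shown above; the function itself is illustrative):

```cpp
#include <cstdint>

// Fraction of logs dropped out of everything handed to the writer.
double drop_rate(std::uint64_t exported, std::uint64_t dropped) {
    const std::uint64_t total = exported + dropped;
    if (total == 0) {
        return 0.0;  // nothing seen yet: report no drops
    }
    return static_cast<double>(dropped) / static_cast<double>(total);
}
```

For example, 100 drops against 9900 exports is a 1% drop rate, which is usually worth an alert.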
Health Checking
```cpp
if (!writer.is_healthy()) {
    // e.g. raise an alert or fail over to a local file writer
}
```
Force Flush
```cpp
writer.force_export();  // export buffered logs now
writer.flush();         // drain any remaining queued entries
```
Integration Examples
With OpenTelemetry Collector
```yaml
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  jaeger:
    endpoint: jaeger:14250
    tls:
      insecure: true
  loki:
    endpoint: http://loki:3100/loki/api/v1/push

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [loki]
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```
Docker Compose Setup
```yaml
version: '3.8'
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol/config.yaml
    ports:
      - "4317:4317"   # gRPC
      - "4318:4318"   # HTTP

  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686" # UI
      - "14250:14250" # gRPC

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```
W3C Trace Context Parsing
```cpp
#include <optional>
#include <string>

std::optional<otlp::otel_context> parse_traceparent(const std::string& header) {
    // "00-<32 hex trace-id>-<16 hex parent-id>-<2 hex flags>" = 55 chars
    if (header.length() < 55 || header[0] != '0' || header[1] != '0') {
        return std::nullopt;
    }
    return otlp::otel_context{
        .trace_id = header.substr(3, 32),
        .span_id = header.substr(36, 16),
        .trace_flags = header.substr(53, 2)
    };
}
```
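The hard-coded offsets in parse_traceparent come straight from the fixed-width W3C layout: "00-" (3 chars), a 32-char trace-id, "-", a 16-char parent/span-id, "-", and 2 flag chars, 55 characters in all. A quick self-check against a sample header:

```cpp
#include <string>

// Verify that the substr offsets line up with the W3C traceparent layout.
bool offsets_match_sample() {
    // version - trace-id (32) - parent-id (16) - flags (2)
    const std::string h =
        "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01";
    return h.length() == 55 &&
           h.substr(3, 32) == "0af7651916cd43dd8448eb211c80319c" &&
           h.substr(36, 16) == "b7ad6b7169203331" &&
           h.substr(53, 2) == "01";
}
```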
Best Practices
1. Always Set Service Name
```cpp
cfg.service_name = "api-gateway";
cfg.service_version = "1.2.3";
```
2. Use RAII Scope Guards
```cpp
// Preferred: the guard clears the context even on early return or exception
{
    otel_scoped_context guard{ctx};  // guard type name illustrative
    // ... logging ...
}

// Manual alternative: every set() must be paired with a clear()
ctx_storage::set(ctx);
// ... logging ...
ctx_storage::clear();
```
3. Configure Appropriate Batch Sizes
```cpp
// High-throughput services: larger batches, longer flush interval
cfg.max_batch_size = 1000;
cfg.flush_interval = std::chrono::seconds{1};

// Latency-sensitive visibility: smaller batches, shorter interval
cfg.max_batch_size = 100;
cfg.flush_interval = std::chrono::milliseconds{500};
```
4. Handle Collector Unavailability
```cpp
// Buffer locally and keep retrying while the collector is down
cfg.max_queue_size = 50000;
cfg.max_retries = 5;

if (!writer.is_healthy()) {
    // e.g. alert or fall back to a file writer
}
```
5. Include Meaningful Resource Attributes
```cpp
cfg.resource_attributes = {
    {"deployment.environment", "production"},
    {"service.namespace", "payments"},
    {"cloud.provider", "aws"},
    {"cloud.region", "us-east-1"},
    {"k8s.pod.name", get_pod_name()},
    {"k8s.namespace.name", get_namespace()}
};
```
Troubleshooting
Logs Not Appearing in Collector
- Check endpoint URL: Ensure correct port (4318 for HTTP, 4317 for gRPC)
- Verify network connectivity: Test with curl or grpcurl
- Check collector logs: Look for parsing or connection errors
- Verify TLS settings: Match use_tls with collector configuration
High Log Drop Rate
```cpp
// Increase buffering
cfg.max_queue_size = 100000;
cfg.max_batch_size = 2000;

// Make drops visible
auto stats = writer.get_stats();
if (stats.logs_dropped > 0) {
    // emit a metric or warning so sustained drops are noticed
}
```
Connection Timeouts
```cpp
cfg.timeout = std::chrono::seconds{30};
cfg.max_retries = 5;
cfg.retry_delay = std::chrono::seconds{1};
```
Missing Trace Context
- Verify the context is set before logging:

  ```cpp
  log->set_otel_context(ctx);
  ```

- Check thread isolation: the context is thread-local and does not cross threads
- Use scope guards for automatic cleanup
Related Documentation