Thread System 0.3.1
High-performance C++20 thread pool with work stealing and DAG scheduling
kcenon::thread::diagnostics::thread_pool_diagnostics Class Reference

Comprehensive diagnostics API for thread pool monitoring.

#include <thread_pool_diagnostics.h>


Public Member Functions

 thread_pool_diagnostics (thread_pool &pool, const diagnostics_config &config={})
 Constructs diagnostics for a thread pool.
 
 ~thread_pool_diagnostics ()
 Destructor.
 
 thread_pool_diagnostics (const thread_pool_diagnostics &)=delete
 
thread_pool_diagnostics & operator= (const thread_pool_diagnostics &)=delete
 
 thread_pool_diagnostics (thread_pool_diagnostics &&)=delete
 
thread_pool_diagnostics & operator= (thread_pool_diagnostics &&)=delete
 
auto dump_thread_states () const -> std::vector< thread_info >
 Gets current state of all worker threads.
 
auto format_thread_dump () const -> std::string
 Gets formatted thread dump (human-readable).
 
auto get_active_jobs () const -> std::vector< job_info >
 Gets currently executing jobs.
 
auto get_pending_jobs (std::size_t limit=100) const -> std::vector< job_info >
 Gets pending jobs in queue.
 
auto get_recent_jobs (std::size_t limit=100) const -> std::vector< job_info >
 Gets recent completed/failed jobs.
 
void record_job_completion (const job_info &info)
 Records a job completion for history tracking.
 
auto detect_bottlenecks () const -> bottleneck_report
 Analyzes the thread pool for bottlenecks.
 
auto health_check () const -> health_status
 Performs comprehensive health check.
 
auto is_healthy () const -> bool
 Quick check if pool is healthy.
 
void enable_tracing (bool enable, std::size_t history_size=1000)
 Enables or disables job execution tracing.
 
auto is_tracing_enabled () const -> bool
 Checks if tracing is enabled.
 
void add_event_listener (std::shared_ptr< execution_event_listener > listener)
 Adds an event listener.
 
void remove_event_listener (std::shared_ptr< execution_event_listener > listener)
 Removes an event listener.
 
void record_event (const job_execution_event &event)
 Records a job execution event.
 
auto get_recent_events (std::size_t limit=100) const -> std::vector< job_execution_event >
 Gets recent execution events.
 
auto to_json () const -> std::string
 Exports diagnostics as JSON.
 
auto to_string () const -> std::string
 Exports diagnostics as formatted string.
 
auto to_prometheus () const -> std::string
 Exports diagnostics as Prometheus-compatible metrics.
 
auto get_config () const -> diagnostics_config
 Gets the current configuration.
 
void set_config (const diagnostics_config &config)
 Updates the configuration.
 

Private Member Functions

auto get_worker_info (const thread_worker &worker, std::size_t index) const -> thread_info
 Gets thread info for a single worker.
 
void notify_listeners (const job_execution_event &event)
 Notifies all event listeners.
 
void generate_recommendations (bottleneck_report &report) const
 Generates recommendations for a bottleneck.
 
auto check_worker_health () const -> component_health
 Checks worker component health.
 
auto check_queue_health () const -> component_health
 Checks queue component health.
 
auto check_metrics_health (double avg_latency_ms, double success_rate) const -> component_health
 Checks metrics component health.
 

Private Attributes

thread_pool & pool_
 Reference to the monitored thread pool.
 
diagnostics_config config_
 Configuration for diagnostics.
 
std::atomic< bool > tracing_enabled_ {false}
 Whether event tracing is enabled.
 
std::mutex events_mutex_
 Mutex for event history access.
 
std::deque< job_execution_event > event_history_
 Ring buffer for event history.
 
std::mutex jobs_mutex_
 Mutex for recent jobs access.
 
std::deque< job_info > recent_jobs_
 Ring buffer for recent job completions.
 
std::mutex listeners_mutex_
 Mutex for event listeners.
 
std::vector< std::shared_ptr< execution_event_listener > > listeners_
 Event listeners.
 
std::atomic< std::uint64_t > next_event_id_ {0}
 Counter for event IDs.
 
std::chrono::steady_clock::time_point start_time_
 Time when the pool was started.
 

Detailed Description

Comprehensive diagnostics API for thread pool monitoring.

Provides thread dump capabilities, job tracing, bottleneck detection, and health check integration for thread pools.

Design Principles

  • Non-intrusive: Minimal overhead when not actively used
  • Thread-safe: All methods can be called from any thread
  • Read-only: Never modifies thread pool state
  • Snapshot-based: Returns point-in-time snapshots

Thread Safety

All public methods are thread-safe and can be called concurrently. Internal state is protected by appropriate synchronization.

Performance Considerations

  • Thread dump: O(n) where n is worker count
  • Job inspection: O(1) for active jobs, O(n) for history
  • Bottleneck detection: O(n) where n is worker count
  • Health check: O(n) including all component checks
  • Event tracing: < 1μs overhead per event when enabled

Usage Example

auto pool = std::make_shared<thread_pool>("MyPool");
pool->start();
// Get thread dump
std::cout << pool->diagnostics().format_thread_dump() << std::endl;
// Check for bottlenecks
auto report = pool->diagnostics().detect_bottlenecks();
if (report.has_bottleneck) {
LOG_WARN("Bottleneck: {}", report.description);
}
// Health check for HTTP endpoint
auto health = pool->diagnostics().health_check();
return http_response(health.http_status_code(), health.to_json());

Definition at line 142 of file thread_pool_diagnostics.h.

Constructor & Destructor Documentation

◆ thread_pool_diagnostics() [1/3]

kcenon::thread::diagnostics::thread_pool_diagnostics::thread_pool_diagnostics ( thread_pool & pool,
const diagnostics_config & config = {} )
explicit

Constructs diagnostics for a thread pool.

Parameters
poolReference to the thread pool to diagnose.
configOptional configuration for diagnostics.

Definition at line 21 of file thread_pool_diagnostics.cpp.

23 : pool_(pool)
24 , config_(config)
25 , tracing_enabled_(config.enable_tracing)
26 , start_time_(std::chrono::steady_clock::now())
27 {
28 }

◆ ~thread_pool_diagnostics()

kcenon::thread::diagnostics::thread_pool_diagnostics::~thread_pool_diagnostics ( )
default

Destructor.

◆ thread_pool_diagnostics() [2/3]

kcenon::thread::diagnostics::thread_pool_diagnostics::thread_pool_diagnostics ( const thread_pool_diagnostics & )
delete

◆ thread_pool_diagnostics() [3/3]

kcenon::thread::diagnostics::thread_pool_diagnostics::thread_pool_diagnostics ( thread_pool_diagnostics && )
delete

Member Function Documentation

◆ add_event_listener()

void kcenon::thread::diagnostics::thread_pool_diagnostics::add_event_listener ( std::shared_ptr< execution_event_listener > listener)

Adds an event listener.

Parameters
listenerListener to add.

Definition at line 620 of file thread_pool_diagnostics.cpp.

622 {
623 if (!listener) return;
624
625 std::lock_guard<std::mutex> lock(listeners_mutex_);
626 listeners_.push_back(std::move(listener));
627 }

References listeners_, and listeners_mutex_.

◆ check_metrics_health()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::check_metrics_health ( double avg_latency_ms,
double success_rate ) const -> component_health
nodiscard private

Checks metrics component health.

Parameters
avg_latency_msCurrent average latency.
success_rateCurrent success rate.
Returns
Component health status for metrics.

Definition at line 546 of file thread_pool_diagnostics.cpp.

548 {
549 component_health health;
550 health.name = "metrics";
551
552 health.details["avg_latency_ms"] = std::format("{:.3f}", avg_latency_ms);
553 health.details["success_rate"] = std::format("{:.4f}", success_rate);
554
555 const auto& thresholds = config_.health_thresholds_config;
556
557 // Check success rate first (more critical)
558 if (success_rate < thresholds.unhealthy_success_rate)
559 {
560 health.state = health_state::unhealthy;
561 health.message = "Success rate critically low: " +
562 std::format("{:.1f}%", success_rate * 100.0);
563 }
564 else if (success_rate < thresholds.min_success_rate)
565 {
566 health.state = health_state::degraded;
567 health.message = "Success rate below threshold: " +
568 std::format("{:.1f}%", success_rate * 100.0);
569 }
570 // Check latency
571 else if (avg_latency_ms > thresholds.degraded_latency_ms)
572 {
573 health.state = health_state::degraded;
574 health.message = "High average latency: " +
575 std::format("{:.2f}ms", avg_latency_ms);
576 }
577 else if (avg_latency_ms > thresholds.max_healthy_latency_ms)
578 {
579 health.state = health_state::degraded;
580 health.message = "Elevated latency: " +
581 std::format("{:.2f}ms", avg_latency_ms);
582 }
583 else
584 {
585 health.state = health_state::healthy;
586 health.message = "Performance metrics within normal range";
587 }
588
589 return health;
590 }

References kcenon::thread::diagnostics::degraded, kcenon::thread::diagnostics::component_health::details, kcenon::thread::diagnostics::healthy, kcenon::thread::diagnostics::component_health::message, kcenon::thread::diagnostics::component_health::name, kcenon::thread::diagnostics::component_health::state, and kcenon::thread::diagnostics::unhealthy.

Referenced by health_check().


◆ check_queue_health()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::check_queue_health ( ) const -> component_health
nodiscard private

Checks queue component health.

Returns
Component health status for queue.

Definition at line 491 of file thread_pool_diagnostics.cpp.

492 {
493 component_health health;
494 health.name = "queue";
495
496 auto depth = pool_.get_pending_task_count();
497 health.details["depth"] = std::to_string(depth);
498
499 // Get queue capacity and calculate saturation
500 auto queue = pool_.get_job_queue();
501 double saturation = 0.0;
502 if (queue)
503 {
504 auto max_size = queue->get_max_size();
505 if (max_size.has_value() && max_size.value() > 0)
506 {
507 health.details["capacity"] = std::to_string(max_size.value());
508 saturation = static_cast<double>(depth) / static_cast<double>(max_size.value());
509 health.details["saturation"] = std::format("{:.2f}", saturation);
510 }
511 }
512
513 // Note: Job rejection tracking requires backpressure queue
514 // For basic queue, assume no rejections
515 std::uint64_t rejected = 0;
516 health.details["rejected"] = std::to_string(rejected);
517
518 const auto& thresholds = config_.health_thresholds_config;
519
520 if (saturation >= thresholds.queue_saturation_critical)
521 {
522 health.state = health_state::unhealthy;
523 health.message = "Queue at critical capacity";
524 }
525 else if (saturation >= thresholds.queue_saturation_warning || rejected > 0)
526 {
527 health.state = health_state::degraded;
528 if (rejected > 0)
529 {
530 health.message = std::to_string(rejected) + " jobs rejected due to backpressure";
531 }
532 else
533 {
534 health.message = "Queue saturation above warning threshold";
535 }
536 }
537 else
538 {
539 health.state = health_state::healthy;
540 health.message = "Queue operational";
541 }
542
543 return health;
544 }

References config_, kcenon::thread::diagnostics::degraded, kcenon::thread::diagnostics::component_health::details, kcenon::thread::thread_pool::get_job_queue(), kcenon::thread::thread_pool::get_pending_task_count(), kcenon::thread::diagnostics::diagnostics_config::health_thresholds_config, kcenon::thread::diagnostics::healthy, kcenon::thread::diagnostics::component_health::message, kcenon::thread::diagnostics::component_health::name, pool_, kcenon::thread::diagnostics::component_health::state, and kcenon::thread::diagnostics::unhealthy.

Referenced by health_check().


◆ check_worker_health()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::check_worker_health ( ) const -> component_health
nodiscard private

Checks worker component health.

Returns
Component health status for workers.

Definition at line 450 of file thread_pool_diagnostics.cpp.

451 {
452 component_health health;
453 health.name = "workers";
454
455 std::size_t total;
456 {
457 std::scoped_lock<std::mutex> lock(pool_.workers_mutex_);
458 total = pool_.workers_.size();
459 }
 460 auto active = pool_.get_active_worker_count();
 461 auto idle = pool_.get_idle_worker_count();
 462
463 health.details["total"] = std::to_string(total);
464 health.details["active"] = std::to_string(active);
465 health.details["idle"] = std::to_string(idle);
466
467 if (!pool_.is_running())
468 {
469 health.state = health_state::unhealthy;
470 health.message = "Thread pool is not running";
471 }
472 else if (total == 0)
473 {
474 health.state = health_state::unhealthy;
475 health.message = "No workers available";
476 }
477 else if (active == total)
478 {
479 health.state = health_state::degraded;
480 health.message = "All workers are busy";
481 }
482 else
483 {
484 health.state = health_state::healthy;
485 health.message = std::to_string(idle) + " workers available";
486 }
487
488 return health;
489 }

References kcenon::thread::diagnostics::active, kcenon::thread::diagnostics::degraded, kcenon::thread::diagnostics::component_health::details, kcenon::thread::thread_pool::get_active_worker_count(), kcenon::thread::thread_pool::get_idle_worker_count(), kcenon::thread::diagnostics::healthy, kcenon::thread::diagnostics::idle, kcenon::thread::thread_pool::is_running(), kcenon::thread::diagnostics::component_health::message, kcenon::thread::diagnostics::component_health::name, pool_, kcenon::thread::diagnostics::component_health::state, kcenon::thread::diagnostics::unhealthy, kcenon::thread::thread_pool::workers_, and kcenon::thread::thread_pool::workers_mutex_.

Referenced by health_check().


◆ detect_bottlenecks()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::detect_bottlenecks ( ) const -> bottleneck_report
nodiscard

Analyzes the thread pool for bottlenecks.

Returns
Bottleneck analysis report.

Definition at line 161 of file thread_pool_diagnostics.cpp.

162 {
163 bottleneck_report report;
164
165 // Gather metrics
166 auto metrics_snap = pool_.metrics().snapshot();
167 std::size_t worker_count;
168 {
169 std::scoped_lock<std::mutex> lock(pool_.workers_mutex_);
170 worker_count = pool_.workers_.size();
171 }
172 auto active_count = pool_.get_active_worker_count();
173 auto idle_count = pool_.get_idle_worker_count();
 174 auto queue_depth = pool_.get_pending_task_count();
 175
176 report.queue_depth = queue_depth;
177 report.idle_workers = idle_count;
178 report.total_workers = worker_count;
179
180 // Calculate queue saturation
181 auto queue = pool_.get_job_queue();
182 if (queue)
183 {
184 auto max_size = queue->get_max_size();
185 if (max_size.has_value() && max_size.value() > 0)
186 {
187 report.queue_saturation = static_cast<double>(queue_depth) /
188 static_cast<double>(max_size.value());
189 }
190 else if (queue_depth > 0)
191 {
192 // For unbounded queues, use heuristic: saturation based on queue depth vs workers
193 // High queue depth relative to workers indicates potential saturation
194 report.queue_saturation = std::min(1.0,
195 static_cast<double>(queue_depth) / static_cast<double>(worker_count * 10));
196 }
197 }
198
199 // Calculate worker utilization (instantaneous)
200 if (worker_count > 0)
201 {
202 report.worker_utilization = static_cast<double>(active_count) /
203 static_cast<double>(worker_count);
204 }
205
206 // Get per-worker utilization for variance calculation
207 auto thread_states = pool_.collect_worker_diagnostics();
208 if (!thread_states.empty())
209 {
210 // Calculate mean utilization from worker stats
211 double sum_utilization = 0.0;
212 for (const auto& t : thread_states)
213 {
214 sum_utilization += t.utilization;
215 }
216 double mean_utilization = sum_utilization / static_cast<double>(thread_states.size());
217
218 // Calculate variance
219 double variance_sum = 0.0;
220 for (const auto& t : thread_states)
221 {
222 double diff = t.utilization - mean_utilization;
223 variance_sum += diff * diff;
224 }
225 report.utilization_variance = variance_sum / static_cast<double>(thread_states.size());
226
227 // Use mean utilization from actual worker stats if available
228 if (mean_utilization > 0.0)
229 {
230 report.worker_utilization = mean_utilization;
231 }
232 }
233
234 // Calculate average wait time from metrics
235 auto total_jobs = metrics_snap.tasks_executed + metrics_snap.tasks_failed;
236 if (total_jobs > 0)
237 {
238 // Estimate wait time from idle time (approximation)
239 auto avg_idle_ns = metrics_snap.total_idle_time_ns / total_jobs;
240 report.avg_wait_time_ms = static_cast<double>(avg_idle_ns) / 1e6;
241
242 // Calculate estimated backlog time
243 // Average execution time per job
244 double avg_exec_time_ms = 0.0;
245 if (metrics_snap.total_busy_time_ns > 0 && total_jobs > 0)
246 {
247 avg_exec_time_ms = static_cast<double>(metrics_snap.total_busy_time_ns) /
248 static_cast<double>(total_jobs) / 1e6;
249 }
250
251 // Estimated time to clear backlog = (queue_depth * avg_exec_time) / active_workers
252 if (active_count > 0 && avg_exec_time_ms > 0)
253 {
254 report.estimated_backlog_time_ms = static_cast<std::size_t>(
255 (static_cast<double>(queue_depth) * avg_exec_time_ms) /
256 static_cast<double>(active_count));
257 }
258 else if (worker_count > 0 && avg_exec_time_ms > 0)
259 {
260 report.estimated_backlog_time_ms = static_cast<std::size_t>(
261 (static_cast<double>(queue_depth) * avg_exec_time_ms) /
262 static_cast<double>(worker_count));
263 }
264 }
265
266 // Jobs rejected tracking not available in basic metrics
267 report.jobs_rejected = 0;
268
269 // Detect bottleneck type (ordered by severity)
270 // 1. Queue full - most critical
271 if (report.queue_saturation > 0.95 || report.jobs_rejected > 0)
272 {
273 report.has_bottleneck = true;
274 report.type = bottleneck_type::queue_full;
275 report.description = "Queue is at or near capacity, jobs are being rejected";
276 }
277 // 2. Worker starvation - high utilization with growing backlog
278 else if (report.worker_utilization > 0.95 && queue_depth > worker_count * 2)
279 {
280 report.has_bottleneck = true;
 281 report.type = bottleneck_type::worker_starvation;
282 report.description = "Not enough workers to handle the workload";
283 }
284 // 3. Slow consumer - high wait time with high utilization
285 else if (report.avg_wait_time_ms > config_.wait_time_threshold_ms &&
286 report.worker_utilization > config_.utilization_high_threshold)
287 {
288 report.has_bottleneck = true;
289 report.type = bottleneck_type::slow_consumer;
290 report.description = "Workers cannot keep up with job submission rate";
291 }
292 // 4. Uneven distribution - high variance in worker utilization
293 else if (report.utilization_variance > 0.1 && worker_count > 1)
294 {
295 // Variance > 0.1 means standard deviation > ~0.32 which is significant
296 report.has_bottleneck = true;
 297 report.type = bottleneck_type::uneven_distribution;
298 report.description = "Work is not evenly distributed across workers";
299 }
300 // 5. Lock contention - high wait time but low utilization (workers waiting on locks)
301 else if (report.avg_wait_time_ms > config_.wait_time_threshold_ms * 2 &&
302 report.worker_utilization < 0.5 && active_count > 0)
303 {
304 report.has_bottleneck = true;
 305 report.type = bottleneck_type::lock_contention;
306 report.description = "High wait times with low utilization suggests lock contention";
307 }
308 // 6. Memory pressure - check queue memory usage
309 else if (queue)
310 {
311 auto mem_stats = queue->get_memory_stats();
312 // Consider memory pressure if queue uses more than 100MB
313 constexpr std::size_t memory_threshold = 100 * 1024 * 1024;
314 if (mem_stats.queue_size_bytes > memory_threshold)
315 {
316 report.has_bottleneck = true;
 317 report.type = bottleneck_type::memory_pressure;
318 report.description = "Excessive memory usage in job queue";
319 }
320 }
321
322 // Generate recommendations if bottleneck detected
323 if (report.has_bottleneck)
324 {
 325 generate_recommendations(report);
326 }
327
328 return report;
329 }

References kcenon::thread::diagnostics::bottleneck_report::avg_wait_time_ms, kcenon::thread::thread_pool::collect_worker_diagnostics(), config_, kcenon::thread::diagnostics::bottleneck_report::description, kcenon::thread::diagnostics::bottleneck_report::estimated_backlog_time_ms, generate_recommendations(), kcenon::thread::thread_pool::get_active_worker_count(), kcenon::thread::thread_pool::get_idle_worker_count(), kcenon::thread::thread_pool::get_job_queue(), kcenon::thread::thread_pool::get_pending_task_count(), kcenon::thread::diagnostics::bottleneck_report::has_bottleneck, kcenon::thread::diagnostics::bottleneck_report::idle_workers, kcenon::thread::diagnostics::bottleneck_report::jobs_rejected, kcenon::thread::diagnostics::lock_contention, kcenon::thread::diagnostics::memory_pressure, kcenon::thread::thread_pool::metrics(), pool_, kcenon::thread::diagnostics::bottleneck_report::queue_depth, kcenon::thread::queue_depth, kcenon::thread::diagnostics::queue_full, kcenon::thread::diagnostics::bottleneck_report::queue_saturation, kcenon::thread::diagnostics::slow_consumer, kcenon::thread::metrics::ThreadPoolMetrics::snapshot(), kcenon::thread::diagnostics::bottleneck_report::total_workers, kcenon::thread::diagnostics::bottleneck_report::type, kcenon::thread::diagnostics::uneven_distribution, kcenon::thread::diagnostics::diagnostics_config::utilization_high_threshold, kcenon::thread::diagnostics::bottleneck_report::utilization_variance, kcenon::thread::diagnostics::diagnostics_config::wait_time_threshold_ms, kcenon::thread::diagnostics::worker_starvation, kcenon::thread::diagnostics::bottleneck_report::worker_utilization, kcenon::thread::thread_pool::workers_, and kcenon::thread::thread_pool::workers_mutex_.

Referenced by to_json().


◆ dump_thread_states()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::dump_thread_states ( ) const -> std::vector<thread_info>
nodiscard

Gets current state of all worker threads.

Returns
Vector of thread information.

Thread-safe: Can be called from any thread.

Definition at line 36 of file thread_pool_diagnostics.cpp.

37 {
38 // Delegate to thread_pool's collect_worker_diagnostics for actual worker info
 39 return pool_.collect_worker_diagnostics();
 40 }

References kcenon::thread::thread_pool::collect_worker_diagnostics(), and pool_.

Referenced by format_thread_dump(), and get_active_jobs().


◆ enable_tracing()

void kcenon::thread::diagnostics::thread_pool_diagnostics::enable_tracing ( bool enable,
std::size_t history_size = 1000 )

Enables or disables job execution tracing.

Parameters
enableEnable or disable tracing.
history_sizeNumber of events to retain.

Definition at line 596 of file thread_pool_diagnostics.cpp.

597 {
598 tracing_enabled_.store(enable, std::memory_order_relaxed);
599
600 if (enable)
601 {
602 std::lock_guard<std::mutex> lock(events_mutex_);
603 // Clear and resize if needed
604 while (event_history_.size() > history_size)
605 {
606 event_history_.pop_front();
607 }
608 }
609
610 // Update config
611 config_.event_history_size = history_size;
612 config_.enable_tracing = enable;
613 }

References config_, kcenon::thread::diagnostics::diagnostics_config::enable_tracing, event_history_, kcenon::thread::diagnostics::diagnostics_config::event_history_size, events_mutex_, and tracing_enabled_.

◆ format_thread_dump()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::format_thread_dump ( ) const -> std::string
nodiscard

Gets formatted thread dump (human-readable).

Returns
Multi-line string with thread dump.

Output format:

=== Thread Pool Dump: MyPool ===
Time: 2025-01-08T10:30:00Z
Workers: 8, Active: 5, Idle: 3
Worker-0 [tid:12345] ACTIVE (2.5s)
Current Job: ProcessOrder#1234 (running 150ms)
Jobs: 1523 completed, 2 failed
Utilization: 87.3%
...

Definition at line 42 of file thread_pool_diagnostics.cpp.

43 {
44 std::ostringstream oss;
45
46 auto threads = dump_thread_states();
47 auto now = std::chrono::system_clock::now();
48 auto time_t = std::chrono::system_clock::to_time_t(now);
49
50 std::size_t worker_count;
51 {
52 std::scoped_lock<std::mutex> lock(pool_.workers_mutex_);
53 worker_count = pool_.workers_.size();
54 }
55 auto active_count = pool_.get_active_worker_count();
56 auto idle_count = pool_.get_idle_worker_count();
57
58 // Header
59 oss << "=== Thread Pool Dump: " << pool_.to_string() << " ===\n";
60 oss << "Time: " << std::put_time(std::gmtime(&time_t), "%Y-%m-%dT%H:%M:%SZ") << "\n";
61 oss << "Workers: " << worker_count << ", Active: " << active_count
62 << ", Idle: " << idle_count << "\n\n";
63
64 // Worker details
65 for (const auto& t : threads)
66 {
67 auto state_duration = t.state_duration();
68 auto duration_sec = std::chrono::duration<double>(state_duration).count();
69
70 oss << t.thread_name << " [tid:" << t.thread_id << "] "
71 << worker_state_to_string(t.state)
72 << " (" << std::fixed << std::setprecision(1) << duration_sec << "s)\n";
73
74 if (t.current_job.has_value())
75 {
76 const auto& job = t.current_job.value();
77 auto exec_time_ms = std::chrono::duration<double, std::milli>(
78 job.execution_time).count();
79 oss << " Current Job: " << job.job_name << "#" << job.job_id
80 << " (running " << std::fixed << std::setprecision(0)
81 << exec_time_ms << "ms)\n";
82 }
83
84 oss << " Jobs: " << t.jobs_completed << " completed, "
85 << t.jobs_failed << " failed\n";
86 oss << " Utilization: " << std::fixed << std::setprecision(1)
87 << (t.utilization * 100.0) << "%\n\n";
88 }
89
90 return oss.str();
91 }

References dump_thread_states(), kcenon::thread::thread_pool::get_active_worker_count(), kcenon::thread::thread_pool::get_idle_worker_count(), pool_, kcenon::thread::thread_pool::to_string(), kcenon::thread::diagnostics::worker_state_to_string(), kcenon::thread::thread_pool::workers_, and kcenon::thread::thread_pool::workers_mutex_.

Referenced by to_string().


◆ generate_recommendations()

void kcenon::thread::diagnostics::thread_pool_diagnostics::generate_recommendations ( bottleneck_report & report) const
private

Generates recommendations for a bottleneck.

Parameters
reportThe bottleneck report to add recommendations to.

Definition at line 331 of file thread_pool_diagnostics.cpp.

332 {
333 switch (report.type)
334 {
 335 case bottleneck_type::queue_full:
336 report.recommendations.push_back("Consider increasing queue capacity");
337 report.recommendations.push_back("Enable backpressure with adaptive policy");
338 report.recommendations.push_back("Add more worker threads if CPU permits");
339 break;
340
 341 case bottleneck_type::slow_consumer:
342 report.recommendations.push_back("Add more worker threads");
343 report.recommendations.push_back("Optimize job execution time");
344 report.recommendations.push_back("Consider job batching for small tasks");
345 break;
346
 347 case bottleneck_type::worker_starvation:
348 report.recommendations.push_back("Increase worker thread count");
349 report.recommendations.push_back("Consider scaling based on hardware cores");
350 report.recommendations.push_back("Enable autoscaling for dynamic adjustment");
351 break;
352
 353 case bottleneck_type::uneven_distribution:
354 report.recommendations.push_back("Enable work stealing if not already");
355 report.recommendations.push_back("Review job distribution patterns");
356 report.recommendations.push_back("Consider using priority-based scheduling");
357 break;
358
 359 case bottleneck_type::lock_contention:
360 report.recommendations.push_back("Review shared resource access patterns");
361 report.recommendations.push_back("Consider using lock-free data structures");
362 report.recommendations.push_back("Reduce critical section scope");
363 report.recommendations.push_back("Use finer-grained locking strategies");
364 break;
365
 366 case bottleneck_type::memory_pressure:
367 report.recommendations.push_back("Reduce queue capacity or enable backpressure");
368 report.recommendations.push_back("Optimize job object size");
369 report.recommendations.push_back("Add more workers to process jobs faster");
370 report.recommendations.push_back("Consider job prioritization to clear backlog");
371 break;
372
 373 case bottleneck_type::none:
374 default:
375 break;
376 }
377 }

References kcenon::thread::diagnostics::lock_contention, kcenon::thread::diagnostics::memory_pressure, kcenon::thread::diagnostics::none, kcenon::thread::diagnostics::queue_full, kcenon::thread::diagnostics::bottleneck_report::recommendations, kcenon::thread::diagnostics::slow_consumer, kcenon::thread::diagnostics::bottleneck_report::type, kcenon::thread::diagnostics::uneven_distribution, and kcenon::thread::diagnostics::worker_starvation.

Referenced by detect_bottlenecks().


◆ get_active_jobs()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::get_active_jobs ( ) const -> std::vector<job_info>
nodiscard

Gets currently executing jobs.

Returns
Vector of active job information.

Definition at line 97 of file thread_pool_diagnostics.cpp.

98 {
99 std::vector<job_info> result;
100
101 // Get thread states which include current job info
102 auto threads = dump_thread_states();
103
104 for (const auto& thread : threads)
105 {
106 if (thread.current_job.has_value())
107 {
108 result.push_back(thread.current_job.value());
109 }
110 }
111
112 return result;
113 }

References dump_thread_states().

Here is the call graph for this function:

◆ get_config()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::get_config ( ) const -> diagnostics_config
nodiscard

Gets the current configuration.

Returns
Current diagnostics configuration.

Definition at line 758 of file thread_pool_diagnostics.cpp.

759 {
760 return config_;
761 }

References config_.

◆ get_pending_jobs()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::get_pending_jobs ( std::size_t limit = 100) const -> std::vector<job_info>
nodiscard

Gets pending jobs in queue.

Parameters
limit  Maximum number to return (0 = all).
Returns
Vector of pending job information.

Definition at line 115 of file thread_pool_diagnostics.cpp.

117 {
118 // Delegate to job_queue's inspect_pending_jobs
119 auto queue = pool_.get_job_queue();
120 if (!queue)
121 {
122 return {};
123 }
124
125 return queue->inspect_pending_jobs(limit);
126 }

◆ get_recent_events()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::get_recent_events ( std::size_t limit = 100) const -> std::vector<job_execution_event>
nodiscard

Gets recent execution events.

Parameters
limit  Maximum events to return.
Returns
Vector of recent events.

Definition at line 680 of file thread_pool_diagnostics.cpp.

682 {
683 std::lock_guard<std::mutex> lock(events_mutex_);
684
685 std::vector<job_execution_event> result;
686 auto count = std::min(limit, event_history_.size());
687 result.reserve(count);
688
689 auto it = event_history_.rbegin();
690 for (std::size_t i = 0; i < count && it != event_history_.rend(); ++i, ++it)
691 {
692 result.push_back(*it);
693 }
694
695 return result;
696 }

◆ get_recent_jobs()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::get_recent_jobs ( std::size_t limit = 100) const -> std::vector<job_info>
nodiscard

Gets recent completed/failed jobs.

Parameters
limit  Maximum number to return.
Returns
Vector of recent job information.

Definition at line 128 of file thread_pool_diagnostics.cpp.

130 {
131 std::lock_guard<std::mutex> lock(jobs_mutex_);
132
133 std::vector<job_info> result;
134 auto count = std::min(limit, recent_jobs_.size());
135 result.reserve(count);
136
137 auto it = recent_jobs_.rbegin();
138 for (std::size_t i = 0; i < count && it != recent_jobs_.rend(); ++i, ++it)
139 {
140 result.push_back(*it);
141 }
142
143 return result;
144 }

◆ get_worker_info()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::get_worker_info ( const thread_worker & worker,
std::size_t index ) const -> thread_info
nodiscardprivate

Gets thread info for a single worker.

Parameters
worker  The worker to query.
index  Worker index in the pool.
Returns
Thread information.

Definition at line 769 of file thread_pool_diagnostics.cpp.

771 {
772 thread_info info;
773 info.worker_id = worker.get_worker_id();
774 info.thread_name = "Worker-" + std::to_string(index);
775 info.state = worker.is_idle() ? worker_state::idle : worker_state::active;
776 info.state_since = std::chrono::steady_clock::now();
777 return info;
778 }

References kcenon::thread::diagnostics::active, kcenon::thread::diagnostics::idle, kcenon::thread::info, and kcenon::thread::diagnostics::thread_info::worker_id.

◆ health_check()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::health_check ( ) const -> health_status
nodiscard

Performs comprehensive health check.

Returns
Health status with all component states.

Definition at line 383 of file thread_pool_diagnostics.cpp.

384 {
385 health_status status;
386 status.check_time = std::chrono::steady_clock::now();
387
388 // Calculate uptime
389 auto uptime = status.check_time - start_time_;
390 status.uptime_seconds = std::chrono::duration<double>(uptime).count();
391
392 // Get metrics
393 auto metrics_snap = pool_.metrics().snapshot();
394 status.total_jobs_processed = metrics_snap.tasks_executed +
395 metrics_snap.tasks_failed;
396
397 if (status.total_jobs_processed > 0)
398 {
399 status.success_rate = static_cast<double>(metrics_snap.tasks_executed) /
400 static_cast<double>(status.total_jobs_processed);
401
402 // Calculate average latency (total execution time / total jobs)
403 // busy_time represents total execution time across all workers
404 double total_exec_time_ms = static_cast<double>(metrics_snap.total_busy_time_ns) / 1e6;
405 status.avg_latency_ms = total_exec_time_ms /
406 static_cast<double>(status.total_jobs_processed);
407 }
408
409 // Worker stats
410 {
411 std::scoped_lock<std::mutex> lock(pool_.workers_mutex_);
412 status.total_workers = pool_.workers_.size();
413 }
414 status.active_workers = pool_.get_active_worker_count();
415 status.queue_depth = pool_.get_pending_task_count();
416
417 // Get queue capacity
418 auto queue = pool_.get_job_queue();
419 if (queue)
420 {
421 auto max_size = queue->get_max_size();
422 if (max_size.has_value())
423 {
424 status.queue_capacity = max_size.value();
425 }
426 }
427
428 // Check components
429 status.components.push_back(check_worker_health());
430 status.components.push_back(check_queue_health());
431 status.components.push_back(check_metrics_health(status.avg_latency_ms,
432 status.success_rate));
433
434 // Calculate overall status
435 status.calculate_overall_status();
436
437 return status;
438 }

References kcenon::thread::diagnostics::health_status::active_workers, kcenon::thread::diagnostics::health_status::avg_latency_ms, kcenon::thread::diagnostics::health_status::calculate_overall_status(), check_metrics_health(), check_queue_health(), kcenon::thread::diagnostics::health_status::check_time, check_worker_health(), kcenon::thread::diagnostics::health_status::components, kcenon::thread::thread_pool::get_active_worker_count(), kcenon::thread::thread_pool::get_job_queue(), kcenon::thread::thread_pool::get_pending_task_count(), kcenon::thread::thread_pool::metrics(), pool_, kcenon::thread::diagnostics::health_status::queue_capacity, kcenon::thread::diagnostics::health_status::queue_depth, kcenon::thread::metrics::ThreadPoolMetrics::snapshot(), start_time_, kcenon::thread::diagnostics::health_status::success_rate, kcenon::thread::diagnostics::health_status::total_jobs_processed, kcenon::thread::diagnostics::health_status::total_workers, kcenon::thread::diagnostics::health_status::uptime_seconds, kcenon::thread::thread_pool::workers_, and kcenon::thread::thread_pool::workers_mutex_.

Referenced by to_json(), and to_prometheus().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_healthy()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::is_healthy ( ) const -> bool
nodiscard

Quick check if pool is healthy.

Returns
true if pool is operational.

Definition at line 440 of file thread_pool_diagnostics.cpp.

441 {
442 std::size_t worker_count;
443 {
444 std::scoped_lock<std::mutex> lock(pool_.workers_mutex_);
445 worker_count = pool_.workers_.size();
446 }
447 return pool_.is_running() && worker_count > 0;
448 }

References kcenon::thread::thread_pool::is_running(), pool_, kcenon::thread::thread_pool::workers_, and kcenon::thread::thread_pool::workers_mutex_.

Here is the call graph for this function:

◆ is_tracing_enabled()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::is_tracing_enabled ( ) const -> bool
nodiscard

Checks if tracing is enabled.

Returns
true if tracing is enabled.

Definition at line 615 of file thread_pool_diagnostics.cpp.

616 {
617 return tracing_enabled_.load(std::memory_order_relaxed);
618 }

References tracing_enabled_.

◆ notify_listeners()

void kcenon::thread::diagnostics::thread_pool_diagnostics::notify_listeners ( const job_execution_event & event)
private

Notifies all event listeners.

Parameters
event  The event to broadcast.

Definition at line 663 of file thread_pool_diagnostics.cpp.

664 {
665 std::vector<std::shared_ptr<execution_event_listener>> listeners_copy;
666 {
667 std::lock_guard<std::mutex> lock(listeners_mutex_);
668 listeners_copy = listeners_;
669 }
670
671 for (const auto& listener : listeners_copy)
672 {
673 if (listener)
674 {
675 listener->on_event(event);
676 }
677 }
678 }

References listeners_, and listeners_mutex_.

Referenced by record_event().

Here is the caller graph for this function:

◆ operator=() [1/2]

thread_pool_diagnostics & kcenon::thread::diagnostics::thread_pool_diagnostics::operator= ( const thread_pool_diagnostics & )
delete

◆ operator=() [2/2]

thread_pool_diagnostics & kcenon::thread::diagnostics::thread_pool_diagnostics::operator= ( thread_pool_diagnostics && )
delete

◆ record_event()

void kcenon::thread::diagnostics::thread_pool_diagnostics::record_event ( const job_execution_event & event)

Records a job execution event.

Parameters
event  The event to record.

Called internally by the thread pool on job lifecycle events.

Definition at line 642 of file thread_pool_diagnostics.cpp.

643 {
644 if (!tracing_enabled_.load(std::memory_order_relaxed))
645 {
646 return;
647 }
648
649 // Store in history
650 {
651 std::lock_guard<std::mutex> lock(events_mutex_);
652 event_history_.push_back(event);
653 if (event_history_.size() > config_.event_history_size)
654 {
655 event_history_.pop_front();
656 }
657 }
658
659 // Notify listeners
660 notify_listeners(event);
661 }

References config_, event_history_, kcenon::thread::diagnostics::diagnostics_config::event_history_size, events_mutex_, notify_listeners(), and tracing_enabled_.

Here is the call graph for this function:

◆ record_job_completion()

void kcenon::thread::diagnostics::thread_pool_diagnostics::record_job_completion ( const job_info & info)

Records a job completion for history tracking.

Parameters
info  The job information to record.

Called internally by the thread pool when jobs complete.

Definition at line 146 of file thread_pool_diagnostics.cpp.

147 {
148 std::lock_guard<std::mutex> lock(jobs_mutex_);
149
150 recent_jobs_.push_back(info);
151 if (recent_jobs_.size() > config_.recent_jobs_capacity)
152 {
153 recent_jobs_.pop_front();
154 }
155 }

References config_, kcenon::thread::info, jobs_mutex_, recent_jobs_, and kcenon::thread::diagnostics::diagnostics_config::recent_jobs_capacity.

◆ remove_event_listener()

void kcenon::thread::diagnostics::thread_pool_diagnostics::remove_event_listener ( std::shared_ptr< execution_event_listener > listener)

Removes an event listener.

Parameters
listener  Listener to remove.

Definition at line 629 of file thread_pool_diagnostics.cpp.

631 {
632 if (!listener) return;
633
634 std::lock_guard<std::mutex> lock(listeners_mutex_);
635 auto it = std::find(listeners_.begin(), listeners_.end(), listener);
636 if (it != listeners_.end())
637 {
638 listeners_.erase(it);
639 }
640 }

References listeners_, and listeners_mutex_.

◆ set_config()

void kcenon::thread::diagnostics::thread_pool_diagnostics::set_config ( const diagnostics_config & config)

Updates the configuration.

Parameters
config  New configuration to apply.

Definition at line 763 of file thread_pool_diagnostics.cpp.

764 {
765 config_ = config;
766 tracing_enabled_.store(config.enable_tracing, std::memory_order_relaxed);
767 }

References config_, kcenon::thread::diagnostics::diagnostics_config::enable_tracing, and tracing_enabled_.

◆ to_json()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::to_json ( ) const -> std::string
nodiscard

Exports diagnostics as JSON.

Returns
JSON string with all diagnostic data.

Definition at line 702 of file thread_pool_diagnostics.cpp.

703 {
704 std::ostringstream oss;
705 oss << "{\n";
706
707 // Health status
708 auto health = health_check();
709 oss << " \"health\": {\n";
710 oss << " \"status\": \"" << health_state_to_string(health.overall_status) << "\",\n";
711 oss << " \"message\": \"" << health.status_message << "\",\n";
712 oss << " \"uptime_seconds\": " << std::fixed << std::setprecision(2)
713 << health.uptime_seconds << ",\n";
714 oss << " \"total_jobs_processed\": " << health.total_jobs_processed << ",\n";
715 oss << " \"success_rate\": " << std::fixed << std::setprecision(4)
716 << health.success_rate << "\n";
717 oss << " },\n";
718
719 // Workers
720 oss << " \"workers\": {\n";
721 oss << " \"total\": " << health.total_workers << ",\n";
722 oss << " \"active\": " << health.active_workers << ",\n";
723 oss << " \"idle\": " << (health.total_workers - health.active_workers) << "\n";
724 oss << " },\n";
725
726 // Queue
727 oss << " \"queue\": {\n";
728 oss << " \"depth\": " << health.queue_depth << "\n";
729 oss << " },\n";
730
731 // Bottleneck
732 auto bottleneck = detect_bottlenecks();
733 oss << " \"bottleneck\": {\n";
734 oss << " \"detected\": " << (bottleneck.has_bottleneck ? "true" : "false") << ",\n";
735 oss << " \"type\": \"" << bottleneck_type_to_string(bottleneck.type) << "\",\n";
736 oss << " \"severity\": \"" << bottleneck.severity_string() << "\"\n";
737 oss << " }\n";
738
739 oss << "}";
740 return oss.str();
741 }

References kcenon::thread::diagnostics::bottleneck_type_to_string(), detect_bottlenecks(), health_check(), and kcenon::thread::diagnostics::health_state_to_string().

Here is the call graph for this function:

◆ to_prometheus()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::to_prometheus ( ) const -> std::string
nodiscard

Exports diagnostics as Prometheus-compatible metrics.

Returns
Prometheus exposition format string.

Produces metrics suitable for scraping by Prometheus or compatible monitoring systems. Includes health status, worker metrics, queue metrics, and job statistics.

Definition at line 748 of file thread_pool_diagnostics.cpp.

749 {
750 auto health = health_check();
751 return health.to_prometheus(pool_.to_string());
752 }

References health_check(), pool_, and kcenon::thread::thread_pool::to_string().

Here is the call graph for this function:

◆ to_string()

auto kcenon::thread::diagnostics::thread_pool_diagnostics::to_string ( ) const -> std::string
nodiscard

Exports diagnostics as formatted string.

Returns
Human-readable string.

Definition at line 743 of file thread_pool_diagnostics.cpp.

744 {
745 return format_thread_dump();
746 }

References format_thread_dump().

Here is the call graph for this function:

Member Data Documentation

◆ config_

diagnostics_config kcenon::thread::diagnostics::thread_pool_diagnostics::config_
private

◆ event_history_

std::deque<job_execution_event> kcenon::thread::diagnostics::thread_pool_diagnostics::event_history_
private

Ring buffer for event history.

Definition at line 366 of file thread_pool_diagnostics.h.

Referenced by enable_tracing(), and record_event().

◆ events_mutex_

std::mutex kcenon::thread::diagnostics::thread_pool_diagnostics::events_mutex_
mutableprivate

Mutex for event history access.

Definition at line 361 of file thread_pool_diagnostics.h.

Referenced by enable_tracing(), and record_event().

◆ jobs_mutex_

std::mutex kcenon::thread::diagnostics::thread_pool_diagnostics::jobs_mutex_
mutableprivate

Mutex for recent jobs access.

Definition at line 371 of file thread_pool_diagnostics.h.

Referenced by record_job_completion().

◆ listeners_

std::vector<std::shared_ptr<execution_event_listener> > kcenon::thread::diagnostics::thread_pool_diagnostics::listeners_
private

Event listeners.

Definition at line 386 of file thread_pool_diagnostics.h.

Referenced by add_event_listener(), notify_listeners(), and remove_event_listener().

◆ listeners_mutex_

std::mutex kcenon::thread::diagnostics::thread_pool_diagnostics::listeners_mutex_
mutableprivate

Mutex for event listeners.

Definition at line 381 of file thread_pool_diagnostics.h.

Referenced by add_event_listener(), notify_listeners(), and remove_event_listener().

◆ next_event_id_

std::atomic<std::uint64_t> kcenon::thread::diagnostics::thread_pool_diagnostics::next_event_id_ {0}
private

Counter for event IDs.

Definition at line 391 of file thread_pool_diagnostics.h.


◆ pool_

thread_pool& kcenon::thread::diagnostics::thread_pool_diagnostics::pool_
private

◆ recent_jobs_

std::deque<job_info> kcenon::thread::diagnostics::thread_pool_diagnostics::recent_jobs_
private

Ring buffer for recent job completions.

Definition at line 376 of file thread_pool_diagnostics.h.

Referenced by record_job_completion().

◆ start_time_

std::chrono::steady_clock::time_point kcenon::thread::diagnostics::thread_pool_diagnostics::start_time_
private

Time when the pool was started.

Definition at line 396 of file thread_pool_diagnostics.h.

Referenced by health_check().

◆ tracing_enabled_

std::atomic<bool> kcenon::thread::diagnostics::thread_pool_diagnostics::tracing_enabled_ {false}
private

Whether event tracing is enabled.

Definition at line 356 of file thread_pool_diagnostics.h.


Referenced by enable_tracing(), is_tracing_enabled(), record_event(), and set_config().


The documentation for this class was generated from the following files: