PACS System 0.1.0
PACS DICOM system library

kcenon::pacs::ai::ai_service_connector Class Reference

Connector for external AI inference services.

#include <ai_service_connector.h>

Public Types

  using status_callback = std::function<void(const inference_status&)>
      Callback type for status updates.

  using completion_callback
      Callback type for completion notification.
Static Public Member Functions

  static auto initialize(const ai_service_config &config) -> Result<std::monostate>
      Initialize the AI service connector.

  static void shutdown()
      Shut down the AI service connector.

  static auto is_initialized() noexcept -> bool
      Check if the connector is initialized.

  static auto request_inference(const inference_request &request) -> Result<std::string>
      Request AI inference for a study.

  static auto check_status(const std::string &job_id) -> Result<inference_status>
      Check the status of an inference job.

  static auto cancel(const std::string &job_id) -> Result<std::monostate>
      Cancel an inference job.

  static auto wait_for_completion(const std::string &job_id, std::chrono::milliseconds timeout = std::chrono::minutes{30}, status_callback callback = nullptr) -> Result<inference_status>
      Wait for a job to complete.

  static auto list_active_jobs() -> Result<std::vector<inference_status>>
      List active inference jobs.

  static auto list_models() -> Result<std::vector<model_info>>
      List available AI models.

  static auto get_model_info(const std::string &model_id) -> Result<model_info>
      Get information about a specific model.

  static auto check_health() -> bool
      Check AI service health.

  static auto get_latency() -> std::optional<std::chrono::milliseconds>
      Get current latency to the AI service.

  static auto get_config() -> const ai_service_config &
      Get the current configuration.

  static auto update_credentials(authentication_type auth_type, const std::string &credentials) -> Result<std::monostate>
      Update authentication credentials.
Private Member Functions

  ai_service_connector() = delete
  ~ai_service_connector() = delete
  ai_service_connector(const ai_service_connector &) = delete
  ai_service_connector &operator=(const ai_service_connector &) = delete

Static Private Attributes

  static std::unique_ptr<impl> pimpl_
Detailed Description

Connector for external AI inference services.

This class provides a unified interface for interacting with external AI inference services, including:

- submitting inference requests for studies and tracking them by job ID
- checking the status of, waiting on, and cancelling inference jobs
- discovering available AI models and querying model metadata
- monitoring service health and latency
- updating authentication credentials at runtime

The connector uses network_system for HTTP communication, and integrates with logger_system for audit logging and with monitoring_system for metrics.
Thread Safety: All methods are thread-safe.
Definition at line 311 of file ai_service_connector.h.
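A typical session runs initialize → request_inference → wait_for_completion → shutdown. The sketch below illustrates that lifecycle; the fields of ai_service_config and inference_request, and the Result<T> accessors (has_value()/value()), are assumptions about types this page does not define, not confirmed API.

```cpp
// Lifecycle sketch (assumed Result<T> accessors and config/request fields).
#include <ai_service_connector.h>

using namespace kcenon::pacs::ai;

int main() {
    ai_service_config config;  // hypothetical: endpoint, auth type, timeouts
    if (!ai_service_connector::initialize(config).has_value())
        return 1;              // initialization failed; nothing else may be called

    inference_request request; // hypothetical: study UID, model id, parameters
    auto job = ai_service_connector::request_inference(request);
    if (job.has_value()) {
        // Blocks up to the default 30-minute timeout.
        auto status = ai_service_connector::wait_for_completion(job.value());
        // ... inspect status on success, or the error on failure/timeout
    }

    ai_service_connector::shutdown();  // cancels pending requests, frees resources
    return 0;
}
```

Since all members are static and the constructors are deleted, the class is used purely as a namespace-like facade; there is no object to construct.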
Member Typedef Documentation

completion_callback

  using kcenon::pacs::ai::ai_service_connector::completion_callback

Callback type for completion notification.

Definition at line 321 of file ai_service_connector.h.

status_callback

  using kcenon::pacs::ai::ai_service_connector::status_callback = std::function<void(const inference_status&)>

Callback type for status updates.

Definition at line 318 of file ai_service_connector.h.
Member Function Documentation

cancel()

  static auto cancel(const std::string &job_id) -> Result<std::monostate>  [static, nodiscard]

Cancel an inference job.

Attempts to cancel a pending or running job. Jobs that have already completed cannot be cancelled.

Parameters
    job_id    The job identifier to cancel
check_health()

  static auto check_health() -> bool  [static, nodiscard]

Check AI service health.
check_status()

  static auto check_status(const std::string &job_id) -> Result<inference_status>  [static, nodiscard]

Check the status of an inference job.

Parameters
    job_id    The job identifier returned from request_inference
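When blocking in wait_for_completion is undesirable, check_status can be polled instead. The sketch below assumes a hypothetical is_terminal() helper on inference_status and the usual Result<T> accessors; neither is confirmed by this page.

```cpp
// Non-blocking polling sketch (is_terminal() is an assumed helper).
#include <ai_service_connector.h>
#include <chrono>
#include <string>
#include <thread>

using namespace kcenon::pacs::ai;

void poll_job(const std::string& job_id) {
    for (;;) {
        auto status = ai_service_connector::check_status(job_id);
        if (!status.has_value())
            break;                          // unknown job id or service error
        if (status.value().is_terminal())   // hypothetical: completed/failed/cancelled
            break;
        std::this_thread::sleep_for(std::chrono::seconds(5));  // back off between polls
    }
}
```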
get_config()

  static auto get_config() -> const ai_service_config &  [static, nodiscard]

Get the current configuration.
get_latency()

  static auto get_latency() -> std::optional<std::chrono::milliseconds>  [static, nodiscard]

Get current latency to the AI service.
get_model_info()

  static auto get_model_info(const std::string &model_id) -> Result<model_info>  [static, nodiscard]

Get information about a specific model.

Parameters
    model_id    The model identifier
initialize()

  static auto initialize(const ai_service_config &config) -> Result<std::monostate>  [static, nodiscard]

Initialize the AI service connector.

Must be called before any other operations. Sets up the HTTP client, configures authentication, and validates the connection.

Parameters
    config    Configuration options
is_initialized()

  static auto is_initialized() noexcept -> bool  [static, nodiscard]

Check if the connector is initialized.
list_active_jobs()

  static auto list_active_jobs() -> Result<std::vector<inference_status>>  [static, nodiscard]

List active inference jobs.

Returns all jobs that are currently pending or running.
list_models()

  static auto list_models() -> Result<std::vector<model_info>>  [static, nodiscard]

List available AI models.
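list_models and get_model_info together support model discovery. The sketch below assumes model_info exposes id and name members and that Result<T> has has_value()/value(); this page does not define either type, so those names are illustrative only.

```cpp
// Model discovery sketch (model_info members id/name are assumed).
#include <ai_service_connector.h>
#include <iostream>

using namespace kcenon::pacs::ai;

void print_models() {
    auto models = ai_service_connector::list_models();
    if (!models.has_value())
        return;                              // service unreachable or not initialized
    for (const auto& m : models.value()) {
        std::cout << m.id << " " << m.name << '\n';   // hypothetical fields
        auto detail = ai_service_connector::get_model_info(m.id);
        // ... inspect detail.value() for version, supported modalities, etc.
    }
}
```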
request_inference()

  static auto request_inference(const inference_request &request) -> Result<std::string>  [static, nodiscard]

Request AI inference for a study.

Submits a study for AI processing and returns a job ID for tracking.

Parameters
    request    Inference request parameters
shutdown()

  static void shutdown()  [static]

Shut down the AI service connector.

Cancels pending requests and releases resources.
update_credentials()

  static auto update_credentials(authentication_type auth_type, const std::string &credentials) -> Result<std::monostate>  [static, nodiscard]

Update authentication credentials.

Parameters
    auth_type      New authentication type
    credentials    New credentials (API key, token, or username:password)
wait_for_completion()

  static auto wait_for_completion(const std::string &job_id, std::chrono::milliseconds timeout = std::chrono::minutes{30}, status_callback callback = nullptr) -> Result<inference_status>  [static, nodiscard]

Wait for a job to complete.

Blocks until the job completes, fails, or times out.

Parameters
    job_id      The job identifier to wait for
    timeout     Maximum time to wait (defaults to 30 minutes)
    callback    Optional callback invoked on status updates
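The optional callback makes it possible to report progress while blocked. The sketch below overrides the default timeout with 5 minutes and passes a lambda as the status_callback; the progress member of inference_status is an assumed field, not confirmed by this page.

```cpp
// Blocking wait with a progress callback and an explicit timeout override
// (inference_status::progress is an assumed field).
#include <ai_service_connector.h>
#include <chrono>
#include <iostream>
#include <string>

using namespace kcenon::pacs::ai;

void wait_with_progress(const std::string& job_id) {
    auto result = ai_service_connector::wait_for_completion(
        job_id,
        std::chrono::minutes{5},                      // override the 30-minute default
        [](const inference_status& s) {
            std::cout << "progress: " << s.progress << '\n';  // hypothetical field
        });
    if (!result.has_value()) {
        // timed out, failed, or the job id was unknown
    }
}
```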
Member Data Documentation

pimpl_

  static std::unique_ptr<impl> ai_service_connector::pimpl_  [static, private]

Definition at line 490 of file ai_service_connector.h.