
AI Copilot

Maturity: Preview

This module is in early access. APIs may change significantly.


Enterprise AI integration with a unified API across multiple providers. Built-in prompt safety and token budget management.

Why Use AI Copilot?

Each LLM provider ships its own SDK, request shapes, and error semantics, so integrating several providers means writing and maintaining provider-specific code for each one. AI Copilot provides a single, unified API for chat completions, embeddings, and structured outputs across all supported providers.
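The idea can be sketched as one small interface with interchangeable implementations. Note that the interface, class, and factory names below are illustrative stand-ins, not the actual PrimusSaaS.AI API:

```typescript
// Hypothetical sketch: one interface, many providers.
interface ChatResult {
  content: string;
  tokensUsed: number;
}

interface AIClient {
  chat(prompt: string): Promise<ChatResult>;
}

// Two stand-in providers behind the same interface. Swapping them is a
// configuration change, not a code change.
class OpenAIStyleClient implements AIClient {
  async chat(prompt: string): Promise<ChatResult> {
    return { content: `openai: ${prompt}`, tokensUsed: prompt.length };
  }
}

class AzureStyleClient implements AIClient {
  async chat(prompt: string): Promise<ChatResult> {
    return { content: `azure: ${prompt}`, tokensUsed: prompt.length };
  }
}

function createClient(provider: "openai" | "azure"): AIClient {
  return provider === "openai" ? new OpenAIStyleClient() : new AzureStyleClient();
}
```

Application code depends only on the interface; the provider choice lives in configuration.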


Supported Providers


Result Object

The IAIClient.ChatAsync() method returns a completion result with the following properties:

| Property | Type | Description |
|----------|------|-------------|
| Content | string | The generated text response |
| Role | ChatRole | Role of the response (assistant, tool, etc.) |
| TokensUsed | int | Total tokens consumed |
| PromptTokens | int | Tokens in the input prompt |
| CompletionTokens | int | Tokens in the generated response |
| Model | string? | The model/deployment used |
| FinishReason | string? | Why generation stopped (stop, length, etc.) |
| ResponseId | string? | Provider response identifier |
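The shape above can be written out as a TypeScript type for reference. The camelCase spelling and the sanity-check helper are illustrative, not part of the library; the helper assumes TokensUsed is the sum of prompt and completion tokens, which is the usual convention but is not stated explicitly above:

```typescript
// Sketch of the documented completion result shape (field names from the
// table above; TypeScript spelling is illustrative).
type ChatRole = "assistant" | "tool" | "system" | "user";

interface ChatCompletionResult {
  content: string;
  role: ChatRole;
  tokensUsed: number;
  promptTokens: number;
  completionTokens: number;
  model?: string;
  finishReason?: string;
  responseId?: string;
}

// Hypothetical helper: check that total usage matches prompt + completion,
// assuming the usual accounting convention.
function checkTokenAccounting(r: ChatCompletionResult): boolean {
  return r.tokensUsed === r.promptTokens + r.completionTokens;
}
```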

Streaming Result

For StreamChatAsync(), each chunk contains:

| Property | Type | Description |
|----------|------|-------------|
| Content | string | Partial text content |
| IsComplete | bool | Whether this is the final chunk |
| FinishReason | string? | Stop reason if complete |
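Consuming a stream of chunks shaped like the table above typically looks like the sketch below. The async generator is a stand-in for StreamChatAsync(), and all names are illustrative:

```typescript
// Chunk shape from the table above (TypeScript spelling is illustrative).
interface ChatChunk {
  content: string;
  isComplete: boolean;
  finishReason?: string;
}

// Stand-in for StreamChatAsync(): yields one chunk per word, marking the
// last chunk complete with a "stop" finish reason.
async function* fakeStream(text: string): AsyncGenerator<ChatChunk> {
  const words = text.split(" ");
  for (let i = 0; i < words.length; i++) {
    const last = i === words.length - 1;
    yield {
      content: words[i] + (last ? "" : " "),
      isComplete: last,
      finishReason: last ? "stop" : undefined,
    };
  }
}

// Accumulate partial content until the final chunk arrives.
async function collect(stream: AsyncGenerator<ChatChunk>): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk.content;
    if (chunk.isComplete) break;
  }
  return full;
}
```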

Embedding Result

For GetEmbeddingsAsync():

| Property | Type | Description |
|----------|------|-------------|
| Embeddings | float[][] | Vector representations |
| Model | string | Embedding model used |
| TokensUsed | int | Tokens consumed |
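A common use for the returned float[][] vectors is semantic search via cosine similarity. The helpers below are a self-contained sketch (not library functions), operating on toy vectors in place of real embeddings:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the index of the document embedding most similar to the query.
function topMatch(query: number[], docs: number[][]): number {
  let best = 0, bestScore = -Infinity;
  docs.forEach((d, i) => {
    const s = cosineSimilarity(query, d);
    if (s > bestScore) { bestScore = s; best = i; }
  });
  return best;
}
```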

Key Features

  • Unified API - Switch providers via configuration, not code
  • Streaming - Real-time token-by-token responses
  • Embeddings - Generate vectors for semantic search
  • Structured Output - Parse JSON responses automatically
  • Prompt Safety - Built-in injection detection
  • Token Budgets - Per-tenant usage limits
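The per-tenant token budget feature can be pictured with a small guard like the one below. The class and its behavior are an illustrative sketch, not the library's actual implementation:

```typescript
// Hypothetical per-tenant token budget guard.
class TokenBudget {
  private used = new Map<string, number>();

  constructor(private limitPerTenant: number) {}

  // Record usage; returns false (recording nothing) when the request
  // would push the tenant over its limit.
  tryConsume(tenantId: string, tokens: number): boolean {
    const current = this.used.get(tenantId) ?? 0;
    if (current + tokens > this.limitPerTenant) return false;
    this.used.set(tenantId, current + tokens);
    return true;
  }

  remaining(tenantId: string): number {
    return this.limitPerTenant - (this.used.get(tenantId) ?? 0);
  }
}
```

A caller would check tryConsume() with the prompt's estimated token count before issuing a completion request, and reconcile with the actual TokensUsed afterward.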

Examples


Quick Install

```shell
# .NET
dotnet add package PrimusSaaS.AI

# Node.js
npm install @primus-saas/ai-client
```

Next Steps

| Want to... | See Guide |
|------------|-----------|
| Get started quickly | Integration Guide |
| Advanced configuration | Advanced |