Integration Guide
Get AI chat completions and embeddings working in your .NET API in under 5 minutes.
The Primus AI SDK runs entirely within your application: no prompts or responses are transmitted to Primus servers, and all API calls go directly to your configured AI provider.
Providers
The AI module supports multiple providers. Choose the provider that matches your infrastructure:
Provider 1: Azure OpenAI
Step 1: Install Package
```bash
dotnet add package PrimusSaaS.AI
```
Step 2: Configure in Program.cs
```csharp
using Primus.AI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers(); // required for app.MapControllers() below
builder.Services.AddPrimusAI(ai =>
{
    ai.UseAzureOpenAI(opts =>
    {
        opts.Endpoint = builder.Configuration["AzureOpenAI:Endpoint"];
        opts.ApiKey = builder.Configuration["AzureOpenAI:ApiKey"];
        opts.DefaultDeployment = builder.Configuration["AzureOpenAI:DefaultDeployment"] ?? "gpt-4o";
    });
});

var app = builder.Build();
app.MapControllers();
app.Run();
```
Step 3: Configure appsettings.json
Add configuration to your appsettings.json:
```json
{
  "AzureOpenAI": {
    "Endpoint": "https://your-resource.openai.azure.com/",
    "ApiKey": "your-api-key",
    "DefaultDeployment": "gpt-4o"
  }
}
```
How to Get Azure OpenAI Configuration Values
Azure Portal:
- Go to portal.azure.com
- Navigate to your Azure OpenAI resource
- Under Resource Management, click Keys and Endpoint
- Copy the KEY 1 (this is your `ApiKey`)
- Copy the Endpoint (this is your `Endpoint`)
- Go to Model Deployments to find your deployment name (e.g., `gpt-4o`)
Important: Never commit API keys to source control. Use User Secrets or Key Vault in production.
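For local development, the ASP.NET Core user-secrets store keeps keys out of `appsettings.json` entirely. The commands below assume the same `AzureOpenAI:*` configuration keys used in this guide and are run from the project directory:

```bash
# Initialize the user-secrets store for this project (adds a UserSecretsId to the .csproj)
dotnet user-secrets init

# Store the values outside the repository; the configuration system
# merges them over appsettings.json at runtime
dotnet user-secrets set "AzureOpenAI:Endpoint" "https://your-resource.openai.azure.com/"
dotnet user-secrets set "AzureOpenAI:ApiKey" "your-api-key"
```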
Step 4: Create AI Controller
Create Controllers/AIController.cs:
```csharp
using Microsoft.AspNetCore.Mvc;
using Primus.AI;
using Primus.AI.Abstractions;

namespace YourApp.Controllers;

[ApiController]
[Route("api/[controller]")]
public class AIController : ControllerBase
{
    private readonly IAIClient _aiClient;
    private readonly ILogger<AIController> _logger;

    public AIController(IAIClient aiClient, ILogger<AIController> logger)
    {
        _aiClient = aiClient;
        _logger = logger;
    }

    /// <summary>
    /// Chat completion endpoint
    /// </summary>
    [HttpPost("chat")]
    public async Task<IActionResult> Chat([FromBody] ChatRequest request)
    {
        try
        {
            var response = await _aiClient.ChatAsync(new ChatCompletionRequest
            {
                Messages =
                [
                    ChatMessage.System("You are a helpful assistant."),
                    ChatMessage.User(request.Message)
                ]
            });

            return Ok(new { response.Content, response.TokensUsed });
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error getting completion");
            return StatusCode(500, new { error = ex.Message });
        }
    }

    /// <summary>
    /// Streaming chat completion
    /// </summary>
    [HttpPost("chat/stream")]
    public async Task ChatStream([FromBody] ChatRequest request)
    {
        Response.ContentType = "text/event-stream";

        var streamRequest = new ChatCompletionRequest
        {
            Messages =
            [
                ChatMessage.System("You are a helpful assistant."),
                ChatMessage.User(request.Message)
            ]
        };

        await foreach (var chunk in _aiClient.StreamChatAsync(streamRequest))
        {
            if (string.IsNullOrWhiteSpace(chunk.Content))
            {
                continue;
            }

            await Response.WriteAsync($"data: {chunk.Content}\n\n");
            await Response.Body.FlushAsync();
        }
    }
}

public record ChatRequest(string Message);
```
Step 5: Test Endpoints
Test Chat Completion:
```bash
curl -X POST http://localhost:5000/api/ai/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```
Expected Response:
```json
{
  "content": "The capital of France is Paris.",
  "tokensUsed": 42
}
```
Provider 2: GitHub Models
Step 1: Install Package
```bash
dotnet add package PrimusSaaS.AI
```
Step 2: Configure in Program.cs
```csharp
using Primus.AI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers(); // required for app.MapControllers() below
builder.Services.AddPrimusAI(ai =>
{
    ai.UseGitHubModels(opts =>
    {
        opts.Token = builder.Configuration["GitHub:Token"];
        opts.DefaultModel = "gpt-4o";
    });
});

var app = builder.Build();
app.MapControllers();
app.Run();
```
Step 3: Configure appsettings.json
Add configuration to your appsettings.json:
```json
{
  "GitHub": {
    "Token": "ghp_..."
  }
}
```
How to Get GitHub Token
GitHub Settings:
- Go to github.com/settings/tokens
- Generate a new Personal Access Token (Classic) or Fine-grained token
- Ensure it has the access scope for models if required (read access is usually enough for basic public models)
- Copy the token (starts with `ghp_`)
Steps 4 & 5: Create Controller and Test
Reuse the controller and test commands from the Azure OpenAI provider above; the `IAIClient` interface is the same regardless of provider.
Configuration Reference
Common Options
| Option | Type | Default | Description |
|---|---|---|---|
| `MaxTokens` | int | 4096 | Default maximum tokens |
| `Temperature` | float | 0.7 | Default response randomness |
| `TopP` | float? | null | Default top-p sampling |
| `PresencePenalty` | float? | null | Default presence penalty |
| `FrequencyPenalty` | float? | null | Default frequency penalty |
| `SystemPrompt` | string? | null | Default system prompt |
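Assuming these common options are exposed on the provider options object alongside the connection settings (an assumption to verify against the SDK's IntelliSense; this guide only shows `Endpoint`, `ApiKey`, and `DefaultDeployment` being set), tightening the defaults might look like:

```csharp
builder.Services.AddPrimusAI(ai =>
{
    ai.UseAzureOpenAI(opts =>
    {
        opts.Endpoint = builder.Configuration["AzureOpenAI:Endpoint"];
        opts.ApiKey = builder.Configuration["AzureOpenAI:ApiKey"];
        opts.DefaultDeployment = "gpt-4o";
        opts.MaxTokens = 2048;    // hypothetical: cap response length below the 4096 default
        opts.Temperature = 0.2f;  // hypothetical: more deterministic output than the 0.7 default
    });
});
```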
Azure OpenAI Options
| Option | Type | Description |
|---|---|---|
| `Endpoint` | string | Your Azure resource endpoint URL |
| `ApiKey` | string | Resource key 1 or 2 |
| `DefaultDeployment` | string | Name of your deployment in Azure AI Studio |
GitHub Models Options
| Option | Type | Description |
|---|---|---|
| `Token` | string | GitHub personal access token |
| `DefaultModel` | string | Default model to use |
| `BaseUrl` | string | GitHub Models API base URL |
Streaming
Stream responses token-by-token for low-latency UIs.
```csharp
var request = new ChatCompletionRequest
{
    Messages =
    [
        ChatMessage.System("You are a helpful assistant."),
        ChatMessage.User("Summarize the latest status report.")
    ]
};

await foreach (var chunk in aiClient.StreamChatAsync(request))
{
    Console.Write(chunk.Content);
}
```
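Since `StreamChatAsync` is consumed with `await foreach`, it returns an `IAsyncEnumerable<T>`, so the standard BCL `WithCancellation` extension can abort a stream mid-flight. A sketch (this assumes the SDK method itself takes no dedicated cancellation parameter; check its actual signature):

```csharp
// In a controller action, HttpContext.RequestAborted fires when the client disconnects,
// so the loop stops generating tokens nobody will read.
var ct = HttpContext.RequestAborted;

await foreach (var chunk in aiClient.StreamChatAsync(request).WithCancellation(ct))
{
    Console.Write(chunk.Content);
}
```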
Embeddings
Generate embeddings for search or clustering workflows.
```csharp
var response = await aiClient.GetEmbeddingsAsync(new EmbeddingsRequest
{
    Inputs = ["Quarterly revenue trends for SaaS products"],
    Model = "text-embedding-3-small"
});

var vector = response.Embeddings.Count > 0 ? response.Embeddings[0] : null;
```
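Embedding vectors are typically compared with cosine similarity: a score near 1.0 means the texts are semantically close. A minimal helper using only the BCL (assumes each embedding is a `float[]` and both vectors have the same length):

```csharp
static double CosineSimilarity(float[] a, float[] b)
{
    double dot = 0, magA = 0, magB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];      // dot product
        magA += a[i] * a[i];     // squared magnitude of a
        magB += b[i] * b[i];     // squared magnitude of b
    }
    return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
}
```

For a simple search workflow, embed your documents once, store the vectors, then rank them by similarity against the embedded query.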
Prompt Safety
Enable prompt injection detection to block suspicious inputs.
```csharp
builder.Services.AddPrimusAI(ai =>
{
    ai.UseAzureOpenAI(opts => { /* config */ });
    ai.EnablePromptInjectionDetection(opts =>
    {
        opts.BlockSuspiciousPrompts = true;
        opts.LogDetections = true;
    });
});
```
Examples
Example 1: Multi-Turn Conversation
```csharp
var request = new ChatCompletionRequest
{
    Messages =
    [
        ChatMessage.System("You are a helpful assistant."),
        ChatMessage.User("What is the capital of France?"),
        ChatMessage.Assistant("The capital of France is Paris."),
        ChatMessage.User("What's its population?")
    ]
};

var response = await aiClient.ChatAsync(request);
```
Example 2: Prompt Safety
With injection detection enabled, check user input before it reaches the model (`detector` here stands for the SDK's injection detector, resolved from DI):

```csharp
var result = await detector.DetectAsync(userInput);
if (result.IsInjectionDetected)
{
    return BadRequest("Potential prompt injection detected");
}
```
Troubleshooting
Issue: 401 Unauthorized
Error: Access denied due to invalid subscription key
Solution:
- Check that your API key is correct in `appsettings.json`
- Verify you copied the full key string without spaces
- Ensure your Azure resource is active, or that your GitHub token has access to models
Issue: 404 Not Found (Azure)
Error: Resource not found
Solution:
- Verify that `DefaultDeployment` matches exactly what is in Azure AI Studio
- Check that the `Endpoint` URL is correct and includes `https://`