Integration Guide

Get AI chat completions and embeddings working in your .NET API in under 5 minutes.

Complete Data Isolation

Primus AI SDK runs entirely within your application. No prompts or responses are transmitted to Primus servers. All API calls go directly to your configured AI provider.

Providers

The AI module supports multiple providers. Choose the provider that matches your infrastructure:

Provider 1: Azure OpenAI

Step 1: Install Package

dotnet add package PrimusSaaS.AI

Step 2: Configure in Program.cs

using Primus.AI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

builder.Services.AddPrimusAI(ai =>
{
    ai.UseAzureOpenAI(opts =>
    {
        opts.Endpoint = builder.Configuration["AzureOpenAI:Endpoint"];
        opts.ApiKey = builder.Configuration["AzureOpenAI:ApiKey"];
        opts.DefaultDeployment = builder.Configuration["AzureOpenAI:DefaultDeployment"] ?? "gpt-4o";
    });
});

var app = builder.Build();
app.MapControllers();
app.Run();

Step 3: Configure appsettings.json

Add configuration to your appsettings.json:

{
  "AzureOpenAI": {
    "Endpoint": "https://your-resource.openai.azure.com/",
    "ApiKey": "your-api-key",
    "DefaultDeployment": "gpt-4o"
  }
}

How to Get Azure OpenAI Configuration Values

Azure Portal:

  1. Go to portal.azure.com
  2. Navigate to your Azure OpenAI resource
  3. Under Resource Management, click Keys and Endpoint
  4. Copy the KEY 1 (this is your ApiKey)
  5. Copy the Endpoint (this is your Endpoint)
  6. Go to Model Deployments to find your deployment name (e.g., gpt-4o)

Important: Never commit API keys to source control. Use User Secrets or Key Vault in production.

Step 4: Create AI Controller

Create Controllers/AIController.cs:

using Microsoft.AspNetCore.Mvc;
using Primus.AI;
using Primus.AI.Abstractions;

namespace YourApp.Controllers;

[ApiController]
[Route("api/[controller]")]
public class AIController : ControllerBase
{
    private readonly IAIClient _aiClient;
    private readonly ILogger<AIController> _logger;

    public AIController(IAIClient aiClient, ILogger<AIController> logger)
    {
        _aiClient = aiClient;
        _logger = logger;
    }

    /// <summary>
    /// Chat completion endpoint.
    /// </summary>
    [HttpPost("chat")]
    public async Task<IActionResult> Chat([FromBody] ChatRequest request)
    {
        try
        {
            var response = await _aiClient.ChatAsync(new ChatCompletionRequest
            {
                Messages =
                [
                    ChatMessage.System("You are a helpful assistant."),
                    ChatMessage.User(request.Message)
                ]
            });
            return Ok(new { response.Content, response.TokensUsed });
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error getting completion");
            return StatusCode(500, new { error = ex.Message });
        }
    }

    /// <summary>
    /// Streaming chat completion (Server-Sent Events).
    /// </summary>
    [HttpPost("chat/stream")]
    public async Task ChatStream([FromBody] ChatRequest request)
    {
        Response.ContentType = "text/event-stream";
        var streamRequest = new ChatCompletionRequest
        {
            Messages =
            [
                ChatMessage.System("You are a helpful assistant."),
                ChatMessage.User(request.Message)
            ]
        };

        await foreach (var chunk in _aiClient.StreamChatAsync(streamRequest))
        {
            if (string.IsNullOrWhiteSpace(chunk.Content))
            {
                continue;
            }

            await Response.WriteAsync($"data: {chunk.Content}\n\n");
            await Response.Body.FlushAsync();
        }
    }
}

public record ChatRequest(string Message);
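The streaming endpoint writes each chunk as a `data: ...\n\n` frame, but the SSE format requires every line of a multi-line payload to carry its own `data:` prefix, so a chunk containing a newline would break the frame. A small formatting helper sketch (the `Sse.FormatEvent` name is ours, not part of the SDK):

```csharp
public static class Sse
{
    // Formats a text payload as a Server-Sent Events frame.
    // Multi-line payloads get one "data:" line per line, per the SSE spec;
    // a blank line terminates the event.
    public static string FormatEvent(string payload)
    {
        var sb = new System.Text.StringBuilder();
        foreach (var line in payload.Split('\n'))
        {
            sb.Append("data: ").Append(line.TrimEnd('\r')).Append('\n');
        }
        sb.Append('\n');
        return sb.ToString();
    }
}
```

With this helper, the write in `ChatStream` becomes `await Response.WriteAsync(Sse.FormatEvent(chunk.Content));`.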

Step 5: Test Endpoints

Test Chat Completion:

curl -X POST http://localhost:5000/api/ai/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'

Expected Response:

{
  "content": "The capital of France is Paris.",
  "tokensUsed": 42
}
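On the client side, that payload can be deserialized into a small record with `System.Text.Json`. The `ChatReply` record below is illustrative, not an SDK type; `PropertyNameCaseInsensitive` is needed because the endpoint returns camelCase property names:

```csharp
using System.Text.Json;

public record ChatReply(string Content, int TokensUsed);

public static class ChatReplyParser
{
    private static readonly JsonSerializerOptions Options =
        new() { PropertyNameCaseInsensitive = true };

    // Parses the JSON body returned by the /api/ai/chat endpoint.
    public static ChatReply Parse(string json) =>
        JsonSerializer.Deserialize<ChatReply>(json, Options)!;
}
```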

Provider 2: GitHub Models

Step 1: Install Package

dotnet add package PrimusSaaS.AI

Step 2: Configure in Program.cs

using Primus.AI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

builder.Services.AddPrimusAI(ai =>
{
    ai.UseGitHubModels(opts =>
    {
        opts.Token = builder.Configuration["GitHub:Token"];
        opts.DefaultModel = "gpt-4o";
    });
});

var app = builder.Build();
app.MapControllers();
app.Run();

Step 3: Configure appsettings.json

Add configuration to your appsettings.json:

{
  "GitHub": {
    "Token": "ghp_..."
  }
}

How to Get GitHub Token

GitHub Settings:

  1. Go to github.com/settings/tokens
  2. Generate a new Personal Access Token (classic) or fine-grained token
  3. For fine-grained tokens, grant read access to the Models permission; classic tokens typically need no additional scopes for public models
  4. Copy the token (classic tokens start with ghp_, fine-grained tokens with github_pat_)
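A trivial format check at startup can catch a truncated or mis-pasted token before the first API call fails. The helper below is ours, not part of the SDK, and the prefixes reflect GitHub's current token conventions:

```csharp
public static class GitHubTokenCheck
{
    // Returns true if the value looks like a GitHub personal access token
    // (classic "ghp_..." or fine-grained "github_pat_...").
    public static bool LooksLikeToken(string? token) =>
        !string.IsNullOrWhiteSpace(token) &&
        (token.StartsWith("ghp_") || token.StartsWith("github_pat_"));
}
```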

Step 4 & 5

Steps 4 and 5 are identical to the Azure OpenAI provider; reuse the controller and test commands from above.


Configuration Reference

Common Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| MaxTokens | int | 4096 | Default maximum tokens |
| Temperature | float | 0.7 | Default response randomness |
| TopP | float? | null | Default top-p sampling |
| PresencePenalty | float? | null | Default presence penalty |
| FrequencyPenalty | float? | null | Default frequency penalty |
| SystemPrompt | string? | null | Default system prompt |

Azure OpenAI Options

| Option | Type | Description |
| --- | --- | --- |
| Endpoint | string | Your Azure resource endpoint URL |
| ApiKey | string | Resource key 1 or 2 |
| DefaultDeployment | string | Name of your deployment in Azure AI Studio |

GitHub Models Options

| Option | Type | Description |
| --- | --- | --- |
| Token | string | GitHub personal access token |
| DefaultModel | string | Default model to use |
| BaseUrl | string | GitHub Models API base URL |

Streaming

Stream responses token-by-token for low-latency UIs.

var request = new ChatCompletionRequest
{
    Messages =
    [
        ChatMessage.System("You are a helpful assistant."),
        ChatMessage.User("Summarize the latest status report.")
    ]
};

await foreach (var chunk in aiClient.StreamChatAsync(request))
{
    Console.Write(chunk.Content);
}

Embeddings

Generate embeddings for search or clustering workflows.

var response = await aiClient.GetEmbeddingsAsync(new EmbeddingsRequest
{
    Inputs = ["Quarterly revenue trends for SaaS products"],
    Model = "text-embedding-3-small"
});

var vector = response.Embeddings.Count > 0 ? response.Embeddings[0] : null;
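For search or clustering, embedding vectors are typically compared by cosine similarity. A minimal helper, pure math and independent of the SDK:

```csharp
using System;

public static class VectorMath
{
    // Cosine similarity between two equal-length vectors: dot(a,b) / (|a| * |b|).
    // Returns 1.0 for parallel vectors, 0.0 for orthogonal ones.
    public static double CosineSimilarity(float[] a, float[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same length.");

        double dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
    }
}
```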

Prompt Safety

Enable prompt injection detection to block suspicious inputs.

builder.Services.AddPrimusAI(ai =>
{
    ai.UseAzureOpenAI(opts => { /* config */ });

    ai.EnablePromptInjectionDetection(opts =>
    {
        opts.BlockSuspiciousPrompts = true;
        opts.LogDetections = true;
    });
});

Examples

Example 1: Multi-Turn Conversation

var request = new ChatCompletionRequest
{
    Messages =
    [
        ChatMessage.System("You are a helpful assistant."),
        ChatMessage.User("What is the capital of France?"),
        ChatMessage.Assistant("The capital of France is Paris."),
        ChatMessage.User("What's its population?")
    ]
};

var response = await aiClient.ChatAsync(request);
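In a multi-turn conversation, the message list grows with every exchange and eventually exceeds the model's context window. A common pattern is to keep the system prompt plus only the most recent N messages. A sketch using a local `Turn` record (standing in for the SDK's message type):

```csharp
using System.Collections.Generic;
using System.Linq;

public record Turn(string Role, string Content);

public static class HistoryTrimmer
{
    // Keeps the first system message (if any) plus the last maxTurns
    // non-system messages, preserving order.
    public static List<Turn> Trim(List<Turn> history, int maxTurns)
    {
        var system = history.FirstOrDefault(t => t.Role == "system");
        var recent = history.Where(t => t.Role != "system")
                            .TakeLast(maxTurns)
                            .ToList();
        if (system is not null) recent.Insert(0, system);
        return recent;
    }
}
```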

Example 2: Prompt Safety

// detector is the SDK's prompt injection detector, resolved from DI
var result = await detector.DetectAsync(userInput);
if (result.IsInjectionDetected)
{
    return BadRequest("Potential prompt injection detected");
}
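The detector's internals aren't documented here. As a rough illustration of the kind of check such detectors layer (a sketch only, far weaker than a real classifier, and not the SDK's implementation):

```csharp
using System;
using System.Linq;

public static class NaiveInjectionCheck
{
    // Phrases commonly seen in prompt injection attempts. Illustrative only.
    private static readonly string[] Markers =
    {
        "ignore previous instructions",
        "disregard the system prompt",
        "you are now"
    };

    // Flags input containing any known injection phrase, case-insensitively.
    public static bool IsSuspicious(string input) =>
        Markers.Any(m => input.Contains(m, StringComparison.OrdinalIgnoreCase));
}
```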

Troubleshooting

Issue: 401 Unauthorized

Error: Access denied due to invalid subscription key

Solution:

  1. Check that the ApiKey value in appsettings.json is correct
  2. Verify you copied the full key string with no leading or trailing spaces
  3. Ensure your Azure resource is active, or that your GitHub token is valid and has access to models

Issue: 404 Not Found (Azure)

Error: Resource not found

Solution:

  1. Verify that DefaultDeployment exactly matches the deployment name in Azure AI Studio
  2. Check that the Endpoint URL is correct and starts with https://
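When chasing 401 and 404 errors, it can help to sanity-check the configured endpoint before calling the service at all. A sketch (the host suffix below reflects the usual Azure OpenAI endpoint shape; sovereign clouds use different domains):

```csharp
using System;

public static class AzureConfigCheck
{
    // Returns true if the endpoint parses as an absolute https URL
    // on the usual *.openai.azure.com host.
    public static bool IsPlausibleEndpoint(string? endpoint) =>
        Uri.TryCreate(endpoint, UriKind.Absolute, out var uri) &&
        uri.Scheme == Uri.UriSchemeHttps &&
        uri.Host.EndsWith(".openai.azure.com", StringComparison.OrdinalIgnoreCase);
}
```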