AI Governance¶
Initialization¶
- To generate a long-term token granting full access to AI Governance, log in with a super admin account and retrieve your JWT token from the access-token cookie located on the API domain.
- Generate an access token using the following curl command:
curl 'https://<API_URL>/v2/user/accessTokens' \
-X POST \
-H 'content-type: application/json' \
-H "Authorization: Bearer <YOUR_JWT>" \
--data-raw '{"name":"ai-governance","expiresAt":"2027-01-24T11:53:12.116Z"}'
If you don't have remote access to the API host, you can open a shell inside the prismeai-crawler deployment of the apps namespace in order to curl from inside the network.
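On a Kubernetes install, for instance, this might look like the following (the deployment and namespace names are the defaults and may differ on your cluster):

```sh
# Open an interactive shell inside the prismeai-crawler pod
# (assumes the deployment lives in the "apps" namespace)
kubectl exec -n apps -it deploy/prismeai-crawler -- sh

# Then run the token-creation curl from inside the network
curl 'https://<API_URL>/v2/user/accessTokens' \
  -X POST \
  -H 'content-type: application/json' \
  -H "Authorization: Bearer <YOUR_JWT>" \
  --data-raw '{"name":"ai-governance","expiresAt":"2027-01-24T11:53:12.116Z"}'
```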
- Open AI Governance.
- Edit the secrets and paste the access token into the adminAccessToken secret.
- Copy the workspace ID from the browser address bar.
- Add the following environment variable to the CONSOLE service:
WORKSPACE_OPS_MANAGER=https://<API_URL>/v2/workspaces/<WORKSPACE ID>/webhooks/
If installed using the prismeai-core Helm chart, the same value can be passed directly to the workspace_ops_manager field in the prismeai-console service's values.
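As a sketch, the corresponding Helm values override might look like this (the exact nesting depends on your chart version, so treat the structure as an assumption):

```yaml
# values.yaml override for the prismeai-core chart (structure assumed)
prismeai-console:
  workspace_ops_manager: https://<API_URL>/v2/workspaces/<WORKSPACE ID>/webhooks/
```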
Personalization¶
The AI Governance product offers a wide range of interface customization options to enhance user experience:
- Open the AI Governance product from the products menu on the left.
- Navigate to Interface customization to access customization options.
Privacy Policy, Help, Feedback, and Change Log¶
Prisme.ai allows seamless integration of Privacy Policy, Help, Feedback, and Change Log pages to ensure an optimal user experience.
Privacy Policy¶
To configure your own Privacy Policy page URL:
- Inside Interface customization, open the Links tab.
- Update the Privacy Policy field with your URL and click Save.
Tips
You can create a custom Privacy Policy page in the AI Studio:
1. Open AI Builder.
2. Go to the AI Governance workspace.
3. Create a new page and name it accordingly.
4. Click + to add a block.
5. Select the RichText block.
6. Open the RichText block and write your Privacy Policy.
7. Save the page.
8. Click the Share button next to Duplicate and See code actions.
9. Enable Public access and copy the page URL.
10. Return to Interface customization and paste the newly created Privacy Policy URL.
Help, Feedback, and Change Log¶
To configure custom URLs for Help, Feedback, and Change Log pages:
- Inside Interface customization, open the Links tab.
- Update the fields for Help, Feedback, and Change Log with your URLs and click Save.
Tips
You can create custom forms in the AI Studio:
1. Open AI Builder.
2. Create a new workspace named Help, Feedback, and Change Log.
3. Create new pages for each feature and name them accordingly.
4. Click + to add a block.
5. Select the Form block.
6. Customize your forms with automation specific to your ITSM or internal feedback tools.
7. Save your pages and make them public.
8. Return to Interface customization and paste the created URLs.
Customizing Sign-in/Sign-up/Forgot Password Pages¶
The logo added in the AI Governance product will appear on the Sign-in, Sign-up, and Forgot Password pages.
To customize the text on these pages:
- Go to AI Governance > Interface customization > Translations.
- Edit the wording to align with your branding and use cases.
To customize the SSO wording and icons, update the following environment variable:
ENABLED_AUTH_PROVIDERS: [
  {
    "name": "custom",
    "label": {"fr": "Connexion avec custom", "en": "Connect with custom"},
    "icon": "http://logo.png"
  }
]
If you want to manage both local accounts and SSO, you can configure it like this:
ENABLED_AUTH_PROVIDERS: [
{"name": "local"},
{"name": "google", "label": "Google", "icon": "https://cdn.iconscout.com/icon/free/png-256/free-google-1772223-1507807.png"}
]
- The local provider allows users to create local accounts on Prisme.ai.
Customizing the platform's look and feel¶
You can personalize the platform's look and feel by navigating to Interface Customization in the left menu. This feature allows you to configure custom CSS for both dark and light mode. Additionally, you can manage and update all translations to ensure a fully tailored user experience.
Roles¶
After restarting the API gateway with the WORKSPACE_OPS_MANAGER variable properly set, several products should appear at the root of the studio. Open AI Governance.
Navigate to the Users & Permissions page and then to the Roles tab:
- Refine permissions for existing roles.
- Change the default role if necessary.
To gain full platform privileges, you can assign yourself the PlatformAdmin role under the Users tab.
To grant additional privileges to another user, you can assign them one of these roles via the same page.
To give another user access to AI Governance, assign them the PlatformManager role in the Manager column.
AI Knowledge¶
Open the AI Knowledge workspace on AI Builder.
API Keys¶
To configure external provider credentials via the platform, access the workspace secrets through the three-dot menu next to the workspace name:
- Fill in all required API keys based on the desired LLM/embedding providers.
- Save.
If these credentials are to be injected at the infrastructure level via environment variables (possibly from a secret manager):
- Open the workspace configuration via the three-dot menu next to the workspace name, then click Edit Source Code.
- Search for all occurrences of {{secret.}} and replace secret with config: '{{secret.openaiApiKey}}' becomes '{{config.openaiApiKey}}'.
For example, the openaiApiKey can now be injected via an environment variable for the prismeai-runtime service: WORKSPACE_CONFIG_ai-knowledge_openaiApiKey (where ai-knowledge is the workspace slug).
If the platform was deployed using Prisme.ai's Terraform and Helm modules, all these environment variables are automatically injected from the secret manager, leaving you to populate the secrets with the correct values.
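On a plain Kubernetes deployment, injecting that variable into the runtime container might be sketched as follows (the Secret name and key are hypothetical):

```yaml
# Fragment of the prismeai-runtime container spec (names are illustrative)
env:
  - name: WORKSPACE_CONFIG_ai-knowledge_openaiApiKey
    valueFrom:
      secretKeyRef:
        name: ai-knowledge-secrets   # hypothetical Kubernetes Secret
        key: openaiApiKey
```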
Vector Store RAG¶
For the proper functioning of RAG, a vector database must be configured.
To do this, open the workspace configuration on AI Builder > AI Knowledge via the three-dot menu next to the workspace name, click Edit Source Code, and configure the vectorStore key for the desired vector database:
Redis Search¶
vectorStore:
  provider: redisSearch
  url: '{{secret.redisUrl}}'
OpenSearch¶
vectorStore:
  provider: openSearch
  url: '{{secret.opensearchUrl}}'
  user: '{{secret.opensearchUser}}'
  password: '{{secret.opensearchPassword}}'
As with API Keys, credentials can be configured either directly in platform secrets or via environment variables for the runtime service.
Model Activation¶
Open the raw workspace configuration via the three-dot menu next to the workspace name and click Edit Source Code.
Update the defaultModels field by adjusting the names of the default models used by AI Knowledge projects in the right-hand section. These model names must match those configured below for OpenAI, OpenAI Azure, Bedrock, and others.
To enable or disable models from a provider:
OpenAI¶
Update the llm.openai.openai.models field.
Example:
llm:
  openai:
    ...
    openai:
      api_key: '{{secret.openaiApiKey}}'
      models:
        - gpt-4
        - gpt-4o
        - o1-preview
        - o1-mini
OpenAI Azure¶
Update the llm.openai.azure.resources.*.deployments field.
Multiple resources can be added by appending additional entries to the llm.openai.azure.resources array.
Example:
llm:
  openai:
    azure:
      resources:
        - resource: "resource name"
          api_key: '{{secret.azureOpenaiApiKey}}'
          api_version: '2023-05-15'
          deployments:
            - gpt-4
            - embedding-ada
Bedrock¶
Update the llm.bedrock.*.models and llm.bedrock.*.region fields.
Multiple regions can be used by appending additional entries to the llm.bedrock array.
Example:
llm:
  ...
  bedrock:
    - credentials:
        aws_access_key_id: '{{secret.awsBedrockAccessKey}}'
        aws_secret_access_key: '{{secret.awsBedrockSecretAccessKey}}'
      models:
        - mistral.mistral-large-2402-v1:0
        - amazon.titan-embed-image-v1
      region: eu-west-3
    - credentials:
        aws_access_key_id: '{{secret.awsBedrockAccessKey}}'
        aws_secret_access_key: '{{secret.awsBedrockSecretAccessKey}}'
      models:
        - amazon.titan-embed-text-v1
      region: us-east-1
OpenAI-Compatible Providers¶
Update the llm.openailike field.
Example:
llm:
  ...
  openailike:
    - api_key: "{{config.apiKey1}}"
      endpoint: "endpoint 1"
      models:
        - mistral-large
    - api_key: "{{secret.apiKey2}}"
      endpoint: "endpoint 2"
      provider: Mistral
      models:
        - mistral-small
        - mistral-mini
      options:
        excludeParameters:
          - presence_penalty
          - frequency_penalty
          - seed
Optional Parameters:
- provider: The provider name used in analytics metrics and dashboards.
- options.excludeParameters: Allows exclusion of certain OpenAI generic parameters not supported by the given model.
Advanced Model Configuration¶
Each model can be configured individually using the modelsSpecifications object.
Example:
modelsSpecifications:
  gpt-4:
    maxContext: 8192
    maxResponseTokens: 2000
    subtitle:
      fr: Modèle hébergé aux USA.
      en: Model hosted in the USA.
    description:
      fr: Le modèle GPT-4 sur OpenAI. Vous pouvez utiliser des documents C1 et C2.
      en: The GPT-4 model on OpenAI. You can use documents C1 and C2.
    rateLimits:
      requestsPerMinute: 1000
      tokensPerMinute: 100000
    failoverModel: 'gpt-4o'
  text-embedding-ada-002:
    type: embeddings
    maxContext: 2048
    subtitle: {}
    description: {}
Notes:
- All LLM models (excluding those with type: embeddings) will automatically appear in the AI Store menu unless disabled at the agent level, with the configured titles and descriptions.
- maxContext specifies the maximum token size of the context that can be passed to the model, applicable to both LLMs (full prompt size) and embedding models (maximum chunk size for vectorization).
- maxResponseTokens defines the maximum completion size requested from the LLM, which can be overridden in individual agent settings.
Rate Limits¶
LLM model rate limits can currently be applied at two stages in the message processing workflow:
- When a message is received (requestsPerMinute limits for projects or users).
- After RAG stages and before the LLM call (tokensPerMinute limits for projects, users, models, or requestsPerMinute limits for models).
Embedding model rate limits are applied before vectorizing a document, per project or model.
Per Model¶
When modelsSpecifications.*.rateLimits.requestsPerMinute or modelsSpecifications.*.rateLimits.tokensPerMinute are defined, an error (customizable via toasts.i18n.*.rateLimit) is returned to any user attempting to exceed the configured limits. These limits are shared across all projects/users using the models.
If these limits are reached and modelsSpecifications.*.failoverModel is specified, projects with failover.enabled activated (disabled by default) will automatically switch to the failover model.
Notes:
- tokensPerMinute limits apply to the entire prompt sent to the LLM, including the user question, system prompt, project prompt, and RAG context.
- Failover and tokensPerMinute limits also apply to intermediate queries during response construction (e.g., question suggestions, self-query, enhanced query, source filtering).
Per Project or User¶
requestsPerMinute and tokensPerMinute limits can also be applied per project and/or user via the limits field in the AI Knowledge workspace configuration:
limits:
  files_count: 20
  llm:
    users:
      requestsPerMinute: 20
      tokensPerMinute: 100000
    projects:
      requestsPerMinute: 300
      tokensPerMinute: 30000
  embeddings:
    projects:
      requestsPerMinute: 200
      tokensPerMinute: 1000000
- limits.llm.users: Defines per-user message/token limits across all projects.
- limits.llm.projects: Defines default message/token limits per project. These limits can be overridden per project via the /admin page in AI Knowledge.
- limits.files_count: Specifies the maximum number of documents allowed in AI Knowledge projects. This number can also be overridden per project via the /admin page.
Notes:
- tokensPerMinute limits apply to the full prompt sent to the LLM, including the system prompt, project prompt, and RAG context.
Web Browsing (Bing Search)¶
To enable web browsing during response generation, provide a Bing API key in tools.webSearch.apiKey.
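In the workspace source code this could be sketched as follows (the secret name bingApiKey is an assumption):

```yaml
tools:
  webSearch:
    apiKey: '{{secret.bingApiKey}}'   # assumed secret name
```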
SSO Authentication¶
If you have your own SSO configured, you need to explicitly allow SSO-authenticated users to access AI Knowledge pages:
- Open the AI Knowledge workspace.
- Open Settings > Advanced.
- Manage roles.
- Add your SSO provider's technical name after prismeai: {} at the very beginning:
free:
  auth:
    prismeai: {}
    yourOwnSso: {}
AI Store¶
Personalization¶
To customize the welcome message for new AI Store users:
- Open the workspace configuration on AI Builder > AI Store via the three-dot menu next to the workspace name, then click Edit Source Code.
- Update the HTML content for each translation in the onBoarding key.
To modify the warning message ("This agent may make mistakes...") below the chat interface, update the translations in inputGuidanceMessage.
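As an illustrative sketch only (the key layout is assumed from the field names above, and the wording is placeholder text):

```yaml
# Assumed translation layout for the AI Store workspace config
onBoarding:
  en: "<h2>Welcome to the AI Store</h2><p>Pick an agent to get started.</p>"
  fr: "<h2>Bienvenue sur l'AI Store</h2><p>Choisissez un agent pour commencer.</p>"
inputGuidanceMessage:
  en: "This agent may make mistakes. Verify important information."
  fr: "Cet agent peut faire des erreurs. Vérifiez les informations importantes."
```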
SSO Authentication¶
If you have your own SSO configured, you need to explicitly allow SSO-authenticated users to access AI Store pages:
- Open the AI Store workspace.
- Open Settings > Advanced.
- Manage roles.
- Add your SSO provider's technical name after prismeai: {} at the very beginning:
user:
  auth:
    prismeai: {}
    yourOwnSso: {}
AI Insight¶
Initialization¶
- Open the AI Store workspace on AI Builder.
- Access the configuration of the Conversations Service App and copy its apiKey.
- Open the AI Insight workspace.
- Update the Conversations Service App configuration with the copied apiKey from AI Store.
- Save.
SSO Authentication¶
If you have your own SSO configured, you need to explicitly allow SSO-authenticated users to access AI Insight pages:
- Open the AI Insight workspace.
- Open Settings > Advanced.
- Manage roles.
- Add your SSO provider's technical name after prismeai: {} at the very beginning:
user:
  auth:
    prismeai: {}
    yourOwnSso: {}