AI Knowledge Analytics provides comprehensive visibility into your AI agents’ performance, usage patterns, and resource consumption. These powerful dashboards help you optimize costs, identify improvement opportunities, and ensure your AI solutions deliver maximum value to your organization.

Key Analytics Features

Usage Metrics

Track conversations, users, and document generation across all agents

Token Consumption

Monitor input and output token usage with detailed breakdowns

Cost Analysis

Track expenditure with detailed cost attribution by agent and model

Performance Trends

Identify usage patterns and performance changes over time

Agent Comparison

Compare effectiveness and efficiency across your AI agent portfolio

Custom Date Ranges

Analyze data across flexible time periods from 1 day to 12 months

Analytics Dashboards

AI Knowledge Analytics offers multiple dashboards to help you understand different aspects of your AI deployment.
  • Global Analytics
  • Usage Analytics
  • Carbon Footprint
  • Agent Analytics
The Global Analytics section provides administrators with a comprehensive view of platform-wide metrics, divided into four main areas: Dashboard, Costs, Usages, and Carbon Footprint.

Main Dashboard for Admins

  • Total Users, Sessions, and Added Documents: Counts covering interactions with existing agents, as well as agents created during the selected period (default: 7 days).
  • Generated Responses, Global Token Count, and Estimated Cost in USD ($): Based on the cost per million tokens of each model specified under the AI Knowledge configuration.
  • Tokens and Messages per Agent: A ranking of agents by token and message counts, along with tokens per day per agent.
  • Charts: Displaying aggregations on the number of messages, users, and tokens per day per agent.
  • Model-Specific Performance: Tokens and messages per model, and the time to first response token per model, in milliseconds.
Each metric includes an evolution value showing the change over the selected time period compared to the previous equivalent period (e.g., the last 7 days versus the 7 days before). Additionally, this section highlights the most frequently referenced documents, detailing the total number of citations, along with daily aggregates of answers, users, and tokens for each agent.
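These dashboard figures can also be reproduced directly against the usage index described in the Custom Usage Analytics section below. As a minimal sketch (assuming the @timestamp and aggPayload fields shown there), the following request compares total tokens over the last 7 days with the 7 days before, which is the comparison behind the evolution value:
{
  "size": 0,
  "query": { "term": { "type": "usage" } },
  "aggs": {
    "currentPeriod": {
      "filter": { "range": { "@timestamp": { "gte": "now-7d/d" } } },
      "aggs": { "tokens": { "sum": { "field": "aggPayload.usage.total_tokens" } } }
    },
    "previousPeriod": {
      "filter": { "range": { "@timestamp": { "gte": "now-14d/d", "lt": "now-7d/d" } } },
      "aggs": { "tokens": { "sum": { "field": "aggPayload.usage.total_tokens" } } }
    }
  }
}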

Cost Dashboard for Admins

The Cost Dashboard provides insights into the financial aspects of AI operations:
  • Estimated Cost in USD ($): Displayed in a main card, showing the change compared to the previous equivalent period.
  • Charts: Visualize costs in various dimensions:
    • Cost Per Agent
    • Cost Per Model
    • Cost Per Provider
    • Cost Per Feature
  • Aggregations: Line charts display cost per model and per agent.
The data displayed here can be filtered by model for more specific insights.
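Since each usage event carries a cost value (see the Custom Usage Analytics section below), these charts reduce to straightforward aggregations. A minimal sketch, assuming the event mapping shown there, that computes cost per model and per provider:
{
  "size": 0,
  "query": { "term": { "type": "usage" } },
  "aggs": {
    "costPerModel": {
      "terms": { "field": "aggPayload.model" },
      "aggs": { "cost": { "sum": { "field": "aggPayload.usage.cost" } } }
    },
    "costPerProvider": {
      "terms": { "field": "aggPayload.provider" },
      "aggs": { "cost": { "sum": { "field": "aggPayload.usage.cost" } } }
    }
  }
}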

Using Analytics Effectively

1. Access Analytics: Navigate to the Analytics section from your AI Knowledge dashboard.
2. Select a time period: Choose your desired timeframe for analysis using the date selectors. Options include standard periods (1 day, 7 days, 30 days, 12 months) or custom date ranges.
3. Review key metrics: Examine the main performance indicators for your agents. Pay special attention to significant changes or trends in usage and costs.
4. Drill down into specific agents: Click on individual agents to see detailed performance metrics. Compare agents to identify best practices and improvement opportunities.

Best Practices for Analytics

Regular Reviews

Schedule weekly or monthly analytics reviews to track performance trends

Benchmark Agents

Compare similar agents to establish performance benchmarks

Token Optimization

Identify and optimize high token consumption scenarios

User Feedback Correlation

Connect analytics data with user feedback for deeper insights

Cost Allocation

Use analytics to allocate AI costs to appropriate departments

Continuous Improvement

Implement regular optimizations based on analytics insights

Token Optimization Strategies

Based on analytics insights, consider these strategies to optimize token usage and costs (a query sketch for locating high-consumption scenarios follows the list):
1. Knowledge base refinement: Streamline knowledge bases to include only the most relevant information.
2. Prompt engineering: Refine system prompts and instructions to be more efficient.
3. Model selection: Choose the most cost-effective model for each use case.
4. Context window management: Optimize how much context is included in each interaction.
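To locate the high-consumption scenarios these strategies target, usage events can be ranked by average prompt size. A minimal sketch, assuming the event mapping described in the Custom Usage Analytics section below, that orders models by average prompt tokens:
{
  "size": 0,
  "query": { "term": { "type": "usage" } },
  "aggs": {
    "byModel": {
      "terms": {
        "field": "aggPayload.model",
        "order": { "avgPromptTokens": "desc" }
      },
      "aggs": {
        "avgPromptTokens": { "avg": { "field": "aggPayload.usage.prompt_tokens" } }
      }
    }
  }
}
Swapping aggPayload.model for aggPayload.projectId should give the same ranking per project, assuming that field is mapped as aggregatable.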

Custom Usage Analytics

Both LLM and embeddings usage are tracked as usage events, persisted with a custom aggPayload mapping that enables numeric aggregations in Elasticsearch/OpenSearch requests. Example usage event:
{
  "type": "usage",
  "aggPayload": {
    "channel": "api",
    "api": "completions",
    "tool": "genericQuery",
    "usage": {
      "firstTokenDuration": 804,
      "completion_tokens": 92,
      "prompt_tokens": 2515,
      "total_tokens": 2607,
      "duration": 3953,
      "cost": 0.00720749999999999951
    },
    "projectId": "project id",
    "model": "gpt-4o",
    "provider": "openai",
    "messageId": "message id",
    "userId": "user id",
    "finishReasons": [
      "stop"
    ]    
  }
}
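For reference, the cost value is consistent with per-million-token pricing: assuming, for illustration, rates of $2.50 per million input tokens and $10.00 per million output tokens for gpt-4o, the cost works out to 2515 / 1,000,000 × 2.50 + 92 / 1,000,000 × 10.00 ≈ 0.0072075 USD, matching the value above (the long decimal tail is ordinary floating-point noise). The actual rates are whatever is configured as the cost per million tokens for each model under AI Knowledge.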
Example ES/OS aggregation request:
Search request body
{
  "size": 0,
  "query": {
    "bool": {
      "must": {
        "range": {
          "@timestamp": {
            "gte": "2025-04-09T14:27:41.684Z",
            "lt": "2025-04-16T14:27:41.684Z"
          }
        }
      },
      "filter": [
        { "terms": { "type": ["usage"] } },
        {
          "bool": {
            "should": [
              {
                "bool": {
                  "must_not": [
                    { "term": { "payload.api": "embeddings" } }
                  ]
                }
              }
            ]
          }
        },
        { "terms": { "payload.projectId": ["<project id>"] } }
      ]
    }
  },
  "aggs": {
    "totalUsers": {
      "cardinality": { "field": "aggPayload.userId" }
    },
    "totalMessages": {
      "filter": { "term": { "type": "usage" } }
    },
    "tokens": {
      "sum": { "field": "aggPayload.usage.total_tokens" }
    },
    "completionTokens": {
      "sum": { "field": "aggPayload.usage.completion_tokens" }
    },
    "promptTokens": {
      "sum": { "field": "aggPayload.usage.prompt_tokens" }
    }
  }
}
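The per-day charts on the dashboards follow the same pattern with a date_histogram. A minimal sketch, reusing the same fields, that buckets total tokens and distinct users by day:
{
  "size": 0,
  "query": { "term": { "type": "usage" } },
  "aggs": {
    "perDay": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "tokens": { "sum": { "field": "aggPayload.usage.total_tokens" } },
        "users": { "cardinality": { "field": "aggPayload.userId" } }
      }
    }
  }
}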

Next Steps

Explore more detailed guides for AI Knowledge Analytics:

Overview of Dashboards

This document provides an introduction to the three main dashboards available in our platform: General Statistics, Usage Analysis, and Carbon Footprint. These dashboards are designed to offer insights into both specific agent activities and overall platform performance.

General Statistics Dashboard

The General Statistics dashboard offers a snapshot of key metrics related to agent interactions:
  • Generated Answers: The total number of responses generated by the agent.
  • Users: The number of individuals who have interacted with the agent.
  • Sessions: The number of sessions initiated for potential interaction with the agent.
  • Added Documents: The number of documents added to the AI Knowledge project related to the agent.
  • User Satisfaction: Feedback from users, categorized as positive or negative.
Each metric includes a comparison with the previous period to show trends over time. This section also highlights the most frequently referenced documents and provides daily aggregates of answers, users, and tokens.

Usage Analysis Dashboard

The Usage Analysis dashboard provides detailed insights into how agents are used:
  • Requests per User: Average, minimum, and maximum number of requests made by users.
  • Conversations per User: Average, minimum, and maximum number of conversations initiated by users.
  • Requests per Conversation: Average, minimum, and maximum number of requests within a single conversation.
  • Dropouts: Users who interacted with the agent during the previous corresponding period but have not returned in the current one.
This dashboard also analyzes token consumption, including total, input, and output tokens. Visual charts display requests per day, average requests per day of the week, and the most popular tools used.
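Per-user statistics like these can be derived with a terms aggregation over users plus a stats_bucket pipeline over the resulting bucket counts. A minimal sketch, assuming the usage event mapping from the Custom Usage Analytics section (the size cap is illustrative; very large user populations may call for a composite aggregation instead):
{
  "size": 0,
  "query": { "term": { "type": "usage" } },
  "aggs": {
    "requestsPerUser": {
      "terms": { "field": "aggPayload.userId", "size": 10000 }
    },
    "requestsPerUserStats": {
      "stats_bucket": { "buckets_path": "requestsPerUser>_count" }
    }
  }
}
The stats_bucket result carries the average, minimum, and maximum request count per user in a single response.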

Carbon Footprint Dashboard

The Carbon Footprint dashboard assesses the environmental impact of using AI models:
  • Energy Consumption: The electrical energy required to power and operate the models.
  • Global Warming Potential (GWP): The impact of greenhouse gas emissions, expressed relative to CO₂.
  • GPU and Server Energy: Energy consumed by GPUs and servers.
  • Power Usage Effectiveness (PUE): A measure of data center energy efficiency.
  • Emission Factor: The carbon footprint per unit of computation.
Charts in this section aggregate average emissions per day.
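As a rough illustration of how these factors combine (a common life-cycle formulation, not necessarily the platform's exact model): total energy ≈ (GPU energy + server energy) × PUE, and GWP ≈ total energy × emission factor. A PUE of 1.2, for example, inflates the IT-equipment energy by 20% to account for cooling and facility overhead before the emission factor converts kilowatt-hours into CO₂ equivalents.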

Global Administration Analytics

The Global Administration Analytics section provides a comprehensive view of platform-wide metrics, divided into four main areas: Dashboard, Costs, Usages, and Carbon Footprint.

Main Dashboard

This section displays:
  • Total Users, Sessions, and Added Documents: Metrics for all agents, including new agents created in the selected period.
  • Generated Responses and Global Token Count: Overall platform activity.
  • Cost Estimates: Based on token usage and model configurations.
  • Performance Metrics: Tokens and messages per model, and response times.

Cost Dashboard

The Cost Dashboard breaks down financial data:
  • Estimated Costs: Displayed with comparisons to previous periods.
  • Cost Analysis: Charts show costs per agent, model, provider, and feature.
  • Aggregations: Line charts for cost per model and agent.
The data displayed here can be filtered by model for more specific insights.

This document aims to provide a clear and concise understanding of the platform’s analytics capabilities, making it accessible to users unfamiliar with the system.