TL;DR: Alysium analytics works on two levels: the Space Dashboard shows your whole workspace at a glance (total conversations, unique users, average rating, active agents, weekly growth), and per-agent analytics gives you full conversation replay, helpfulness ratings, date range filtering, full-text search, and CSV export. The most useful thing in the product isn't a metric — it's reading the actual conversations.
Most analytics dashboards give you numbers without context. Pageviews. Bounce rates. Session duration. You stare at them and try to reverse-engineer what went wrong.
Alysium's analytics are different because the unit of insight is the conversation itself — not an abstraction of it. You can open any conversation your agent has ever had, read it start to finish, and understand exactly where it helped and where it didn't. That's what makes the analytics useful: it's qualitative improvement infrastructure, not just a counter.
The Space Dashboard
When you log in to Alysium, the first thing you see is the Space Dashboard — a workspace-level overview of all your agents combined. It shows total conversations across all your agents, total unique users helped, average helpfulness rating across the workspace, the number of active agents you have published, and a weekly growth percentage that tells you whether usage is trending up or down.
Each agent appears as a card in the dashboard with its name, avatar, helpfulness rating, conversation count, weekly growth trend arrow, and the timestamp of its last active conversation. Click any card and you go directly into that agent's detailed analytics view. The dashboard is your daily pulse check — thirty seconds tells you whether anything needs attention today.
One thing to note: a dedicated "top question" metric isn't currently shown in the Space Dashboard. If you want to know what your users are asking most, you'll get that picture faster by scanning conversation transcripts than by waiting for an aggregate metric.
Per-Agent Analytics
Click into any agent and you see the full breakdown for that specific agent. The header shows unique users, total conversations, total messages exchanged, helpful and unhelpful feedback counts, helpfulness rate as a percentage, and weekly growth. Below that is the full conversation list — every conversation the agent has had, with a content preview, message count, and read/unread status.
This is the layer most builders give up on too quickly. The aggregate numbers tell you that something is off; the conversation list tells you what. Before your weekly agent review, scan the helpfulness percentage: if it dropped this week, the conversation list will show you the cluster of conversations where users marked responses unhelpful, and reading three or four of those usually reveals the pattern.
Reading Conversation Transcripts
Click any conversation in the list and it opens as a full replay — every message in the exchange, in order, exactly as it happened. You can see what the user asked, how the agent responded, whether the user gave a thumbs up or thumbs down, and how the conversation ended.
This is the most underused feature in Alysium. Builders who review 20–30 transcripts per week consistently find the same types of issues: a document that's missing a specific detail users keep asking about, an instruction that's producing responses that are technically correct but not useful in tone, a knowledge base gap on a topic that comes up repeatedly. None of these show up as a metric. They only appear when you read the actual conversation.
The date range filter applies to the conversation list, so you can scope your review to the last seven days, last thirty days, this month, or any custom range. For agents that have been live for months, this keeps the review manageable — you're not scrolling through six months of history, just the current period.
Date Range Filtering
Every metric in Alysium analytics — helpfulness rate, conversation count, unique users, weekly growth — and the full conversation list all update instantly when you apply a date range filter. Preset options include Today, Yesterday, Last 7 days, Last 30 days, This week, This month, and Last month. For more precise comparisons, a calendar picker lets you set any custom start and end date.
The practical use is measuring the impact of agent updates. When you rewrite your instructions or upload a new document, set the date range to show the two weeks before and after the change. If helpfulness rate improved, the change worked. If it didn't move, something else is driving the rating pattern and you need to read more transcripts.
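If you'd rather run this before-and-after comparison outside the product, the same check can be sketched against an exported CSV. This is a minimal illustration, not Alysium's actual export schema: the column names (`timestamp`, `feedback`) and the sample rows are assumptions, so adjust them to match what your export actually contains.

```python
import csv
import io
from datetime import date

# Hypothetical export rows; real Alysium CSV columns may differ.
SAMPLE_CSV = """timestamp,conversation_id,feedback
2024-05-01,c1,helpful
2024-05-03,c2,unhelpful
2024-05-20,c3,helpful
2024-05-22,c4,helpful
"""

def helpfulness_rate(rows):
    """Share of rated conversations marked helpful (None if nothing was rated)."""
    rated = [r for r in rows if r["feedback"] in ("helpful", "unhelpful")]
    if not rated:
        return None
    return sum(r["feedback"] == "helpful" for r in rated) / len(rated)

def split_by_change_date(rows, change_date):
    """Partition conversations into before/after the day you shipped a change."""
    before = [r for r in rows if date.fromisoformat(r["timestamp"]) < change_date]
    after = [r for r in rows if date.fromisoformat(r["timestamp"]) >= change_date]
    return before, after

rows = list(csv.DictReader(io.StringIO(SAMPLE_CSV)))
before, after = split_by_change_date(rows, date(2024, 5, 15))
print(helpfulness_rate(before), helpfulness_rate(after))  # 0.5 1.0
```

If the after-period rate is clearly higher, the change likely worked; if the two numbers are close, go read transcripts from both windows before drawing a conclusion.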
Seasonal patterns also appear clearly with date range filtering. If you're a coach whose client activity peaks in January and September, your analytics will show those spikes — and the conversation topics during those periods will tell you what your clients are working through most intensely at each time of year.
Conversation Search
The search bar in analytics does full-text search across every conversation your agent has had — both user messages and AI responses. As you type, autocomplete suggestions appear based on your recent searches for that agent, with up to 10 recent searches stored per agent locally in your browser.
The best search strategies aren't keywords — they're concepts. Try searching for a specific feature of your service or product that you suspect people misunderstand. Try searching for "I'm not sure" or "I don't know" to find conversations where the agent expressed uncertainty. Try searching for a competitor's name to see whether users are asking comparison questions and how the agent is handling them.
Combine search with date range filtering for the most targeted review. If you updated your pricing last month, search for "pricing" or "cost" filtered to the last 30 days to see every conversation where price came up and whether the agent's responses reflect the current numbers.
Exporting Data
For any agent, you can export conversation data in two formats: CSV (structured data including timestamps, conversation IDs, and message content, ready for spreadsheet analysis) or plain text (cleaner for reading and archiving). Export options work for individual conversations or as a bulk download of all conversations for an agent.
The most useful application of bulk CSV export is running your own analysis in Google Sheets. Sort by conversation length and read the shortest ones — these are usually conversations where the user didn't get what they came for and left quickly. Sort by date and look at conversations from just after a product or policy change to see whether users are encountering updated information correctly.
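The shortest-conversations review above can also be done in a few lines of Python instead of Google Sheets. A minimal sketch, assuming hypothetical column names (`conversation_id`, `message_count`, `preview`) and invented sample rows; substitute the headers from your real export.

```python
import csv
import io

# Hypothetical bulk export; real Alysium CSV columns may differ.
SAMPLE_CSV = """conversation_id,timestamp,message_count,preview
c1,2024-05-01,2,"How do I reset my password?"
c2,2024-05-02,14,"Can you walk me through onboarding?"
c3,2024-05-03,1,"pricing?"
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE_CSV)))

# Shortest conversations first: often users who left without an answer.
shortest = sorted(rows, key=lambda r: int(r["message_count"]))
for r in shortest[:3]:
    print(r["conversation_id"], r["message_count"], r["preview"])
```

Reading the previews of the few shortest conversations each week is usually enough to spot the questions your agent is failing to engage with.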
Export is also the right tool for compliance documentation if you're in an industry where conversation records matter, or for sharing specific conversation examples with a team member who doesn't have Alysium access.
Making Decisions From Analytics
The goal of reviewing analytics isn't to watch the numbers — it's to improve the agent. Every review session should end with at least one specific action: a document update, an instruction revision, a new conversation starter that addresses a pattern you saw.
The most effective improvement cycle is weekly: check the Space Dashboard for any surprising movements in helpfulness rate or conversation volume, open per-agent analytics for your active agents, read 15–20 transcripts filtered to the last seven days, identify the one or two most common failure patterns, and make targeted changes before next week's review.
Agents that get reviewed and updated regularly outperform agents that get configured once and left alone. The analytics are designed for exactly this — not to surface vanity metrics, but to give you the raw material to make your agent progressively more useful over time.
Open your analytics. Log in to Alysium — the Space Dashboard is the first thing you see.