A guide to the current capabilities of AI-driven analytics
This section provides a capability-first view of the modern AI ecosystem. It is primarily intended for use during the capability-matching phase of use-case viability assessments, a key step in forming an AI strategy for your organisation. The core capabilities of AI-driven analytics include:
Example: Conversion prediction
A conversion prediction model might use historical data on sales, customer demographics, and market trends to predict whether a future lead will convert.
Predictive AI systems use historical data to predict future events. These systems typically rely on machine learning models to generate probabilistic estimates of outcomes such as churn, conversion, or user drop-off.
When to use
Will this customer churn?
Which leads in our pipeline will convert?
Data requirements
Historical dataset containing:
The target event or a proxy (e.g. sales or churn at a customer level)
Features known to influence the event (e.g. customer demographics, behaviour, market trends)
Implementation notes
Train models on historical data to learn feature-outcome relationships
Rows should represent consistent observation windows (e.g. one customer per month)
Use a tabular classifier model; TrueState’s default XGBoost classifier is a reliable choice
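The observation-window idea above can be sketched in plain Python. The schema, field names, and churn proxy below are illustrative assumptions for this sketch, not TrueState specifics; the point is that each row pairs one consistent window (customer + month) with its features and target.

```python
# Raw activity log: one record per customer per month (assumed schema).
events = [
    {"customer": "A", "month": "2024-01", "logins": 9, "tickets": 0},
    {"customer": "A", "month": "2024-02", "logins": 2, "tickets": 3},
    {"customer": "B", "month": "2024-01", "logins": 5, "tickets": 1},
]
# Target event observed after each window (illustrative churn proxy).
churned = {("A", "2024-01"): 0, ("A", "2024-02"): 1, ("B", "2024-01"): 0}

def build_rows(events, churned):
    """One row per (customer, month) window: features plus target."""
    rows = []
    for e in events:
        key = (e["customer"], e["month"])
        rows.append({
            "customer": e["customer"],
            "month": e["month"],
            "logins": e["logins"],             # behavioural feature
            "tickets": e["tickets"],           # support-load feature
            "churn_next_month": churned[key],  # target event for this window
        })
    return rows

rows = build_rows(events, churned)
```

A table shaped like `rows` is what a tabular classifier such as XGBoost expects: features in columns, one labelled observation window per row.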
Backup approach
If the data isn't ready, LLMs can apply a rules-based estimation approach that mimics expert judgement. This is useful early on, but lacks statistical rigour. If you must use this approach, ensure you collect data that satisfies the data requirements above, so you can transition to event prediction later.
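A rules-based stand-in often reduces to a hand-written scoring function like the sketch below. The rules, weights, and base rate are hypothetical stand-ins for expert judgement, not statistically derived values.

```python
def estimate_conversion(lead):
    """Score a lead with hand-written rules that mimic expert judgement.

    This is a stopgap, not a statistical model: the thresholds and
    weights below are illustrative assumptions, not learned parameters.
    """
    score = 0.1  # assumed base conversion rate
    if lead.get("industry") == "software":
        score += 0.2
    if lead.get("employees", 0) > 100:
        score += 0.3
    if lead.get("demo_requested"):
        score += 0.3
    return min(score, 1.0)

likelihood = estimate_conversion({"industry": "software", "demo_requested": True})
```

Because each rule is explicit, the scores are easy to explain to stakeholders, which is part of why this works as an interim measure while proper training data is collected.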
Example: Email categorisation
Automatically categorising incoming emails as ‘Urgent’, ‘Important’, or ‘Regular’ based on content, sender, and metadata.
Classification systems assign inputs to predefined categories, and work with structured data, text, and images across a wide range of scenarios. The difference from event prediction is that event prediction requires a specific unit of analysis (e.g. “person + month”) to predict an outcome, whereas classification applies the same algorithms to a broader range of inputs.
When to use
Tagging support tickets by priority
Flagging at-risk transactions
Identifying document type or status
Data requirements
Labelled dataset with:
Examples of input items
Corresponding class labels
Features that differentiate classes
Implementation notes
Supports binary and multi-class classification
TrueState’s default classifier is optimised for structured data, while the text classifier handles free-text inputs.
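To make the multi-class idea concrete, here is a minimal keyword-count classifier for the email example above. It is an illustrative stand-in for a trained text classifier, not TrueState's implementation, and the lexicon is assumed rather than learned.

```python
import re

# Illustrative keyword lexicon per class (assumed, not learned from data).
LEXICON = {
    "Urgent": {"outage", "down", "immediately", "asap"},
    "Important": {"renewal", "invoice", "deadline"},
    "Regular": {"newsletter", "update", "fyi"},
}

def classify(text):
    """Assign the class whose keywords appear most often in the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {label: len(words & keywords) for label, keywords in LEXICON.items()}
    best = max(scores, key=scores.get)
    # Fall back to 'Regular' when no keywords match at all.
    return best if scores[best] > 0 else "Regular"
```

A trained classifier replaces the hand-written lexicon with features learned from the labelled examples, but the input/output contract is the same: text in, one of the predefined labels out.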
Backup approach
Zero-shot classification using LLMs is helpful for prototyping or when labelled data is limited. Performance can vary, so treat it as a temporary solution.
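A zero-shot prototype usually amounts to a prompt like the one built below. The wording of the template is an illustrative assumption, and the model call itself is omitted because it depends on your LLM provider.

```python
def zero_shot_prompt(text, labels):
    """Build a zero-shot classification prompt for an LLM.

    The prompt wording is an illustrative template; the actual model
    call (not shown) depends on whichever LLM provider you use.
    """
    options = ", ".join(labels)
    return (
        f"Classify the following email into exactly one of: {options}.\n"
        f"Reply with the label only.\n\n"
        f"Email:\n{text}"
    )

prompt = zero_shot_prompt("Server is down!", ["Urgent", "Important", "Regular"])
```

Constraining the reply to "the label only" makes the response easy to parse, which matters when the output feeds a downstream pipeline.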
Example: Inventory demand
Predicting daily demand for specific products across different store locations, factoring in seasonality and local events.
Granular forecasting estimates future numeric values, such as sales volume or traffic, at regular intervals across defined groups or entities.
How it differs from event prediction
Event prediction focuses on whether an event will occur (e.g. churn), whereas granular forecasting focuses on how much of something will occur (e.g. units sold).
When to use
Demand forecasting by SKU or location
Financial projections at weekly/monthly levels
Traffic or engagement prediction by segment
Data requirements
Time series data including:
Historical target values (e.g. units sold, visits)
Temporal patterns (seasonality, trends)
Known external factors (e.g. holidays, campaigns)
2–3 full seasonal cycles preferred
Implementation notes
Include time-based features and external context
Watch for anomalies and events
Use rolling window validation for accuracy
Forecast horizon should match business needs
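Rolling-window validation can be sketched as follows. The naive "last observed value" forecaster is a placeholder for whatever model you actually fit; the structure of the loop (expand the training window, forecast one step, score, repeat) is the part that carries over.

```python
def rolling_validation(series, min_train=3):
    """Score a forecaster by repeatedly moving the forecast origin forward.

    Uses a naive last-value forecaster as a placeholder model and returns
    the mean absolute error across all one-step-ahead forecasts.
    """
    errors = []
    for t in range(min_train, len(series)):
        train = series[:t]       # everything up to the forecast origin
        forecast = train[-1]     # placeholder: naive one-step forecast
        errors.append(abs(series[t] - forecast))
    return sum(errors) / len(errors)

mae = rolling_validation([10, 12, 11, 13, 14, 13])
```

Validating this way, rather than with a random train/test split, respects the time ordering of the data, so the score reflects how the model would actually be used in production.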
Backup approach
If limited data exists, simple statistical methods (e.g. moving averages) can establish a baseline.
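The moving-average baseline mentioned above is only a few lines; the window size of 3 here is an arbitrary illustrative choice.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

forecast = moving_average_forecast([100, 120, 110, 130], window=3)
```

Any model you later train should beat this baseline comfortably; if it doesn't, the added complexity isn't paying for itself.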
Example: Report generation
Generating structured reports from performance data, including commentary and recommendations.
AI writing systems generate fluent, human-like text, often combining structured data with domain knowledge to explain, summarise, or narrate insights.
When to use
Executive summaries from dashboards
Weekly team updates from usage metrics
Drafting internal comms or client reports
Data requirements
For domain-specific writing:
Examples of ideal outputs
Business-relevant terminology and tone-of-voice
Structured source data (e.g. tables, charts)
Implementation notes
Use output templates to control structure and length
Layer in fact-checking and constraint validation
Define style rules for clarity and consistency
Review samples regularly for tone and factual quality
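The fact-checking step above can start very simply: verify that every number quoted in the generated text actually appears in the source data. The regex-based check below is a deliberate simplification to illustrate the idea.

```python
import re

def unverified_numbers(generated_text, source_values):
    """Return numeric claims in the text that don't appear in the source data.

    A deliberately simple constraint check: a real system would also
    handle units, rounding, and figures derived from the source values.
    """
    allowed = {str(v) for v in source_values}
    claims = re.findall(r"\d+(?:\.\d+)?", generated_text)
    return [c for c in claims if c not in allowed]

issues = unverified_numbers("Revenue grew to 120 with 5 new clients", [120, 4])
```

Any claim returned by the check can be flagged for human review before the report ships, which keeps hallucinated figures out of client-facing output.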
Backup approach
Template-based systems with rule-based logic can be used for consistent, repeatable outputs before layering in generative language.
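A template-plus-rules generator might look like the sketch below; the template wording and the above/below-plan rule are illustrative choices, not a prescribed format.

```python
from string import Template

# Fixed report skeleton: structure and length are fully controlled.
REPORT = Template("Weekly report: revenue was $revenue, which is $verdict plan.")

def build_report(revenue, plan):
    """Fill a fixed template, choosing the commentary with a simple rule."""
    verdict = "above" if revenue >= plan else "below"
    return REPORT.substitute(revenue=revenue, verdict=verdict)

report = build_report(revenue=120, plan=100)
```

Because every sentence comes from the template or an explicit rule, output is fully predictable, a useful property to preserve (via templates and validation) even once generative language is layered in.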