FAQS
PROCESS
-
CUE offers a full end-to-end framework - but it’s designed to be modular. Some firms use the whole process; others focus only on what they need. The lifecycle starts with confirming if there’s demand for the product, then moves through literature testing and rewriting, and ends with accessibility. Each stage is standalone, but when combined, they give a comprehensive understanding of how well a product is likely to land with its target audience.
-
It starts with the product itself. The first question is: does this product meet real investor needs? That’s where the efficacy pillar comes in. CUE uses large-scale surveys to map product features against investor preferences, identifying which segments (Basic, Informed, Sophisticated) show interest, and estimating the size of the opportunity - in both audience and potential wallet share.
-
That’s where readability and comprehension take over. CUE’s readability pillar reviews (and rewrites, if needed) any client-facing content - brochures, factsheets, FAQs - to ensure the language is appropriate for the intended audience. We test for structural clarity, plain English, and cognitive load, tailored to the target market segment.
-
That’s handled through CUE’s comprehension pillar. We simulate investor understanding using our Virtual Customer Personas (VCPs) - AI models calibrated to mirror how real investors read, interpret, and sometimes misinterpret documents. If a brochure doesn’t pass, we escalate to real-world surveys, and if needed, 1:1 interviews to validate what’s being misunderstood and why.
-
Yes - once content is live, accessibility becomes the focus. CUE ensures that core documents are available in alternative formats where needed - including large print, audio, or Braille - so firms can meet their obligations to support vulnerable clients or customers with additional accessibility needs.
-
You can pick and choose. Some firms only want to test comprehension. Others come in to assess a new product idea or commission rewrites on difficult paragraphs. The system is built to slot in wherever it’s needed - but when used in full, it gives you the clearest picture of how well a product is likely to be understood, by whom, and why.
COMPREHENSION
-
Comprehension is about testing whether investors genuinely understand the information they’re being given. In CUE, this means identifying the key facts, risks, and product mechanics that a document is intended to communicate - and then using our Virtual Customer Personas (VCPs) to test if those messages actually land. It’s not about whether the words are readable - it’s about whether they result in real understanding.
-
Readability looks at how easy the language is to process - sentence length, jargon, abstraction, and tone. It asks: Is the text written in a way that promotes understanding? Comprehension goes further. It asks: Did the investor actually understand it? You can pass a readability test and still fail a comprehension check - especially with vulnerable or Basic investors who might misinterpret or miss key details.
-
CUE starts by mapping out the intended messages of a document - what it’s trying to explain or disclose. We then test each message using our calibrated personas, who process the content and respond in their own words. Their answers are evaluated against what the brochure should have conveyed. This shows whether each investor type would likely walk away with the right understanding - or a dangerous misconception.
-
Foundational comprehension is defined by the firm - it captures what they believe the customer should walk away understanding. Before testing begins, we work with the client to document the key messages, risks, and features they believe the literature is designed to convey. This becomes the benchmark we test against. It's the only layer that’s bespoke to each document - and it's what ensures we’re measuring understanding of what the firm actually intended to communicate.
-
Tier 1 questions are a fixed set of core questions that CUE always asks when assessing financial documents. They cover universal areas where investors often get confused - like cost disclosures, key risks/benefits, or conditional outcomes. These questions apply whether or not the firm explicitly flagged them, and they help catch gaps in communication that might otherwise be overlooked. Think of Tier 1 as a safety net: it ensures the basics are covered even when the literature isn’t product-specific.
-
Tier 2 questions are also predefined by CUE, but they’re targeted. Each one is linked to a specific product feature or disclosure - like kickout payoffs, emerging markets, collateralisation, or death benefits. These are only triggered if the feature appears in the document. That means different Tier 2 question sets are used depending on what you’re testing - whether it's a brochure, a suitability letter, or a valuation report. They’re focused, scalable, and highly specific.
-
Absolutely. The foundational layer is always firm-defined - it reflects what they believe is material. Tier 1 and Tier 2 are then layered on top by CUE, giving firms a structured way to identify whether their intended messages landed, and whether other known risk areas were also clearly understood. This combination ensures both firm-specific goals and industry-wide risks are tested - in one integrated process.
-
Tier 1 and Tier 2 questions are built from deep industry insight - they’re not created ad hoc or guessed by AI. Tier 1 questions come from our research into FCA guidance, FOS complaint themes, academic studies, and behavioural finance reports. They target the core areas regulators consistently flag as being misunderstood - like charges, capital exposure, or product conditions. Tier 2 questions, on the other hand, are developed at the industry level - working with bodies like UKSPA and drawing from product design norms - to cover feature-specific disclosures that need to be tested consistently.
-
Tier 1 is broad and thematic - we create it by looking across regulatory sources, complaint data, and known patterns of misunderstanding. These are the questions every firm should be asking, regardless of what’s in their foundational brief. Tier 2 is narrower and technical - it’s developed by analysing common product features and risks across the market. For example, we might have a Tier 2 question bank for early withdrawal penalties, performance caps, or switching restrictions - each triggered only when relevant language appears in the document.
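To make the trigger mechanism concrete, here is a minimal sketch of feature-triggered question selection - the feature keywords and question banks are illustrative examples, not CUE’s actual banks:

```python
# Sketch of feature-triggered Tier 2 question selection, as described above.
# Feature keywords and question banks are illustrative examples only.
TIER2_BANKS = {
    "early_withdrawal": {"keywords": ["early withdrawal", "exit penalty"],
                         "questions": ["What happens if you withdraw before maturity?"]},
    "performance_cap": {"keywords": ["capped at", "maximum return"],
                        "questions": ["Is there a limit on the return you can receive?"]},
}

def triggered_questions(document: str) -> list[str]:
    text = document.lower()
    questions = []
    for bank in TIER2_BANKS.values():
        if any(kw in text for kw in bank["keywords"]):
            questions.extend(bank["questions"])
    return questions

print(triggered_questions("Returns are capped at 7% and an exit penalty applies."))
```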
-
No - that’s the value of CUE. We bring a prebuilt library of rigorously developed questions so firms don’t have to start from scratch. Foundational questions come from the firm, but Tier 1 and Tier 2 are curated by us, based on what regulators care about and what industry bodies have standardised. This saves time and ensures the questions reflect real risks - not just what firms think investors need to know.
-
Tier 2 coverage is growing all the time. We already have question banks for a wide range of financial product features - not just structured products - and we’re expanding them through industry partnerships. Our goal is to ensure that for any feature that carries comprehension risk, we have a tested, standardised way of checking whether it’s understood. That allows consistency across firms, sectors, and documents.
-
CUE runs structured reviews of Financial Ombudsman Service (FOS) complaints to learn from real-world disclosure failures. We focus on complaints linked to the type of product we’re testing - for example, structured product cases when reviewing structured product literature. These insights help shape the comprehension questions we ask and fine-tune how our Virtual Customer Personas respond. It means our testing isn’t just built from theory - it’s rooted in what has already gone wrong for real consumers, and what regulators have deemed as misunderstanding.
-
FOS complaints reveal the moments where real investors genuinely failed to understand - not just what was technically disclosed. They expose patterns: unclear risk warnings, fine-print conditions that invalidate headline claims, or capital-at-risk phrasing that left investors shocked by losses. CUE analyses upheld cases to identify these disclosure breakdowns and then builds them directly into how we test literature.
These insights shape both how our personas are calibrated (e.g. sensitivity to vague reassurances) and what comprehension questions we ask (e.g. can customers spot the real risk position, despite optimistic framing?). We’ve even built specific checks - like headline consistency and risk clarity - based entirely on FOS complaint trends.
It’s one of the ways CUE stays anchored in real-world customer failure points, not just theoretical best practice. We don’t just test if text is “clear” - we test whether it leads to the same misunderstandings we’ve seen time and again in FOS rulings.
-
Yes - and that’s exactly the point. Traditional testing is limited by cost and participant fatigue. CUE’s persona-driven system removes those limits. We can test every foundational point, all Tier 1 risks, and as many Tier 2 features as needed - across multiple investor types - without dropout, disengagement, or coaching. It’s the only way to comprehensively test understanding at scale.
-
VCPs are the comprehension engine. Each persona simulates a different type of retail investor - with unique strengths, blind spots, and cognitive traits. When they “read” a document, they interpret it like a real customer would. That means they sometimes miss nuance, flip conditionals, or over-rely on tone. CUE tracks these issues and tells you exactly where and why comprehension failed.
-
We log it - and show you exactly where the problem came from. You’ll see which investor type struggled, what question they misinterpreted, and which part of the text caused the confusion. From there, you can trigger a targeted rewrite if necessary. This transforms comprehension testing from a theoretical goal into a structured, actionable process.
READABILITY
-
Readability in CUE isn’t just about whether the language sounds simple - it’s about whether every element of the text works together to support understanding across different types of investors. We break this into three parts:
1) Traditional readability metrics - measuring how easy the text is to process on the surface, based on things like sentence length, structure, and word complexity.
2) Plain English Score - gauging how clearly the ideas are expressed, by looking at tone, flow, and whether each sentence is concrete, logically linked, and free from ambiguity.
3) Psycholinguistic Load Score - assessing how mentally demanding the words are to process, focusing on factors like abstraction, imageability, familiarity and typical reading age.
Together, these give a far deeper picture of clarity than any single readability test.
-
Because clarity isn’t one-dimensional. A document might score well on sentence length but still confuse people with abstract phrasing or unfamiliar terms. CUE’s three-part approach ensures we’re looking not just at how something is written, but how it’s likely to be processed. Traditional metrics show surface-level readability, the Plain English Score looks at tone and structure, and the Psycholinguistic Load Score assesses cognitive difficulty. Together, they tell a far more complete story.
-
Most tools rely on a single readability score and stop there. CUE goes further - we blend linguistic analysis, tone detection, and psycholinguistics into a combined scoring model, then map the result to real investor profiles. That means we don’t just say “this is hard to read” - we tell you which types of customers are likely to struggle, and whether the content meets the needs of the intended target market.
-
Because each test captures a different dimension of difficulty, and real-world comprehension problems are rarely caused by just one thing. Flesch-Kincaid gives a basic measure of sentence and word complexity, but it can miss deeper structural issues. Gunning Fog highlights where unnecessarily formal or academic language might slow readers down. SMOG focuses on polysyllabic word density - a strong indicator of technical or inaccessible vocabulary. ARI looks at character density, helping us detect visually overloaded paragraphs that may challenge readers with lower endurance or visual processing needs. And Flesch Reading Ease gives a useful overall benchmark of how approachable the text feels. On their own, each has blind spots - but together, they provide a multi-angle view that’s far more robust. That’s why CUE blends all five into its scoring model.
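For illustration, the five formulas can be blended along these lines using the open-source textstat package - the weights here are placeholders, not CUE’s actual weightings:

```python
# Illustrative blend of the five readability formulas named above, using the
# open-source `textstat` package. Weights are placeholders - CUE's actual
# weightings (including its short-text adjustments) are not public.
import textstat

def blended_grade(text: str) -> float:
    scores = {
        "fkgl": textstat.flesch_kincaid_grade(text),
        "fog":  textstat.gunning_fog(text),
        "smog": textstat.smog_index(text),
        "ari":  textstat.automated_readability_index(text),
    }
    weights = {"fkgl": 0.3, "fog": 0.25, "smog": 0.25, "ari": 0.2}  # illustrative
    return sum(scores[k] * weights[k] for k in scores)

sample = "The Fund invests mainly in UK shares. It may hold cash when markets are volatile."
print(blended_grade(sample), textstat.flesch_reading_ease(sample))
```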
-
Many of the most common readability formulas - like Flesch-Kincaid or SMOG - were created decades ago and focus on surface features like sentence length and syllables. While still useful, they have blind spots: syllable counts can be inconsistent, very short texts can be unfairly scored, and numeric or symbolic content is often ignored. CUE uses readability models such as these as a foundation but fixes their gaps - not only by including multiple formulae and adjusting their weightings accordingly (for example with short texts), but crucially by adding our Plain English Score and Psycholinguistic Load Score. This gives a modern, evidence-based view of how clear a document really is.
-
The Plain English Score looks at how clearly and directly the text is written – not just from a technical perspective, but from a structural and stylistic one. It flags issues like passive voice, cluttered or over-layered sentences, overly formal vocabulary, and poor logical flow. It also enforces rules such as Referential Cohesion, Subject Integrity and Connective Clarity. Together, these checks ensure each sentence is concrete, concise, and logically linked – surfacing issues that might not be “hard to read” in a traditional sense, but could still cause confusion or disengagement.
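As a rough sketch of how rule-based checks like these can work - the phrase list, regex, and thresholds below are illustrative, not CUE’s actual rule set:

```python
# Minimal sketch of rule-based plain-English checks. The passive-voice regex
# is a crude heuristic, and the clutter list is a tiny illustrative sample.
import re

CLUTTER = {"in the event that": "if", "prior to": "before",
           "in order to": "to", "under the terms of": "under"}
PASSIVE = re.compile(r"\b(is|are|was|were|been|being|be)\s+\w+ed\b", re.IGNORECASE)

def plain_english_flags(sentence: str) -> list[str]:
    flags = []
    if PASSIVE.search(sentence):
        flags.append("possible passive voice")
    for phrase, plain in CLUTTER.items():
        if phrase in sentence.lower():
            flags.append(f"clutter: '{phrase}' -> '{plain}'")
    if len(sentence.split()) > 25:
        flags.append("long sentence (>25 words)")
    return flags

print(plain_english_flags("In the event that the rate is reduced by the Bank, clients will be notified."))
```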
-
The Psycholinguistic Load Score (PLS) looks at how mentally demanding the words are to process - especially for Basic or vulnerable readers. It focuses on four things: abstraction (how conceptual the language is), imageability (how easy the words are to picture), word familiarity (how common or recognisable the vocabulary is), and reading age (the typical age at which a word is learnt). These elements are key for identifying hidden complexity that isn’t captured by surface readability tools - especially for audiences with lower financial literacy or cognitive load tolerance.
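In outline, a score like this can be computed by averaging per-word norms - assuming norm tables loaded from published psycholinguistic datasets; the numbers below are invented for illustration:

```python
# Sketch of a psycholinguistic load calculation. Assumes word-norm tables
# (concreteness, imageability, familiarity, age-of-acquisition) loaded from
# published datasets; the values below are invented for illustration.
NORMS = {
    "payment":  {"concreteness": 4.5, "imageability": 4.8, "familiarity": 4.9, "aoa": 7.0},
    "exposure": {"concreteness": 2.1, "imageability": 2.4, "familiarity": 3.8, "aoa": 11.5},
}

def psycholinguistic_load(words: list[str]) -> dict:
    known = [NORMS[w] for w in words if w in NORMS]
    if not known:
        return {}
    def avg(key: str) -> float:
        return sum(n[key] for n in known) / len(known)
    return {
        "abstraction": 6 - avg("concreteness"),       # low concreteness = high abstraction
        "imageability": avg("imageability"),
        "familiarity": avg("familiarity"),
        "reading_age": max(n["aoa"] for n in known),  # hardest word sets the bar
    }

print(psycholinguistic_load(["payment", "exposure"]))
```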
-
CUE recognises that investors don’t read or interpret information in a purely rational way. Behavioural finance research shows that tone, framing, and linguistic structure shape how people perceive risk, reward, and certainty. CUE translates these insights into measurable language features. Its Plain English Score flags elements known to distort investor judgement — such as passive voice (which distances responsibility), clutter (which hides key facts), excessive layering or conditional phrasing (which increases cognitive strain), and tonal imbalance (which can over-reassure or understate risk). Its Psycholinguistic Load Score complements this by testing abstraction, imageability, familiarity, and now reading-age — all of which influence how accessible a disclosure feels in practice.
By scoring and correcting these patterns, CUE helps firms reduce the behavioural traps that arise from language itself — making written communication align more closely with how investors actually process and act on information.
-
CUE’s readability model is specifically designed to reduce the kinds of linguistic complexity that can create barriers for neurodiverse or vulnerable readers. Through our Psycholinguistic Load Score, we test how abstract, imageable, and familiar the language is — key factors for people with working-memory challenges, attention differences, or low financial literacy — and apply plain-English rules to simplify tone and structure without losing accuracy. We now also include a Reading Age calibration within the PLS test, providing an accessible benchmark that estimates the minimum reading level required to process the text. This helps firms evidence whether their literature is truly suitable for its intended audience. While CUE doesn’t diagnose or label individuals, it’s calibrated to surface the text-based risks that can exclude or confuse these audiences — helping firms meet their obligations under the Consumer Duty and beyond.
-
The Cue Score is the combined result of all three readability components. It tells you, in one clear outcome, whether the text is appropriate for a Basic, Informed, or Sophisticated investor. If the score aligns with the intended audience, the document passes. If it scores higher than the target market, it’s flagged as too complex. The Cue Score gives firms a simple, defensible way to evidence that their customer communications are suitable.
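As a simple illustration of that pass/fail logic - the weights and cut-offs below are invented placeholders, not CUE’s real thresholds:

```python
# Illustrative mapping from a combined score to target-market bands.
# Weights and thresholds are invented placeholders, not CUE's real cut-offs.
def cue_band(readability: float, plain_english: float, pls: float) -> str:
    combined = 0.4 * readability + 0.3 * plain_english + 0.3 * pls
    if combined <= 40:
        return "Basic"
    if combined <= 70:
        return "Informed"
    return "Sophisticated"

def passes(document_band: str, target_market: str) -> bool:
    order = ["Basic", "Informed", "Sophisticated"]
    # A document passes if it is no more complex than its target market can handle.
    return order.index(document_band) <= order.index(target_market)

print(cue_band(35, 42, 38), passes("Basic", "Informed"))
```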
-
Not all content looks like a brochure - firms may need to test single-page summaries, short disclosures, or FAQ blurbs. CUE recognises this and adapts how scores are calculated when there’s limited content to work with. Some scoring methods are adjusted or weighted differently to make sure the result is still meaningful and fair - without over-penalising short or tightly scoped content.
-
If the Cue Score indicates that a document is too complex for its intended audience, it triggers a rewrite. CUE uses a structured editing engine that applies plain-language rules - simplifying sentence structure, improving tone, and replacing abstract or unfamiliar phrasing - while preserving legal meaning. This ensures that firms don’t just identify the problem, but have a clear way to fix it.
-
Yes - that’s a key advantage of our system. CUE doesn’t just give you a score at the end; it shows which elements affected readability, and why. Whether it’s overly formal tone, complex sentence structure, or abstract language, CUE pinpoints what needs attention and helps you make precise edits - rather than rewriting everything from scratch.
EFFICACY
-
The Efficacy pillar is designed to test one critical thing: does this product meet the needs and objectives of the target market it claims to serve? Unlike traditional demographic profiling, CUE tests this empirically - by directly surveying thousands of retail investors across the Basic, Informed, and Sophisticated spectrum. We then map their investment goals, risk appetite, and product feature preferences against the actual characteristics of the product being assessed.
-
The FCA requires firms to show that products are aligned with the “needs, characteristics and objectives of customers in the target market” (PROD 4.2.33) - and under Consumer Duty, that firms can evidence this alignment. CUE provides exactly that evidence. We create structured, needs-based assessments that directly support PRIN 2A.4 (Fair Value), PRIN 2A.5 (Consumer Understanding), and PRIN 2A.6 (Customer Support) by showing whether a product genuinely fits investor demand - not just in theory, but in practice.
-
We use large-scale surveys, typically 2,000+ respondents, drawn from a nationally representative sample of UK investors. We collect information on their financial goals (e.g. income vs. growth), time horizons, liquidity needs, risk tolerance, diversification preferences, and views on specific product features like capital protection, conditional payoffs, or ESG filters. The questions are detailed and behavioural - not generic - allowing us to build a clear picture of real-world investor preferences.
-
On the surface, CUE’s investor surveys may look similar to traditional market research - we ask about goals, risk preferences, and product features. But what makes CUE different is what we do with that data. We don’t just report on attitudes - we map investor needs to specific product designs, showing whether a product actually has a viable target market. This lets us identify which investor segments (Basic, Informed, Sophisticated) the product matches, and estimate both the size of that audience and the wallet share available. It turns market research into something actionable: a tool to confirm that demand exists - and to quantify it.
-
The process is deliberately refreshed over time. CUE runs its efficacy assessments bi-annually, using independent research partners like YouGov to survey thousands of UK retail investors. This ensures the data stays relevant and reflects current investor sentiment, market conditions, and product familiarity. While not real-time, the frequency is designed to balance robustness with practicality - keeping CUE’s product-to-need mapping accurate and representative.
ACCESSIBILITY
-
Accessibility ensures that all client-facing documents - including brochures and KIDs - are available in formats that meet the needs of vulnerable investors. With Consumer Duty placing legal obligations on manufacturers to support inclusivity, accessibility is no longer optional. CUE provides a ready-to-go solution, converting complex documents into Braille, large print, audio, and other accessible formats - with minimal effort from the manufacturer.
-
They can - but rarely do. Accessibility requests are infrequent and complex, meaning most firms don’t have a reliable process or trusted partners. That leads to delays, inconsistencies, or non-compliance. CUE solves this by offering a centralised, pre-tested service built specifically for structured product documents, with a vetted supply chain and industry-specific workflows already in place.
-
We offer Braille, audio, large print, Easy Read, British Sign Language (BSL) video, tactile graphics, and digital formats compatible with screen readers. All formats have been tested using real structured product documents - including graphs, tables, and KID templates - to ensure quality, clarity, and FCA alignment.
-
Accessibility is directly referenced across several Consumer Duty obligations - including PRIN 2A.4 (fair value), PRIN 2A.5 (consumer understanding), and PRIN 2A.6 (customer support). Firms must be able to demonstrate that their documents are accessible to all clients, including those with visual, cognitive, or linguistic challenges. CUE’s system ensures that this is done efficiently, accurately, and with full auditability.
-
No - CUE acts as a managed gateway. We work with accredited accessibility providers (such as All Formats, PIA, and A2i), ensuring every output meets strict standards. But because we understand structured products in detail, we take care of formatting, technical content, and context, so the provider receives exactly what they need to deliver a compliant result.
-
That’s changing. The FCA has made clear that firms must proactively consider the needs of vulnerable clients - not just react to requests. By embedding accessibility into product governance, CUE helps firms anticipate demand, demonstrate compliance, and ensure inclusive communication - whether or not a formal request has been made.
-
Yes. One of the key benefits of centralising accessibility through CUE is that it creates efficiencies across the market. Individual firms avoid duplicating work, and by routing requests through a common system, we reduce turnaround times, lower costs, and improve consistency - without sacrificing compliance or quality.
VIRTUAL
CUSTOMERS
-
A Virtual Customer Persona (VCP) is a simulated investor, built to reflect how real people with different levels of financial knowledge and behavioural traits actually read and interpret documents. Each persona is calibrated using real-world data - like FCA complaint trends, survey insights, and psychological research - so they react like real investors, not like generic AI.
-
CUE uses 10 distinct Virtual Customer Personas (VCPs) to simulate how different types of real investors interpret financial literature. Each persona reflects a unique mix of financial experience, confidence, comprehension style, and behavioural tendencies. Some personas represent cautious, low-experience savers; others model more confident, self-directed investors - and a few simulate highly analytical, detail-focused profiles. These personas are based on real-world segmentation models, like Experian’s Financial Strategy Segments, and calibrated using live testing data. The goal isn’t to create an average - it’s to reflect the full range of real investor behaviours, so firms can see where understanding succeeds or breaks down across the spectrum.
-
No - they’re grounded in real market segmentation. CUE uses Experian’s Financial Strategy Segments (FSS) and data from investor behaviour studies to define each persona’s traits. These include things like financial knowledge, reading style, inference ability, and risk sensitivity. Each persona behaves differently because they process and understand documents differently - just like real people do.
-
Yes - that’s exactly what they’re designed for. Every trait in the persona calibration matrix is tied to common comprehension failures, like struggling with abstract phrasing or flipping conditional logic. When a persona fails to understand something, it’s because their traits predict they would - not because the AI made a random mistake. That makes the output explainable and actionable.
-
Every document is tested using all 10 Virtual Customer Personas, and each persona is run at three different levels of knowledge/experience - Basic, Informed, and Sophisticated - unless a firm requests a specific audience. These are mapped to real-world segments using industry definitions of financial experience and product knowledge, so the test always reflects the actual target market. Each persona goes through the same comprehension questions to show where understanding breaks down.
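In outline, that test matrix behaves like a nested loop - a sketch with hypothetical names (PERSONAS, ask_persona), not CUE’s actual API:

```python
# Illustrative sketch of the test matrix described above: 10 personas, each
# run at three knowledge levels, all answering the same questions.
PERSONAS = [f"persona_{i}" for i in range(1, 11)]
LEVELS = ["Basic", "Informed", "Sophisticated"]

def ask_persona(persona: str, level: str, document: str, question: str) -> str:
    # Stand-in for the simulated read-and-answer step.
    return f"{persona} ({level}) answers: ..."

def run_matrix(document: str, questions: list[str]) -> list[dict]:
    return [{"persona": p, "level": lvl, "question": q,
             "answer": ask_persona(p, lvl, document, q)}
            for p in PERSONAS for lvl in LEVELS for q in questions]

results = run_matrix("Sample brochure text.", ["What is the key risk?"])
print(len(results))  # 10 personas x 3 levels x 1 question = 30 runs
```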
-
Basic investors typically have limited financial knowledge and little or no investment experience. These investors rely heavily on plain language, visual clarity, and short, concrete explanations. For this group, even small increases in complexity can lead to misunderstanding - which is why CUE holds materials to the highest clarity standards when they’re aimed at Basic readers.
-
Informed investors have some financial experience and a reasonable level of comfort with investment concepts. They are likely to recognise common product terms; however, they may still struggle with layered conditions, abstract risk descriptions, or overly technical language. CUE tests whether documents aimed at this group strike the right balance - informative without being overly simplified, and clear without assuming too much prior knowledge.
-
Sophisticated investors typically have substantial investment experience and understand more complex structures and risk mechanics. This group can handle detailed disclosures, conditional logic, and formal tone - but even here, CUE checks that the language is precise and unambiguous. Complexity may be acceptable, but clarity is still essential.
-
Each CUE persona is built from the ground up using a detailed matrix of behavioural and cognitive traits - things like confidence in understanding, abstract reasoning ability, familiarity with financial concepts, tolerance for document fatigue, and tone sensitivity. These traits aren’t random. They’re carefully calibrated to reflect different levels of knowledge and experience - from Basic to Informed to Sophisticated - so the personas behave like real customers in the target market.
-
Each persona’s traits are drawn from a combination of real-world behavioural insight and structured segmentation. We reference FCA research on consumer comprehension, FOS complaint patterns, and academic studies on how people process risk, complexity, and language. We also incorporate segmentation models like Experian’s Financial Strategy Segments (FSS), which help us reflect real differences in experience, confidence, and trust. Every trait is tested and refined through ongoing calibration - comparing how personas respond against real investor testing data, so we can fine-tune how each segment behaves and misunderstands in practice.
-
CUE runs each document through a set of predefined comprehension questions - but instead of treating them as a quiz, it simulates how each persona would genuinely interpret and respond in their own words. This isn’t about checking for “right answers,” it’s about seeing if the investor would naturally understand what’s being communicated based on how they think and process information.
-
We calibrate CUE’s personas using real-world testing data. That means running live comprehension trials - surveys, interviews, A/B tests - with real investors across all experience levels. We then compare how CUE’s AI personas answered the same questions. Where there's a mismatch, we fine-tune the persona’s traits and interpretation patterns until the results align. This loop of testing and calibration is what makes the personas trustworthy and grounded in behavioural reality.
-
Sure - take “Reversal Handling.” A Basic persona with low reversal handling might misinterpret a phrase like “you will not benefit if the fund underperforms” as a positive outcome. An Informed persona might catch the logic, but still miss its nuance. CUE captures these subtleties by modelling how each trait interacts with the document. Another example: a persona with low “Numerical Interpretation” may understand percentages, but fail to link them to real-world outcomes unless the document provides worked examples.
-
Yes - each trait in the calibration matrix is designed to influence how a persona interprets and responds to a document. For instance, a persona with low document endurance might disengage halfway through a long explanation, skipping or missing details toward the end. CUE reflects this by limiting how much content that persona ‘processes’ before cognitive fatigue sets in. Similarly, a persona with high confidence in understanding might provide a confident but incorrect answer - reflecting the Dunning-Kruger effect, where someone believes they’ve understood, but actually hasn’t. These behaviours aren’t random; they’re based on known investor patterns and calibrated using real-world test results.
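As a toy illustration of how a single trait could shape processing, here is document endurance sketched as a word budget - the scale and numbers are invented:

```python
# Sketch of a low document-endurance trait limiting how much of a document
# a persona "processes" before fatigue. Scale and budget are invented.
def visible_content(document: str, endurance: float) -> str:
    """endurance in [0, 1]; lower values truncate the document earlier."""
    words = document.split()
    budget = max(50, int(len(words) * endurance))  # never fewer than 50 words
    return " ".join(words[:budget])

long_doc = "word " * 1000
print(len(visible_content(long_doc, endurance=0.4).split()))  # 400 of 1000 words processed
```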
-
They’re exactly the kind of traits that matter most - especially when assessing whether someone is likely to misunderstand. Many comprehension failures don’t come from complexity alone, but from behavioural patterns like skimming, misreading tone, or assuming you’ve understood something that wasn’t actually clear. CUE builds these risks into its personas by assigning cognitive and behavioural traits that reflect how real investors behave - including those who overclaim, lose focus, or apply false familiarity. This is what allows CUE to uncover subtle comprehension risks that surface-level readability tests would completely miss.
-
It’s grounded in real testing. CUE’s persona calibration model was developed through extensive comprehension trials - including structured testing run by UKSPA - and internal analysis of how different investor types interpret financial content. While published FCA research helped shape which traits we track (like confidence, abstraction, or tone sensitivity), the actual calibration of each persona is based on observed results from thousands of simulated and real responses. We continuously refine these traits by comparing persona answers against real-world outcomes to ensure they behave like real investors, not generic AI.
-
Yes - but how they read, interpret, and respond to those questions varies by persona. A Basic investor might ask “Is my capital protected?”, while a Sophisticated persona might frame it as “What are the capital loss scenarios under a barrier breach?” Same core topic - different language, tone, and depth. CUE is designed to capture and evaluate both approaches, so firms can see how each segment really thinks.
-
No - each answer is built from the brochure content, filtered through the persona’s specific traits like confidence, inference ability, and conceptual tolerance. If a persona fails to understand something, it’s not because the AI didn’t try - it’s because that investor type wouldn’t have understood based on how the information was written. Every failure is explainable and grounded in the traits we’ve calibrated.
-
After generating the persona’s answer, CUE compares it to the expected “correct” response based on the brochure’s intended message. The system scores each response as fully correct, partially correct, or incorrect - and always cites the specific text the persona relied on. That way, you can trace every misunderstanding back to the paragraph or phrasing that caused it.
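A minimal sketch of what such a scored-response record could look like - field names and values are illustrative, not CUE’s actual schema:

```python
# Sketch of a scored persona response: verdict plus the source text the
# persona relied on. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class ScoredResponse:
    persona: str
    question: str
    answer: str
    verdict: str      # "fully_correct" | "partially_correct" | "incorrect"
    cited_text: str   # the brochure passage the persona relied on

r = ScoredResponse(
    persona="Basic-3",
    question="Is your capital at risk?",
    answer="No, my money is protected.",
    verdict="incorrect",
    cited_text="Your capital is at risk if the index falls below the barrier.",
)
print(r.verdict, "<-", r.cited_text)
```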
-
With real humans, you only get a few answers before fatigue kicks in - and they might guess, disengage, or game the test. CUE’s personas never tire, never cheat, and never overclaim. They process the entire document, simulate realistic investor thinking, and surface genuine comprehension risks at scale - showing you what would confuse each type of customer, and why.
-
Real-world investor input is built into CUE at multiple levels. Our efficacy pillar is based entirely on large-scale investor surveys - run with independent research firms like YouGov and People4Research - to identify what different segments want and need. On the comprehension side, our Virtual Customer Personas (VCPs) are calibrated using a mix of real-world Q&A trials, FCA behavioural research, and complaint data from the Financial Ombudsman Service. Every year, we run A/B calibration tests comparing persona responses with actual retail investors to keep the model aligned. And if a document fails CUE’s readability or comprehension thresholds, we escalate to live testing with real customers - first via surveys, then through 1:1 interviews if needed. CUE isn’t designed to replace human testing - it’s designed to focus it, sharpen it, and explain exactly when and where it’s needed.
AI VS REAL WORLD TESTING
-
Real-person testing is powerful but impractical. It’s costly, time-consuming, and often biased. CUE replaces this with calibrated AI personas that simulate how different types of investors - from first-time savers to seasoned professionals - actually process disclosure documents. This lets firms test dozens of questions across entire brochures without the cost, fatigue, or drop-off rates that plague traditional methods.
-
Not necessarily. Real investors often overestimate their understanding, give strategic answers, or disengage midway through. CUE’s personas don’t guess, skip, or fatigue - they replicate cognitive traits (like abstraction limits or risk sensitivity) that cause real misunderstanding. This makes the results more reliable and exhaustive, especially for high- risk or complex disclosures.
-
CUE’s personas are built from real investor segmentation models (like Experian FSS) and calibrated using real-world data - including FCA complaint themes, 1:1 interviews, and industry-wide comprehension surveys. Each persona reflects known investor traits (e.g., confidence inflation, tone sensitivity, document fatigue) to ensure behavioural realism, not generic “AI logic.”
-
CUE isn’t just “AI doing a reading test.” It uses a locked calibration matrix of 18 behavioural and cognitive traits to simulate realistic investor comprehension. The AI personas follow rules - they don’t cheat or optimise - which means failures reflect authentic processing breakdowns, not hallucinated gaps. Every answer is scored, sourced, and traceable.
-
That’s exactly where it wins. Real-world testing is often limited to summaries or a few questions due to cost. CUE can test every paragraph of a brochure, against every persona, using every relevant Tier 1 and Tier 2 question. It’s the only way to exhaustively map comprehension risk at scale - something even the largest firms can’t afford to do manually.
-
Surveys often mislead. Respondents can reverse-engineer the right answers or drop out entirely - especially Basic investors. CUE avoids this by simulating comprehension, not opinions. Its personas don’t "answer questions" - they process the material step-by-step, surfacing exactly where confusion begins and why. That’s far more diagnostic than any checkbox survey.
-
CUE doesn’t replace real-world testing - it enhances it. In fact, the AI personas are calibrated using regulator-reviewed complaint data, FCA consumer trials, and industry feedback. For firms that need extra assurance, CUE’s outputs can be benchmarked against optional real-world panels - but its core value is offering a fast, affordable, and repeatable pre-launch diagnostic layer.
AUDIT
-
CUE isn’t based on intuition or black-box AI. It’s built from the ground up using established research and internationally recognised linguistic, behavioural, and regulatory standards. Our readability scoring draws from validated models like Flesch-Kincaid, Gunning Fog, and the SMOG Index - all of which are used in regulatory, legal, and academic settings worldwide. For psycholinguistic testing, we rely on sources like the CMU Pronouncing Dictionary and the MRC Psycholinguistic Database to assess word familiarity, abstraction, and imageability. And our persona modelling is informed by FCA behavioural research, FOS complaint trends, and real-world consumer testing. Everything we do is designed to stand up to scrutiny - because helping firms evidence understanding isn’t just a technical task, it’s a regulatory obligation.
-
CUE is designed to address one of the most difficult Consumer Duty expectations: evidencing that your target market is likely to understand the information provided. Traditional methods - like internal reviews or tone-of-voice checks - don’t prove whether comprehension is actually happening. CUE goes further by testing your material through simulated investor personas and structured linguistic analysis, generating a detailed audit trail that shows exactly where risks exist, what was done to fix them, and how the outcome maps to the intended audience.
-
CUE’s readability audit doesn’t just produce a score - it gives a detailed breakdown of why a section failed and what was changed to fix it. We highlight linguistic issues (e.g. passive tone, abstract phrasing, long conditionals), show the original and rewritten version, and explain the impact of each change. We also track the number of edits, the improvement in Plain English and PLS scores, and the overall shift in the Cue Score. Crucially, we confirm which target market the final version is suitable for - so firms can prove alignment with the intended reader.
-
On the comprehension side, CUE provides a transparent record of how each Virtual Customer Persona responded to the document. We show the full answer each persona gave, tag whether it was fully correct, guessed, or incorrect, and link each answer to the specific quote in the document they relied on. This allows firms to identify patterns in misunderstanding - for example, if Basic personas consistently misread a specific paragraph - and gives concrete evidence of comprehension success or failure by segment.
-
Yes. All CUE audit outputs can be exported in structured formats - including individual rule breakdowns, rewrite history, persona responses, and scoring summaries. This allows firms to include CUE results in board reports, product governance packs, value assessments, and Consumer Duty files. It also makes it easy to show regulators that testing has been done systematically - and that the firm has evidence, not just opinion, to support its assessment of consumer understanding.
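For illustration, an exported audit record might be structured along these lines - the JSON schema below is a hypothetical example, not CUE’s actual export format:

```python
# Hypothetical exportable audit record reflecting the outputs listed above
# (rule breakdowns, rewrite history, persona responses, score shifts).
import json

audit = {
    "document": "example_brochure.pdf",
    "target_market": "Informed",
    "scores": {"before": {"cue": 68}, "after": {"cue": 44}},
    "rewrites": [{"section": "Risks", "edits": 3,
                  "issues": ["passive tone", "long conditional"]}],
    "persona_responses": [{"persona": "Basic-1", "question": "Key risk?",
                           "verdict": "partially_correct"}],
}
print(json.dumps(audit, indent=2))
```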
USERS
-
CUE is built to support both. While many firms use CUE independently to test their own documents, industry associations use it to tackle shared challenges - creating standards, improving consumer understanding, and supporting collective compliance. It’s a scalable framework that works just as well for a single product as it does for an entire sector.
-
UKSPA uses CUE across multiple pillars. It runs industry-wide efficacy surveys to understand how different investor segments engage with structured products. It has also used CUE to test comprehension of complex product features across the market, and to consolidate glossary definitions through CUE’s readability and rewrite engine - helping all firms communicate more clearly. These shared exercises help raise standards while reducing duplication.
-
The biggest advantage is scale. Associations can commission testing or rewrites across multiple firms - driving consistency, reducing cost, and creating outputs that benefit the whole market. It also makes it easier to respond to regulatory pressure with a unified approach, especially in areas like Defined Terms, target market alignment, or disclosure clarity.
-
CUE is product-agnostic. While UKSPA has led the way in structured products, the same approach applies to any financial sector - from insurance and pensions to investment platforms and savings apps. Any industry group facing challenges around comprehension, suitability, or Consumer Duty can use CUE to develop scalable, regulator-ready solutions for its members.
COMPETITORS
-
CUE takes a fundamentally different approach. While organisations like Savanta Essentials and The Wisdom Council offer valuable insights through consumer surveys or panels, their methods rely heavily on human testing - which is time-consuming, expensive, and difficult to scale.
CUE, by contrast, uses a fully calibrated AI engine to simulate how different types of real investors process and understand disclosure documents. Instead of waiting weeks for survey results, CUE provides an instant audit of which paragraphs are too complex, why, and how to fix them. And unlike traditional testing firms, CUE doesn't stop there - it rewrites the content, then retests it using investor personas to prove that comprehension has improved.
-
Savanta runs strong, human-based comprehension tests - but the process is rigid, expensive (£1,500+ per project), and doesn’t scale well. Their “Essentials” package can tell you whether a document was understood, but it won’t explain why it failed or how to improve it line-by-line.
CUE fills that gap. We combine detailed linguistic scoring (readability, plain English, psycholinguistic load), targeted rewrites, and investor persona simulations - all in one repeatable audit. In effect, CUE is the only solution that lets you test, edit, and verify understanding in a single workflow - not just measure it once.
-
Legal firms can certainly help you interpret the rules - but they’re not in the business of testing comprehension. They may advise on disclosure wording, but they don’t simulate investor understanding, run readability diagnostics, or rewrite documents in plain English.
CUE doesn't replace legal input - it complements it. Legal reviews ensure accuracy; CUE ensures that your wording is actually understood by your target market. Under Consumer Duty, both are required - accuracy alone isn’t enough.
-
Not even close. Grammarly helps polish grammar and tone, but it has no understanding of financial regulation, investor comprehension, or retail suitability. It might flag a passive sentence - but it won’t tell you that your sentence fails to meet the FCA’s expectations for a Basic investor.
Unlike Grammarly, CUE is built from the ground up for financial communications. It scores and rewrites your content using FCA-aligned metrics, then tests understanding using AI personas trained on real-world investor traits. Grammarly improves style. CUE proves comprehension.
-
Not when you consider what you're getting. CUE consolidates multiple costly processes - survey testing, plain English rewrites, persona calibration, and compliance flagging - into a single monthly service priced lower than a single consumer focus group.
Unlike survey-only solutions like Savanta or panel-based insight from The Wisdom Council, CUE offers a scalable, auditable system for testing and rewriting customer literature. We don’t just ask people if they understand - we simulate whether they actually do.
CUE runs each paragraph through multiple readability models, applies a plain-English scoring framework, and tests comprehension using calibrated AI personas representing Basic, Informed, and Sophisticated investors. The result? You don’t just comply with Consumer Duty - you prove it.
-
Most readability tools stop at surface-level fixes - flagging long sentences, passive voice, or jargon. Some use metrics like Flesch-Kincaid or Gunning Fog to score complexity, but they rarely explain why a sentence fails, or whether it’s suitable for a specific type of investor.
CUE goes further. Our readability engine combines five industry-standard models (FKGL, SMOG, ARI, etc.) with a bespoke Plain English Score (PES) and Psycholinguistic Load Score (PLS). Together, these show not just how complex your content is - but how it lands with different segments of your target market. That’s what Consumer Duty demands: content that’s not just “clear,” but understood by the intended audience.
-
Yes - many platforms offer AI-based paraphrasing or summarisation tools. But generic AI rewrites are often blind to legal context, compliance constraints, or the investor’s level of financial knowledge. In regulated industries, that’s a risk.
CUE’s rewrite engine is purpose-built for financial disclosures. It retains core legal meaning, simplifies structure, and removes complexity - without introducing risk. Every rewritten sentence is tested using simulated investor personas to verify whether comprehension has actually improved. You’re not just making things “shorter” or “friendlier” - you’re ensuring they’re truly accessible to a Basic or Informed investor.
-
They’re a useful starting point - but not the full picture. Models like Flesch-Kincaid and Gunning Fog were never designed for financial literature. They assess sentence length and syllables - not abstraction, ambiguity, or conditional complexity.
CUE builds on those foundations with domain-specific scoring. Our Plain English Score captures regulatory red flags (e.g., overengineered noun phrases, vague conditionals). Our Psycholinguistic Load Score models how different brains process the text - scoring abstraction, imageability, and word familiarity. That layered view gives you a far more accurate sense of whether a real-world investor would actually grasp what you’ve written.
-
Best practice is a great start - and many firms already apply plain English principles. But how do you prove it worked? How do you know that simplification helped a Basic investor actually understand the document?
That’s the gap CUE fills. We don’t just help you simplify - we measure the result. Our rewrite engine is backed by persona testing, so every change is tested for real-world comprehension using FCA-aligned investor profiles. That turns best effort into evidence - exactly what the FCA wants to see.
DEFINITIONS
-
Clause Count looks at how many separate ideas are packed into a single sentence. Each new clause adds processing effort — especially when commas, dashes, or “and/which/that” chains keep extending a thought. Fewer clauses mean clearer meaning.
Example:
Crowded: “The Fund, which invests mainly in UK equities and may hold cash at times of market stress, aims to deliver long-term growth.”
Clear: “The Fund invests mainly in UK shares. It may hold cash when markets are volatile. Its goal is long-term growth.”
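As a rough illustration, clause counting can be approximated by tallying clause markers - a crude keyword heuristic, not CUE’s actual method:

```python
# Crude clause-count heuristic: tally the markers that keep extending a
# sentence. A real implementation would parse syntax; this is illustrative.
import re

MARKERS = re.compile(r",|;| - |\b(which|that|and|unless|if|while)\b", re.IGNORECASE)

def rough_clause_count(sentence: str) -> int:
    return 1 + len(MARKERS.findall(sentence))

crowded = ("The Fund, which invests mainly in UK equities and may hold cash "
           "at times of market stress, aims to deliver long-term growth.")
print(rough_clause_count(crowded))  # several markers -> high clause count
```
-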
Nesting means placing one idea inside another, often using brackets, embedded clauses, or multiple conditions. It forces readers to “hold” unfinished thoughts in their memory.
Example:
Nested: “If, after reviewing market conditions, the Bank decides to increase the savings rate (unless inflation remains below target), the new rate will apply from 1 November.”
Plain: “The Bank will increase the savings rate on 1 November if market conditions justify it. This will not happen if inflation stays below target.”
-
Clutter is unnecessary language — filler words, duplications, or formal phrases that add weight but no meaning. Removing clutter improves focus and flow.
Example:
Cluttered: “In the event that the counterparty bank is unable to meet its obligations under the terms of this agreement…”
Clear: “If the counterparty bank cannot meet its obligations…”
-
Subject Integrity checks that each paragraph introduces its main idea clearly, with a full subject and verb in the first sentence. Fragments or dangling phrases can leave readers unsure what’s being discussed.
Example:
Poor: “Focused on sustainable growth across global markets. Designed to perform in all conditions.”
Clear: “The Fund focuses on sustainable growth across global markets. It’s designed to perform in all conditions.”
-
Referential Cohesion ensures that words like “it,” “this,” or “they” clearly refer to the right thing. Ambiguous references are a major cause of misunderstanding.
Example (fund factsheet):
Unclear: “It may outperform over time if conditions improve.” (What is “it”?)
Clear: “The Fund may outperform over time if market conditions improve.”
-
Reversal and Negation test whether negative or double-negative phrasing could flip the meaning. Phrases like “not unlikely” or “unless not triggered” can confuse even experienced readers.
Example:
Confusing: “Interest will not be reduced unless the borrower does not meet all payment dates.”
Clear: “We’ll only reduce the interest rate if all payments are made on time.”
-
Abstraction measures how conceptual or concrete a word is. Highly abstract language (e.g. “exposure”, “volatility”, “performance”) demands more mental effort than concrete terms like “price” or “payment”. Research in cognitive linguistics shows that abstract words activate less sensory imagery and rely more on prior knowledge — increasing comprehension load for Basic readers. CUE benchmarks abstraction using the Brysbaert et al. concreteness database and related psycholinguistic norms, giving each document a quantitative score for how tangible its vocabulary feels.
-
Imageability assesses how easily a reader can form a mental picture from the language. Sentences filled with verbs and nouns that evoke sensory imagery (“the value drops below a set line”) are easier to process than conceptual phrases (“a downward market adjustment”). CUE draws on the Lancaster Sensorimotor Norms and MRC Psycholinguistic Database to measure the degree of sensory engagement in each paragraph. High imageability supports comprehension and recall — especially for investors with lower working-memory capacity.
-
Word Familiarity captures how commonly a word appears in everyday English. Frequent, high-exposure words are processed faster and with less effort. Technical or rarely used terms can block fluency even when they’re short. CUE uses frequency norms from the SUBTLEX-UK corpus and Brysbaert et al. familiarity ratings to quantify this. The resulting score highlights where specialist or uncommon vocabulary may require explanation, definition, or substitution.
-
Reading Age estimates the typical age at which a word is first learned and comfortably understood. It gives a simple, real-world benchmark for accessibility that complements CUE’s deeper linguistic analysis. Drawing on Age-of-Acquisition (AoA) datasets from Kuperman et al. and Brysbaert & Cortese, CUE maps each text to an equivalent UK school-year level. This helps firms demonstrate whether their materials are pitched appropriately for their intended investor segment — a practical link between psycholinguistics and consumer understanding.
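As a rough sketch of that mapping, assuming per-word AoA values and the approximation that UK Year 1 corresponds to ages 5-6 (so school year ≈ age minus 5):

```python
# Sketch: map word-level age-of-acquisition (AoA) values to an approximate
# UK school-year benchmark. The age-minus-5 conversion is an illustrative
# approximation, not CUE's calibrated mapping.
def uk_school_year(aoa_values: list[float]) -> int:
    reading_age = max(aoa_values)           # hardest word sets the benchmark
    return max(1, round(reading_age) - 5)   # clamp to Year 1 minimum

print(uk_school_year([6.2, 8.9, 11.5]))  # -> 7 (approx. Year 7, ages 11-12)
```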
SECURITY
-
Yes – and importantly, CUE never asks for personal or customer-level data. We only review client-facing materials such as product brochures, marketing templates, and factsheets – the same documents your firm already distributes to investors.
Documents are stored securely for the duration of the engagement so we can prepare your analysis and audit trail, but we never share them with any third party or reuse them beyond the scope of your report. Where we use AI models for analysis, we’ve also formally opted out of AI model training – meaning nothing you send us is ever used to improve the underlying system.
-
CUE uses an OpenAI GPT environment with model training disabled and no persistent retention of your data by OpenAI. The documents are handled securely within CUE and never made available to third parties.
We run all GPT-based processes internally on our side. You simply send us your approved literature (brochures, factsheets, etc.), and we return a structured report showing readability scores, comprehension gaps, and recommended rewrites.
-
Where such policies exist, they usually apply to employees using systems like ChatGPT directly inside the organisation. They’re driven by concerns that sensitive data, client information, or regulated content could be shared with an external AI model - potentially breaching internal data policies, GDPR obligations, or confidentiality agreements.
CUE doesn’t work that way.
Your data never passes from your systems to OpenAI - only through ours. And what we review is public-facing templated literature intended for distribution to end clients, meaning it contains no personal or regulated information. That significantly reduces the risk profile, and keeps your internal systems entirely outside the process.
TONE /
BRAND VOICE
-
No – and that’s intentional. Your tone is yours. You know your voice, your audience, and your brand better than anyone else, and it’s not CUE’s place to impose or mimic it.
In practice, accurately recreating a firm’s tone would require detailed documentation, bespoke training, and likely thousands of samples across different document types. That’s not a small task - and even then, tone replication can be hit-or-miss, especially in high-stakes financial comms.
CUE’s role is different. We don’t try to replicate your tone. Instead, we focus on something even more important: making sure your tone, whatever it is, is clear, comprehensible, and accessible to your audience. If your current tone relies on jargon or abstraction that creates misunderstanding, we’ll flag and simplify those areas. But where your tone is working, we leave it intact.
You own your voice. CUE helps ensure it’s understood.
-
Not unless your tone depends on complexity that confuses your audience.
CUE’s rewrite engine only changes the specific sentences or phrases that fail our readability and comprehension thresholds. Where clarity issues are flagged, we apply plain-English rewrites that preserve the core meaning, legal intent, and document structure, while simplifying the language for better investor understanding.
If your tone is defined by clarity, accessibility, and trust, CUE enhances it. If it relies on technicality, abstraction, or layered jargon, then yes – our rewrites will soften that, because tone should never come at the expense of understanding.
You also have the option to apply a specific tone to the whole document during the rewrite process. This is done through a light-touch tone overlay, designed with strict guardrails so the way the message sounds is adjusted without undermining clarity or changing the facts.
Our goal is the same whatever tone you choose: you own your voice, and we help ensure it’s understood.
-
Yes. CUE’s tone detection layer analyses each paragraph and tags the dominant tone we observe, using a number of clearly defined tone types (e.g. Reassuring, Cautionary, Sympathetic), all tailored to financial services. For longer content like brochures or client letters, we also provide a high-level tone profile that shows whether the tone remains consistent or shifts across sections.
This isn’t about judging your style; it’s about giving you visibility. Tone detection can help firms identify unintentional tone drift (e.g. a blunt warning in an otherwise supportive message), or highlight where tonal inconsistency might affect how a message lands.
It complements our core clarity testing but it doesn’t overlap with it. Readability and comprehension testing measure whether a message is understood. Tone detection helps you check whether your message sounds the way you meant it to and whether it stays on-brand from start to finish.
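For illustration only, here is a simplified sketch of paragraph-level tone tagging and a document tone profile. The keyword cues are invented placeholders - the production tone detection layer is AI-based, not keyword-based:

```python
# Illustrative sketch: tag each paragraph with a dominant tone, then build a
# document-level profile that flags tone drift. Cue words are invented.
from collections import Counter

TONE_CUES = {
    "Reassuring": ["protected", "safeguard", "support", "rest assured"],
    "Cautionary": ["risk", "warning", "may lose", "not guaranteed"],
    "Sympathetic": ["sorry", "difficult time", "understand", "condolences"],
}

def tag_tone(paragraph: str) -> str:
    text = paragraph.lower()
    scores = {tone: sum(text.count(cue) for cue in cues)
              for tone, cues in TONE_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Neutral"

def tone_profile(paragraphs: list[str]) -> dict:
    tags = [tag_tone(p) for p in paragraphs]
    if not tags:
        return {"per_paragraph": [], "dominant": "Neutral", "drift_at": []}
    dominant = Counter(tags).most_common(1)[0][0]
    # Flag tone drift: paragraphs whose tone breaks from the dominant one.
    drift = [i for i, t in enumerate(tags) if t not in (dominant, "Neutral")]
    return {"per_paragraph": tags, "dominant": dominant, "drift_at": drift}
```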
-
Yes, if you want it to, and only in a controlled way.
Tone overlays are part of CUE’s rewrite process. By default, your document stays in its original register apart from the edits needed to make it clear and comprehensible. But there are situations where firms may want a different tone applied consistently across an entire document.
For example:
• A bereavement information pack might use an Empathetic tone to show care and understanding.
• A high-risk investment factsheet might use a Cautionary tone to make risks and consequences stand out.
When a tone is selected, CUE applies a micro-edit overlay. These are not wholesale rewrites; rather, they’re targeted, paragraph-by-paragraph adjustments that change tone without altering meaning, legal accuracy, or structure. Each overlay has its own set of rules, and all overlays share generic guardrails to protect clarity, sentence length, and investor comprehension.
Tone can shape how a message is received, and in certain contexts – especially sensitive or high-risk ones – the right tone supports better understanding and trust. It’s there if you need it, and always applied with the same precision as the rest of CUE’s rewrite process.
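A rough sketch of how overlay guardrails can be enforced in code follows. The length cap, the checks, and the apply_overlay stub are illustrative assumptions rather than CUE’s actual rules:

```python
# Sketch of overlay guardrails: after a tone overlay edits a paragraph, the
# result is only accepted if it passes every check; otherwise we keep the
# original wording. Thresholds here are invented for demonstration.
import re

MAX_SENTENCE_WORDS = 25  # illustrative guardrail, not CUE's real limit

def apply_overlay(paragraph: str, tone: str) -> str:
    """Stub for the micro-edit overlay - in practice a constrained edit step."""
    return paragraph

def passes_guardrails(original: str, edited: str) -> bool:
    # 1. No sentence may exceed the length cap.
    sentences = re.split(r"(?<=[.!?])\s+", edited)
    if any(len(s.split()) > MAX_SENTENCE_WORDS for s in sentences):
        return False
    # 2. Facts and figures must survive the edit untouched.
    if re.findall(r"\d[\d.,%]*", original) != re.findall(r"\d[\d.,%]*", edited):
        return False
    return True

def overlay_paragraph(paragraph: str, tone: str) -> str:
    edited = apply_overlay(paragraph, tone)
    # Fall back to the original wording if any guardrail fails.
    return edited if passes_guardrails(paragraph, edited) else paragraph
```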
MULTI-
JURISDICTIONS
-
Yes - CUE was designed with international scalability in mind. While the UK regulatory framework shaped its foundations, CUE’s methodology is modular and adaptable. Its core pillars - multi-language readability scoring, psycholinguistic analysis, and simulated comprehension testing via AI personas - can be applied to any market, provided local inputs (e.g. language rules or regulatory expectations) are properly integrated.
-
Not exclusively. While UK standards (like Consumer Duty and FCA Handbook references) were used to train and benchmark the system, CUE’s architecture is standards-agnostic. Its readability metrics (e.g., Flesch-Kincaid, Gunning Fog, PLS) are based on globally recognised linguistic rules. These can be swapped for local equivalents - such as GULPEASE in Italy, LIX in Sweden, or Wiener Sachtextformel in Germany - to meet jurisdiction-specific standards without changing the core engine.
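As a simplified illustration of that swap, the sketch below registers LIX and GULPEASE as interchangeable scorers behind one interface. The formulas follow their published definitions; the sentence and word splitting is deliberately naive:

```python
# Sketch of "swap the formula, keep the engine": readability scorers live in
# a registry keyed by jurisdiction, so local formulas replace one another
# without touching the surrounding pipeline.
import re

def _stats(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-zÀ-ÿ]+", text)
    return sentences, words

def lix(text: str) -> float:
    """Swedish LIX: words per sentence plus % of long words. Lower is easier."""
    sentences, words = _stats(text)
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100 * long_words / max(1, len(words))

def gulpease(text: str) -> float:
    """Italian GULPEASE: 89 + (300*sentences - 10*letters) / words. Higher is easier."""
    sentences, words = _stats(text)
    letters = sum(len(w) for w in words)
    return 89 + (300 * sentences - 10 * letters) / max(1, len(words))

SCORERS = {"SE": lix, "IT": gulpease}  # extend with FK, WSF, etc. per market

def score(text: str, jurisdiction: str) -> float:
    return SCORERS[jurisdiction](text)
```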
-
CUE can process and rewrite materials in multiple languages, provided two conditions are met:
· Local readability scoring formulas are implemented in Python (e.g., GULPEASE, Flesch-Douma, CLIB).
· A psycholinguistic dictionary exists (or can be sourced) for that language to support imageability and abstraction scoring.
For example, German readability can use WSF, and PLS-style scores can be derived from Stuttgart’s psycholinguistic datasets. CUE’s AI rewrite engine also supports localized plain-language rules for each jurisdiction.
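To illustrate the second condition, here is a minimal sketch of imageability scoring against a psycholinguistic dictionary. The word ratings shown are invented placeholders; a real deployment would load a published dataset for the target language:

```python
# Minimal sketch: look up imageability norms (rated 1-7, higher = easier to
# picture) per word, average them, and flag paragraphs that read as abstract.
import re

IMAGEABILITY = {"haus": 6.5, "vertrag": 4.1, "risiko": 3.2, "volatilität": 2.0}

def imageability_score(text: str, norms: dict = IMAGEABILITY) -> float:
    """Mean imageability of the words we have norms for; low = abstract."""
    words = [w.lower() for w in re.findall(r"[A-Za-zÀ-ÿß]+", text)]
    rated = [norms[w] for w in words if w in norms]
    return sum(rated) / len(rated) if rated else 0.0

def flag_abstract(paragraphs: list[str], threshold: float = 3.5) -> list[int]:
    # Return indices of paragraphs likely to feel abstract to readers.
    return [i for i, p in enumerate(paragraphs)
            if 0 < imageability_score(p) < threshold]
```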
-
Yes - CUE doesn’t force UK-style “plain English” onto other languages. Instead, it maps its rewrite logic to local equivalents - such as Klartext in Germany, Direct Duidelijk in the Netherlands, or Linguaggio Chiaro in Italy. The goal isn’t to import British tone - it’s to make complex material more accessible within the cultural and regulatory expectations of that country.
-
There are four components needed:
· Language-specific readability logic (e.g., syllable counting, sentence segmentation, scoring formula).
· Localized psycholinguistic norms (for abstraction, imageability, and familiarity).
· Rewrite rules tuned for local tone and formatting norms.
· Persona calibration for that market’s investor profiles.
Once these are in place, CUE can simulate comprehension for Basic, Informed, and Sophisticated investors in the local language - without needing live consumer panels.
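As a simplified sketch, these four components might be bundled into a single jurisdiction pack along the following lines (all field names are illustrative, not CUE’s actual schema):

```python
# Hypothetical bundle of the four per-jurisdiction components.
from dataclasses import dataclass
from typing import Callable

@dataclass
class JurisdictionPack:
    locale: str                                  # e.g. "de-DE"
    readability_scorer: Callable[[str], float]   # local formula, e.g. WSF
    psycholinguistic_norms: dict[str, float]     # word -> imageability rating
    rewrite_rules: list[str]                     # local plain-language rules
    persona_calibration: dict[str, dict]         # traits per investor segment

# Example wiring for a hypothetical German deployment (helper names assumed):
# german = JurisdictionPack(
#     locale="de-DE",
#     readability_scorer=wiener_sachtextformel,  # assumed local scorer
#     psycholinguistic_norms=load_norms("de"),   # assumed loader
#     rewrite_rules=["Klartext sentence caps", "active voice"],
#     persona_calibration={"Basic": {"abstraction_limit": 0.4}},
# )
```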
-
CUE allows for both options. Its AI Personas are calibrated to reflect real-world investor traits and have been validated against behavioural data. But hybrid deployments are possible - firms can layer in real participant testing if required by regulators, using local recruitment partners like Toluna, YouGov, or GfK.
-
CUE maps its outputs (e.g. readability scores, rewrites, comprehension gaps) to local regulatory expectations, using country-specific rules on fairness, transparency, and product disclosure. Where required, outputs can be reviewed by legal counsel, adapted to specific regimes (e.g., AMF guidelines in France, CONSOB in Italy), and exported in bilingual formats for internal sign-off.
-
Yes - the Cue Calibration Matrix allows for tailoring persona traits to reflect local investor behaviours, such as preference for narrative structures (Italy), formality (France), or directness (Netherlands). Personas can be tuned to reflect jurisdiction-specific risk sensitivity, document tolerance, and trust in provider tone.
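Purely as an illustration of the idea, a calibration matrix can be modelled as per-jurisdiction trait adjustments applied over a base persona. The trait names and values below are invented for demonstration:

```python
# Illustrative sketch: a base persona plus per-jurisdiction trait overrides.
BASE_PERSONA = {"risk_sensitivity": 0.5, "formality_preference": 0.5,
                "narrative_preference": 0.5, "directness_tolerance": 0.5}

CALIBRATION_MATRIX = {
    "IT": {"narrative_preference": 0.8},   # preference for narrative structures
    "FR": {"formality_preference": 0.8},   # higher formality expectations
    "NL": {"directness_tolerance": 0.9},   # comfort with direct phrasing
}

def calibrate(base: dict, jurisdiction: str) -> dict:
    persona = dict(base)
    persona.update(CALIBRATION_MATRIX.get(jurisdiction, {}))
    return persona

# calibrate(BASE_PERSONA, "NL") raises directness_tolerance to 0.9.
```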
FCA
ALIGNMENT
-
Yes. The FCA’s Consumer Duty makes customer understanding a formal regulatory obligation. Firms must demonstrate that their communications enable retail clients to understand and act on information - especially for complex or risk-bearing products. The Duty doesn’t just encourage plain English; it requires evidence that customers actually understand. CUE exists to provide that evidence. It translates regulatory expectations into practical, testable solutions - so firms can validate that their target market really gets it.
-
The FCA is clear: firms must go beyond just “clear writing.” Communications must be designed and tested for real comprehension - especially when the audience includes retail or vulnerable clients. CUE provides the structure to do that: we assess readability using five independent tests, apply plain English and psycholinguistic scoring, and simulate actual investor comprehension using realistic personas. This is what the FCA calls “robust evidence of customer understanding” (FG22/5 §8.55). We don’t just make writing clearer - we prove it works.
-
CUE is directly aligned with a range of named FCA frameworks, including:
· FG22/5 Consumer Duty Guidance, especially Section 8 on testing for customer understanding.
· PRIN 2A: Covers fair value, accessibility, and customer understanding.
· PROD 4.2.33: Requires manufacturers to identify and mitigate risks for target markets, including those with vulnerabilities.
· FS16/10: Encourages industry-wide standardisation of defined terms.
We also incorporate published guidance from regulators and independent bodies (e.g. Fairer Finance, Behavioural Insights Team research, Plain English Campaign).
-
The FCA repeatedly stresses the need to tailor communication to the actual capabilities of the target audience - especially where vulnerability may be present (FG22/5 §§4.14, 8.31–8.35). CUE addresses this in multiple ways:
· Our readability engine flags content likely to cause difficulty for low-literacy or neurodiverse audiences.
· Our accessibility module ensures documents are suitable for alternate formats (braille, audio, large print).
· Our AI personas reflect varied vulnerability traits - like document fatigue, abstraction limits, or misinterpretation of tone - which often trigger complaints.
This ensures our testing reflects real diversity in investor capability, not just theoretical targets.
-
The FCA is explicit: jargon must be avoided where possible, and where unavoidable, clearly explained (FG22/5 §8.13). CUE’s glossary module benchmarks defined terms across the industry and flags:
· Terms that are overly complex or inconsistently defined.
· Definitions that don’t align with FCA expectations of plain language.
· Opportunities to consolidate or simplify terms across products (as done by UKSPA).
We go beyond spotting jargon - we test if people actually understand it.
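To show the shape of such a check, here is a simplified sketch that compares a document’s defined terms against a benchmark glossary. The benchmark entries are invented placeholders, not real industry definitions:

```python
# Sketch of a glossary audit: flag jargon that is used but never defined, and
# definitions that diverge from an industry benchmark. Keys are lowercase.
BENCHMARK = {
    "capital at risk": "you may get back less than you invested",
    "autocall": "the product may end early on set observation dates",
}

def audit_glossary(doc_terms: dict[str, str], doc_text: str) -> dict:
    report = {"undefined": [], "divergent": []}
    text = doc_text.lower()
    for term, benchmark_def in BENCHMARK.items():
        if term in text and term not in doc_terms:
            # Jargon used in the document but never defined.
            report["undefined"].append(term)
        elif term in doc_terms and doc_terms[term].lower() != benchmark_def:
            # Defined, but inconsistently with the industry benchmark.
            report["divergent"].append(term)
    return report
```
-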
Actual testing is required. FG22/5 §8.5 and §8.55 both state that testing with representative users is essential - and that findings should be used to improve future communications. CUE enables this with:
· Pre-release readability testing.
· Comprehension testing using AI personas based on FCA consumer profiles.
· Optional live surveys and interviews to validate results in the real world.
This isn’t a “nice to have” - it’s part of the FCA’s definition of compliance under Consumer Duty.
-
Inclusive design means communications must work for as many people as possible - not just the average. CUE reflects this principle in three key ways:
· By simulating investor personas who process information differently (e.g. someone who skims vs. someone who struggles with abstraction).
· By identifying layout and visual issues that could undermine comprehension (e.g. font size, misplaced disclaimers).
· By offering alternate formats under our accessibility pillar, which aligns directly with Equality Act 2010 and PRIN 2A.6 requirements.
-
Yes. The FCA wants firms to embed processes of continuous improvement (FG22/5 §8.55). CUE supports this in two ways:
· Every review is logged in a formal CUE audit trail, showing what was tested, what was changed, why it was changed, and what impact it had.
· Firms can rerun reviews as product documents evolve - with updated scoring and persona tests to track progress.
This transforms compliance into a trackable improvement process - not a one-off box-tick.
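As a simplified illustration of what an audit-trail entry can capture, the sketch below logs what was tested, what changed, why, and the measured impact. All field names and values are illustrative:

```python
# Hypothetical audit-trail entry: one record per review, appended to a log so
# progress can be tracked as documents evolve.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AuditEntry:
    document: str
    tested_on: str
    what_changed: str
    rationale: str
    score_before: float
    score_after: float

entry = AuditEntry(
    document="High-risk factsheet v3",
    tested_on=date.today().isoformat(),
    what_changed="Rewrote risk paragraph 4 in plain English",
    rationale="Failed Basic-persona comprehension check",
    score_before=41.0,
    score_after=63.0,
)

with open("cue_audit_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```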