FAQs

PROCESS

  •  CUE is a modular framework designed to support customer understanding across a firm’s customer communications.

    Firms can use CUE at different points in their lifecycle, depending on what they need to evidence. Some use it to review draft product literature before launch. Others apply it to existing brochures, terms, marketing materials, onboarding documents, or customer communications to assess whether they remain clear and appropriate for their target market.

    While the framework is modular, CUE typically focuses on three connected areas: identifying what customers need to understand, assessing whether the language used supports that understanding, and evidencing how well the communication would be understood by different types of retail investors.

  • That’s where readability and comprehension take over. CUE’s readability pillar reviews (and rewrites, if needed) any client-facing content - brochures, factsheets, FAQs - to ensure the language is appropriate for the intended audience. We test for structural clarity, plain English, and cognitive load, tailored to the target market segment.

  • That’s handled through CUE’s comprehension pillar. We simulate investor understanding using our Virtual Customer Personas (VCPs) - AI models calibrated to mirror how real investors read, interpret, and sometimes misinterpret documents. If a brochure doesn’t pass, we escalate to real-world surveys, and if needed, 1:1 interviews to validate what’s being misunderstood and why.

  • Yes - once content is live, accessibility becomes the focus. CUE ensures that core documents are available in alternative formats where needed - including large print, audio, or Braille - so firms can meet their obligations to support vulnerable clients or customers with additional accessibility needs.

  • You can pick and choose. Some firms only want to test comprehension. Others come in to assess a new product idea or commission rewrites on difficult paragraphs. The system is built to slot in wherever it’s needed - but when used in full, it gives you the clearest picture of how well a product is likely to be understood, by whom, and why.

READABILITY

  • The Plain English Score looks at how clearly and directly the text is written – not just from a technical perspective, but from a structural and stylistic one. It flags issues like passive voice, cluttered or over-layered sentences, overly formal vocabulary, and poor logical flow. It also enforces rules such as Referential Cohesion, Subject Integrity and Connective Clarity. Together, these checks ensure each sentence is concrete, concise, and logically linked – surfacing issues that might not be “hard to read” in a traditional sense but could still cause confusion or disengagement.
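
    As a rough sketch of how rule-based checks like these can work, the snippet below flags passive voice, over-long sentences, and over-layered clauses. The regular expression and thresholds are illustrative assumptions for this sketch, not CUE's actual rule set:

```python
import re

# Illustrative heuristics only - the pattern and limits below are
# assumptions for this sketch, not CUE's production rules.
PASSIVE_RE = re.compile(r"\b(is|are|was|were|been|being|be)\s+\w+(ed|en)\b", re.I)

def plain_english_flags(sentence: str, max_words: int = 25) -> list[str]:
    """Return the plain-English issues detected in a single sentence."""
    flags = []
    if PASSIVE_RE.search(sentence):
        flags.append("passive voice")
    if len(sentence.split()) > max_words:
        flags.append("over-long sentence")
    if sentence.count(",") >= 3:
        flags.append("over-layered clauses")
    return flags
```

    A sentence like "The fee is deducted by the manager." would be flagged for passive voice, while its active rewrite ("We deduct the fee.") passes cleanly.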

  • The Psycholinguistic Load Score (PLS) looks at how mentally demanding the words are to process - especially for Basic or vulnerable readers. It focuses on four things: abstraction (how conceptual the language is), imageability (how easy the words are to picture), word familiarity (how common or recognisable the vocabulary is), and Reading Age (the typical age at which a word is learnt). These elements are key for identifying hidden complexity that isn’t captured by surface readability tools - especially for audiences with lower financial literacy or cognitive load tolerance.
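
    A minimal sketch of how word-level ratings might roll up into a single load figure (the weights and the 0-1 rating scale are invented for illustration; CUE's real model and data sources differ):

```python
# Hypothetical weights - a higher rating means harder to process.
WEIGHTS = {"abstraction": 0.3, "imageability": 0.25,
           "familiarity": 0.25, "reading_age": 0.2}

def word_load(ratings: dict[str, float]) -> float:
    """Combine one word's 0-1 difficulty ratings into a weighted load score."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

def psycholinguistic_load(word_ratings: list[dict[str, float]]) -> float:
    """Average the per-word load across a passage."""
    return sum(word_load(r) for r in word_ratings) / len(word_ratings)
```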

  • CUE’s Psycholinguistic Load Score does more than measure abstraction, familiarity and imageability. It operates as a structured rewrite framework that assesses every word in context and tests whether clearer alternatives would reduce cognitive effort without altering meaning.

    As part of the CUE Readability pillar, we operate an internal lexical simplification library of over 450 identified high-friction terms and constructions commonly found in financial documentation. These include archaic drafting conventions, unnecessary formality, Latinate expressions, and complex financial terminology. Where appropriate, terms are replaced or clarified using clearer, everyday language, subject to strict semantic guardrails to ensure legal and regulatory precision is preserved.

    This substitution library forms one component of the wider PLS process. The system also evaluates word familiarity, abstraction level, imageability, sentence structure and conceptual density, ensuring that simplification is evidence-based rather than stylistic.
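
    The substitution step can be pictured as a dictionary lookup wrapped in guardrails. The pairs and protected terms below are invented examples (the real library holds over 450 vetted entries, and the real guardrails are more sophisticated than this sketch):

```python
# Example substitutions - illustrative only, not CUE's library.
SUBSTITUTIONS = {
    "prior to": "before",
    "in the event that": "if",
    "utilise": "use",
}
# Semantic guardrail: terms whose precise legal meaning must be preserved,
# so sentences containing them are left for human review instead.
PROTECTED_TERMS = {"capital at risk", "counterparty"}

def simplify(sentence: str) -> str:
    if any(term in sentence.lower() for term in PROTECTED_TERMS):
        return sentence  # leave legally sensitive wording untouched
    for complex_term, plain in SUBSTITUTIONS.items():
        sentence = sentence.replace(complex_term, plain)
    return sentence
```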

  • CUE recognises that investors don’t read or interpret information in a purely rational way. Behavioural finance research shows that tone, framing, and linguistic structure shape how people perceive risk, reward, and certainty. CUE translates these insights into measurable language features. Its Plain English Score flags elements known to distort investor judgement — such as passive voice (which distances responsibility), clutter (which hides key facts), excessive layering or conditional phrasing (which increases cognitive strain), and tonal imbalance (which can over-reassure or understate risk). Its Psycholinguistic Load Score complements this by testing abstraction, imageability, familiarity, and now reading-age — all of which influence how accessible a disclosure feels in practice.
    By scoring and correcting these patterns, CUE helps firms reduce the behavioural traps that arise from language itself — making written communication align more closely with how investors actually process and act on information.

  • CUE’s readability model is specifically designed to reduce the kinds of linguistic complexity that can create barriers for neurodiverse or vulnerable readers. Through our Psycholinguistic Load Score, we test how abstract, imageable, and familiar the language is — key factors for people with working-memory challenges, attention differences, or low financial literacy — and apply plain-English rules to simplify tone and structure without losing accuracy. We now also include a Reading Age calibration within the PLS test, providing an accessible benchmark that estimates the minimum reading level required to process the text. This helps firms evidence whether their literature is truly suitable for its intended audience. While CUE doesn’t diagnose or label individuals, it’s calibrated to surface the text-based risks that can exclude or confuse these audiences — helping firms meet their obligations under the Consumer Duty and beyond.

  • The Cue Score is the combined result of all three readability components. It tells you, in one clear outcome, whether the text is appropriate for a Basic, Informed, or Sophisticated investor. If the score aligns with the intended audience, the document passes. If it scores higher than the target market, it’s flagged as too complex. The Cue Score gives firms a simple, defensible way to evidence that their customer communications are suitable.
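
    Conceptually, the pass/fail logic looks like the sketch below. The equal weighting and band thresholds are illustrative assumptions; only the three investor bands themselves come from the FAQ:

```python
# Hypothetical clarity bands: a higher combined score means clearer text,
# so it suits a less experienced audience. Thresholds are invented.
BANDS = [("Basic", 80.0), ("Informed", 60.0), ("Sophisticated", 0.0)]
CLARITY_ORDER = ["Sophisticated", "Informed", "Basic"]  # increasing clarity

def cue_band(readability: float, plain_english: float, pls: float) -> str:
    """Map three 0-100 component scores to the clearest band they reach."""
    combined = (readability + plain_english + pls) / 3
    return next(band for band, floor in BANDS if combined >= floor)

def passes_target(scores: tuple[float, float, float], target: str) -> bool:
    """Pass if the document is at least as clear as the target market needs."""
    return CLARITY_ORDER.index(cue_band(*scores)) >= CLARITY_ORDER.index(target)
```

    Under these made-up thresholds, a document scoring (70, 65, 60) lands in the Informed band: acceptable for an Informed target market, flagged as too complex for Basic.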

  • Not all content looks like a brochure - firms may need to test single-page summaries, short disclosures, or FAQ blurbs. CUE recognises this and adapts how scores are calculated when there’s limited content to work with. Some scoring methods are adjusted or weighted differently to make sure the result is still meaningful and fair - without over-penalising short or tightly scoped content.

  • If the Cue Score indicates that a document is too complex for its intended audience, it triggers a rewrite. CUE uses a structured editing engine that applies plain-language rules - simplifying sentence structure, improving tone, and replacing abstract or unfamiliar phrasing - while preserving legal meaning. This ensures that firms don’t just identify the problem but have a clear way to fix it.

  • Yes - that’s a key advantage of our system. CUE doesn’t just give you a score at the end; it shows which elements affected readability, and why. Whether it’s overly formal tone, complex sentence structure, or abstract language, CUE pinpoints what needs attention and helps you make precise edits - rather than rewriting everything from scratch.

ACCESSIBILITY

  • Accessibility ensures that all client-facing documents - including brochures and KIDs - are available in formats that meet the needs of vulnerable investors. With Consumer Duty placing legal obligations on manufacturers to support inclusivity, accessibility is no longer optional. CUE provides a ready-to-go solution, converting complex documents into Braille, large print, audio, and other accessible formats - with minimal effort from the manufacturer.

  • They can - but rarely do. Accessibility requests are infrequent and complex, meaning most firms don’t have a reliable process or trusted partners. That leads to delays, inconsistencies, or non-compliance. CUE solves this by offering a centralised, pre-tested service built specifically for structured product documents, with a vetted supply chain and industry-specific workflows already in place.

  • We offer Braille, audio, large print, Easy Read, British Sign Language (BSL) video, tactile graphics, and digital formats compatible with screen readers. All formats have been tested using real structured product documents - including graphs, tables, and KID templates - to ensure quality, clarity, and FCA alignment.

  • Accessibility is directly referenced across several Consumer Duty obligations - including PRIN 2A.4 (fair value), PRIN 2A.5 (consumer understanding), and PRIN 2A.6 (customer support). Firms must be able to demonstrate that their documents are accessible to all clients, including those with visual, cognitive, or linguistic challenges. CUE’s system ensures that this is done efficiently, accurately, and with full auditability.

  • No - CUE acts as a managed gateway. We work with accredited accessibility providers (such as All Formats, PIA, and A2i), ensuring every output meets strict standards. But because we understand structured products in detail, we take care of formatting, technical content, and context, so the provider receives exactly what they need to deliver a compliant result.

  • That’s changing. The FCA has made clear that firms must proactively consider the needs of vulnerable clients - not just react to requests. By embedding accessibility into product governance, CUE helps firms anticipate demand, demonstrate compliance, and ensure inclusive communication - whether or not a formal request has been made.

  • Yes. One of the key benefits of centralising accessibility through CUE is that it creates efficiencies across the market. Individual firms avoid duplicating work, and by routing requests through a common system, we reduce turnaround times, lower costs, and improve consistency - without sacrificing compliance or quality.

COMPREHENSION

  • Comprehension is about testing whether investors genuinely understand the information they’re being given. In CUE, this means identifying the key facts, risks, and product mechanics that a document is intended to communicate - and then using our Virtual Customer Personas (VCPs) to test if those messages actually land. It’s not about whether the words are readable - it’s about whether they result in real understanding.

  • Readability looks at how easy the language is to process - sentence length, jargon, abstraction, and tone. It asks: Is the text written in a way that promotes understanding? Comprehension goes further. It asks: Did the investor actually understand it? You can pass a readability test and still fail a comprehension check - especially with vulnerable or Basic investors who might misinterpret or miss key details.

  • CUE starts by mapping out the intended messages of a document - what it’s trying to explain or disclose. We then test each message using our calibrated personas, who process the content and respond in their own words. Their answers are evaluated against what the brochure should have conveyed. This shows whether each investor type would likely walk away with the right understanding - or a dangerous misconception.

  • Foundational comprehension is defined by the firm - it captures what they believe the customer should walk away understanding. Before testing begins, we work with the client to document the key messages, risks, and features they believe the literature is designed to convey. This becomes the benchmark we test against. It's the only layer that’s bespoke to each document - and it's what ensures we’re measuring understanding of what the firm actually intended to communicate.

  • Tier 2 questions are also predefined by CUE, but they’re targeted. Each one is linked to a specific product feature or disclosure - like kick-out payoffs, emerging markets, collateralisation, or death benefits. These are only triggered if the feature appears in the document. That means different Tier 2 question sets are used depending on what you’re testing - whether it’s a brochure, a suitability letter, or a valuation report. They’re focused, scalable, and highly specific.

  • Absolutely. The foundational layer is always firm-defined - it reflects what they believe is material. Tier 1 and Tier 2 are then layered on top by CUE, giving firms a structured way to identify whether their intended messages landed, and whether other known risk areas were also clearly understood. This combination ensures both firm-specific goals and industry-wide risks are tested - in one integrated process.

  • Tier 1 and Tier 2 questions are built from deep industry insight - they’re not created ad hoc or guessed by AI. Tier 1 questions come from our research into FCA guidance, FOS complaint themes, academic studies, and behavioural finance reports. They target the core areas regulators consistently flag as being misunderstood - like charges, capital exposure, or product conditions. Tier 2 questions, on the other hand, are developed at the industry level - working with bodies like UKSPA and drawing from product design norms - to cover feature-specific disclosures that need to be tested consistently.

  • Tier 1 is broad and thematic - we create it by looking across regulatory sources, complaint data, and known patterns of misunderstanding. These are the questions every firm should be asking, regardless of what’s in their foundational brief. Tier 2 is narrower and technical - it’s developed by analysing common product features and risks across the market. For example, we might have a Tier 2 question bank for early withdrawal penalties, performance caps, or switching restrictions - each triggered only when relevant language appears in the document.
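
    The triggering mechanism can be sketched as a feature-to-questions map scanned against the document text. The trigger phrases and question texts here are invented placeholders, not CUE's actual question banks:

```python
# Hypothetical Tier 2 bank: feature trigger -> question set.
TIER2_BANK = {
    "early withdrawal": ["What charge applies if you exit before maturity?"],
    "performance cap": ["Is there a limit on the return you can receive?"],
    "kick-out": ["What happens if the product kicks out early?"],
}

def tier2_questions(document: str) -> list[str]:
    """Select only the Tier 2 questions whose feature appears in the text."""
    doc = document.lower()
    return [q for feature, questions in TIER2_BANK.items()
            if feature in doc for q in questions]
```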

  • No - that’s the value of CUE. We bring a prebuilt library of rigorously developed questions, so firms don’t have to start from scratch. Foundational questions come from the firm, but Tier 1 and Tier 2 are curated by us, based on what regulators care about and what industry bodies have standardised. This saves time and ensures the questions reflect real risks - not just what firms think investors need to know.

  • Tier 2 coverage is growing all the time. We already have question banks for a wide range of financial product features - not just structured products - and we’re expanding them through industry partnerships. Our goal is to ensure that for any feature that carries comprehension risk, we have a tested, standardised way of checking whether it’s understood. That allows consistency across firms, sectors, and documents.

  • CUE runs structured reviews of Financial Ombudsman Service (FOS) complaints to learn from real-world disclosure failures. We focus on complaints linked to the type of product we’re testing - for example, structured product cases when reviewing structured product literature. These insights help shape the comprehension questions we ask and fine-tune how our Virtual Customer Personas respond. It means our testing isn’t just built from theory - it’s rooted in what has already gone wrong for real consumers, and what regulators have treated as misunderstanding.

  • FOS complaints reveal the moments where real investors genuinely failed to understand - not just what was technically disclosed. They expose patterns: unclear risk warnings, fine-print conditions that invalidate headline claims, or capital-at-risk phrasing that left investors shocked by losses. CUE analyses upheld cases to identify these disclosure breakdowns and then builds them directly into how we test literature.

    These insights shape both how our personas are calibrated (e.g. sensitivity to vague reassurances) and what comprehension questions we ask (e.g. can customers spot the real risk position, despite optimistic framing?). We’ve even built specific checks - like headline consistency and risk clarity - based entirely on FOS complaint trends.

    It’s one of the ways CUE stays anchored in real-world customer failure points, not just theoretical best practice. We don’t just test if text is “clear” - we test whether it leads to the same misunderstandings we’ve seen time and again in FOS rulings.

  • Yes - and that’s exactly the point. Traditional testing is limited by cost and participant fatigue. CUE’s persona-driven system removes those limits. We can test every foundational point, all Tier 1 risks, and as many Tier 2 features as needed - across multiple investor types - without dropout, disengagement, or coaching. It’s the only way to comprehensively test understanding at scale.

  • VCPs are the comprehension engine. Each persona simulates a different type of retail investor - with unique strengths, blind spots, and cognitive traits. When they “read” a document, they interpret it like a real customer would. That means they sometimes miss nuance, flip conditionals, or over-rely on tone. CUE tracks these issues and tells you exactly where and why comprehension failed.

  • We log it - and show you exactly where the problem came from. You’ll see which investor type struggled, what question they misinterpreted, and which part of the text caused the confusion. From there, you can trigger a targeted rewrite if necessary. This transforms comprehension testing from a theoretical goal into a structured, actionable process.

VIRTUAL CUSTOMERS

  •  CUE uses a two-layer persona framework.

    At the core of all comprehension testing are three primary Virtual Customer Personas representing the industry-standard levels of retail investor capability: Basic, Informed, and Sophisticated. These are the personas used to test every comprehension question and determine whether language would be understood by the intended target market.

    Alongside this, CUE maintains a wider set of 10 research personas, informed by real-world segmentation models such as Experian’s Financial Strategy Segments. These research personas are not used to test documents directly. Instead, they help CUE identify the kinds of questions and misunderstandings that different types of real investors are likely to have, which then informs the design of our comprehension question banks.

    This approach ensures that testing remains consistent and regulator-aligned, while still being grounded in the diversity of real investor behaviour.

  • Virtual Customer Personas are calibrated using CUE’s readability, plain-English, and psycholinguistic scoring framework. For each persona, CUE tests whether the language used is appropriate for that level of financial knowledge and reading capability, based on established research into readability, cognitive load, and consumer understanding reflected in FCA guidance and academic studies. This ensures that differences between personas reflect genuine differences in what each audience is likely to understand from the same text.

  • Every document tested in CUE is assessed using the three primary Virtual Customer Personas: Basic, Informed, and Sophisticated.

    All comprehension questions, whether they come from the firm’s own foundational concepts or from CUE’s Tier 1 and Tier 2 question banks, are tested against these three personas. This ensures that results are directly comparable across documents and clearly mapped to the intended target market.

    CUE’s wider set of research personas is used upstream, to help shape and refine the questions that are asked, but they are not used in live document testing. This separation avoids unnecessary complexity while keeping the testing framework firmly rooted in real-world investor behaviour.

  •  CUE maintains a set of Virtual Customer Research Personas that are used to inform the design of its comprehension question banks, not to test documents directly.

    These research personas reflect different real-world investor profiles identified through market segmentation models such as Experian’s Financial Strategy Segments. They help CUE anticipate the types of questions, misunderstandings, and decision points that different investors are likely to encounter when reading financial communications.

    Importantly, these research personas are used only during question development. All live document testing is carried out using CUE’s three primary personas: Basic, Informed, and Sophisticated.

  • CUE does not rely on random or generic AI behaviour. Instead, misunderstanding is identified when the language used in a document exceeds what a given target market is likely to understand.

    Each Virtual Customer Persona applies the same comprehension questions, but with different readability, plain-English, and psycholinguistic thresholds. Where the explanation of a concept is too abstract, too complex, or too linguistically demanding for that audience, CUE records this as a partial or failed understanding.

    This means that when a persona struggles, the result is explainable and repeatable, and can be traced back to specific features of the language used, rather than unpredictable AI behaviour.

  • Every document is tested using the three primary Virtual Customer Personas - Basic, Informed, and Sophisticated - unless a firm requests a specific audience. These are mapped to real-world segments using industry definitions of financial experience and product knowledge, so the test always reflects the actual target market. Each persona goes through the same comprehension questions to show where understanding breaks down.

  • Basic investors typically have limited financial knowledge and little or no investment experience. These investors rely heavily on plain language, visual clarity, and short, concrete explanations. For this group, even small increases in complexity can lead to misunderstanding - which is why CUE holds materials to the highest clarity standards when they’re aimed at Basic readers.

  • Informed investors have some financial experience and a reasonable level of comfort with investment concepts. They are likely to recognise common product terms, however, they may still struggle with layered conditions, abstract risk descriptions, or overly technical language. CUE tests whether documents aimed at this group strike the right balance - informative without being overly simplified, and clear without assuming too much prior knowledge.

  • Sophisticated investors typically have substantial investment experience and understand more complex structures and risk mechanics. This group can handle detailed disclosures, conditional logic, and formal tone - but even here, CUE checks that the language is precise and unambiguous. Complexity may be acceptable, but clarity is still essential.

  • CUE runs each document through a set of predefined comprehension questions - but instead of treating them as a quiz, it simulates how each persona would genuinely interpret and respond in their own words. This isn’t about checking for “right answers,” it’s about seeing if the investor would naturally understand what’s being communicated based on how they think and process information.

  •  CUE’s personas are calibrated using established research into readability, cognitive load, and consumer understanding, much of which underpins FCA guidance and industry good practice. Rather than attempting to recreate this research from scratch, CUE operationalises it by applying consistent, testable thresholds to real client communications.

    Where simulated testing highlights potential gaps in understanding, CUE can support further investigation, including targeted real-world surveys or interviews if a firm chooses to validate those findings. Over time, insights from this work inform how CUE’s scoring and thresholds are refined.

    This approach ensures the personas are grounded in proven behavioural research, while allowing calibration to improve as additional evidence is gathered.

  • No - each answer is built from the brochure content, filtered through the persona’s specific traits. If a persona fails to understand something, it’s not because the AI didn’t try - it’s because that investor type wouldn’t have understood, based on whether the information could be found and how it was written. Every failure is explainable and grounded in the traits you’ve calibrated.

  • After generating the persona’s answer, CUE compares it to the expected “correct” response based on the brochure’s intended message. The system scores each response as fully correct, partially correct, or incorrect - and always cites the specific text the persona relied on. That way, you can trace every misunderstanding back to the paragraph or phrasing that caused it.
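
    The grading step described above can be sketched as follows. Real evaluation is semantic rather than keyword-based; simple substring matching stands in for it here, and the field names are invented for illustration:

```python
def score_answer(persona_answer: str, expected_points: list[str],
                 cited_text: str) -> dict:
    """Grade a persona's answer against the expected messages and keep
    a citation of the source text the persona relied on."""
    answer = persona_answer.lower()
    matched = [p for p in expected_points if p.lower() in answer]
    if len(matched) == len(expected_points):
        verdict = "fully correct"
    elif matched:
        verdict = "partially correct"
    else:
        verdict = "incorrect"
    return {"verdict": verdict, "matched": matched, "cited_text": cited_text}
```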

  • With real humans, you only get a few answers before fatigue kicks in - and they might guess, disengage, or game the test. CUE’s personas never tire, never cheat, and never overclaim. They process the entire document, simulate realistic investor thinking, and surface genuine comprehension risks at scale - showing you what would confuse each type of customer, and why.

  • Real-world investor input is built into CUE at multiple levels. On the comprehension side, our Virtual Customer Personas (VCPs) are calibrated using a mix of real-world Q&A trials, FCA behavioural research, and complaint data from the Financial Ombudsman Service. Every year, we run A/B calibration tests comparing persona responses with actual retail investors to keep the model aligned. And if a document fails CUE’s readability or comprehension thresholds, we escalate to live testing with real customers - first via surveys, then through 1:1 interviews if needed. CUE isn’t designed to replace human testing - it’s designed to focus it, sharpen it, and explain exactly when and where it’s needed.

AI VS REAL-WORLD TESTING

  • Real-person testing is powerful but impractical. It’s costly, time-consuming, and often biased. CUE replaces this with calibrated AI personas that simulate how different types of investors - from first-time savers to seasoned professionals - actually process disclosure documents. This lets firms test dozens of questions across entire brochures without the cost, fatigue, or drop-off rates that plague traditional methods.

  • Not necessarily. Real investors often overestimate their understanding, give strategic answers, or disengage midway through. CUE’s personas don’t guess, skip, or fatigue - they replicate cognitive traits (like abstraction limits or risk sensitivity) that cause real misunderstanding. This makes the results more reliable and exhaustive, especially for high-risk or complex disclosures.

  • CUE’s personas are designed to be directionally accurate rather than predictive of any individual investor. They reflect well-established differences in how retail customers with different levels of financial knowledge and reading capability are likely to understand written information.

    Accuracy in CUE comes from applying consistent readability, plain-English, and psycholinguistic thresholds grounded in FCA guidance and academic research, rather than attempting to replicate individual behaviour. Where language exceeds what a target market is likely to cope with, CUE reliably flags a risk of misunderstanding.

    This makes the results repeatable, explainable, and suitable for evidencing customer understanding, even though they are not a substitute for live testing with real investors.

  • CUE is designed to avoid the common weaknesses of general-purpose AI tools. Rather than asking a model to make broad judgements in a single step, CUE breaks the task of assessing customer understanding into many small, rule-based checks.

    Each step is constrained by predefined questions, target-market thresholds, and evidence requirements. AI outputs are not free form: they must be grounded in specific source text, scored against clear criteria, and recorded in an auditable trail showing how and why each conclusion was reached.

    This approach avoids heuristic guessing and hallucination, and ensures that any identified misunderstanding can be traced back to specific features of the language used, rather than opaque model behaviour.

  • That’s exactly where it wins. Real-world testing is often limited to summaries or a few questions due to cost. CUE can test every paragraph of a brochure, against every persona, using every relevant Tier 1 and Tier 2 question. It’s the only way to exhaustively map comprehension risk at scale - something even the largest firms can’t afford to do manually.

  • Surveys often mislead. Respondents can reverse-engineer the right answers or drop out entirely - especially Basic investors. CUE avoids this by simulating comprehension, not opinions. Its personas don’t "answer questions" - they process the material step-by-step, surfacing exactly where confusion begins and why. That’s far more diagnostic than any checkbox survey.

TONE / BRAND VOICE

  • No – and that’s intentional. Your tone is yours. You know your voice, your audience, and your brand better than anyone else, and it’s not CUE’s place to impose or mimic it.

    In practice, accurately recreating a firm’s tone would require detailed documentation, bespoke training, and likely thousands of samples across different document types. That’s not a small task, and even then, tone replication can be hit-or-miss, especially in high-stakes financial comms.

    CUE’s role is different. We don’t try to replicate your tone. Instead, we focus on something even more important: making sure your tone, whatever it is, is clear, comprehensible, and accessible to your audience. If your current tone relies on jargon or abstraction that creates misunderstanding, we’ll flag and simplify those areas. But where your tone is working, we leave it intact.

    You own your voice. CUE helps ensure it’s understood.

  • Not unless your tone depends on complexity that confuses your audience.

    CUE’s rewrite engine only changes the specific sentences or phrases that fail our readability thresholds. Where clarity issues are flagged, we apply plain-English rewrites that preserve the core meaning, legal intent, and document structure, while simplifying the language for better investor understanding.

    If your tone is defined by clarity, accessibility, and trust, CUE enhances it. If it relies on technicality, abstraction, or layered jargon, then yes – our rewrites will soften that, because tone should never come at the expense of understanding.

    You also have the option to apply a specific tone to the whole document during the rewrite process. This is done through a light-touch tone overlay, designed with strict guardrails so the way the message sounds is adjusted without undermining clarity or changing the facts.

    Our goal is the same whatever tone you choose: you own your voice, and we help ensure it’s understood.

  • Yes. CUE’s tone detection layer analyses each paragraph and tags the dominant tone we observe, using a number of clearly defined tone types (e.g. Reassuring, Cautionary, Sympathetic), all tailored to financial services. For longer content like brochures or client letters, we also provide a high-level tone profile that shows whether the tone remains consistent or shifts across sections.

    This isn’t about judging your style - it’s about giving you visibility. Tone detection can help firms identify unintentional tone drift (e.g. a blunt warning in an otherwise supportive message), or highlight where tonal inconsistency might affect how a message lands.

    It complements our core clarity testing, but it doesn’t overlap with it. Readability measures whether something is understood. Tone detection helps you check whether your message sounds the way you meant it to, and whether it stays on-brand from start to finish.

AUDIT

  • CUE isn’t based on intuition or black-box AI. It’s built from the ground up using established research and internationally recognised linguistic, behavioural, and regulatory standards. Our readability scoring draws from validated models like Flesch-Kincaid, Gunning Fog, and the SMOG Index - all of which are used in regulatory, legal, and academic settings worldwide. For psycholinguistic testing, we rely on sources like the CMU Pronouncing Dictionary and the MRC Psycholinguistic Database to assess word familiarity, abstraction, and imageability. And our persona modelling is informed by FCA behavioural research, FOS complaint trends, and real-world consumer testing. Everything we do is designed to stand up to scrutiny - because helping firms evidence understanding isn’t just a technical task, it’s a regulatory obligation.
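    As a simplified sketch of how formula-based readability scoring works - not CUE’s production implementation, which uses a pronouncing dictionary for syllables rather than the vowel-group heuristic shown here - the published Flesch-Kincaid grade-level formula can be computed directly from word, sentence, and syllable counts:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups. Production scorers use a
    # pronouncing dictionary (e.g. CMU) for accurate syllable counts.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Published formula: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

simple = "The Fund invests in UK shares. It aims for growth."
dense = ("Notwithstanding prevailing macroeconomic conditions, the "
         "instrument's performance characteristics remain contingent "
         "upon counterparty solvency considerations.")
print(flesch_kincaid_grade(simple))  # low grade level
print(flesch_kincaid_grade(dense))   # far higher grade level
```

    A higher grade level means more years of education are typically needed to follow the text.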

  • CUE is designed to address one of the most difficult Consumer Duty expectations: evidencing that your target market is likely to understand the information provided. Traditional methods - like internal reviews or tone-of-voice checks - don’t prove whether comprehension is actually happening. CUE goes further by testing your material through simulated investor personas and structured linguistic analysis, generating a detailed audit trail that shows exactly where risks exist, what was done to fix them, and how the outcome maps to the intended audience.

  • CUE’s readability audit doesn’t just produce a score - it gives a detailed breakdown of why a section failed and what was changed to fix it. We highlight linguistic issues (e.g. passive voice, abstract phrasing, long conditionals), show the original and rewritten version, and explain the impact of each change. We also track the number of edits, the improvement in Plain English and PLS scores, and the overall shift in the CUE Score. Crucially, we confirm which target market the final version is suitable for - so firms can prove alignment with the intended reader.

  • On the comprehension side, CUE provides a transparent record of how each Virtual Customer Persona responded to the document. We show the full answer each persona gave, tag whether it was fully correct, guessed, or incorrect, and link each answer to the specific quote in the document they relied on. This allows firms to identify patterns in misunderstanding - for example, if Basic personas consistently misread a specific paragraph - and gives concrete evidence of comprehension success or failure by segment.

  • Yes. All CUE audit outputs can be exported in structured formats - including individual rule breakdowns, rewrite history, persona responses, and scoring summaries. This allows firms to include CUE results in board reports, product governance packs, value assessments, and Consumer Duty files. It also makes it easy to show regulators that testing has been done systematically - and that the firm has evidence, not just opinion, to support its assessment of consumer understanding.

USERS

  • CUE is built to support both. While many firms use CUE independently to test their own documents, industry associations use it to tackle shared challenges - creating standards, improving consumer understanding, and supporting collective compliance. It’s a scalable framework that works just as well for a single product as it does for an entire sector.

  •  Industry bodies such as UKSPA use CUE as a shared framework to raise standards and promote consistency across the market.

    In practice, this includes using CUE to help identify the foundational concepts that retail investors need to understand for complex products, and to develop and test best-practice retail language that explains those concepts clearly. By doing this at an industry level, firms can align on what “good” looks like, rather than each firm reinventing the same analysis in isolation.

    UKSPA has also used CUE to test comprehension of complex product features across the market, and to consolidate glossary definitions using CUE’s readability and rewrite engine. These shared exercises help improve clarity, support Consumer Duty expectations, and reduce duplication of effort for individual firms.

  • The biggest advantage is scale. Associations can commission testing or rewrites across multiple firms - driving consistency, reducing cost, and creating outputs that benefit the whole market. It also makes it easier to respond to regulatory pressure with a unified approach, especially in areas like Defined Terms, target market alignment, or disclosure clarity.

  • CUE is product-agnostic. While UKSPA has led the way in structured products, the same approach applies to any financial sector - from insurance and pensions to investment platforms and savings apps. Any industry group facing challenges around comprehension, suitability, or Consumer Duty can use CUE to develop scalable, regulator-ready solutions for its members.

COMPETITORS

  • CUE is designed to evidence customer understanding in a way that is systematic, scalable, and aligned with Consumer Duty expectations.

    Rather than relying on one-off surveys, high-level reviews, or generic AI tools, CUE combines three elements into a single framework. First, it helps identify the foundational concepts that retail customers need to understand for a product not to mislead them. Second, it applies a structured readability and plain English assessment to test whether the language used is appropriate for the intended target market. Third, it tests comprehension using consistent Virtual Customer Personas representing Basic, Informed, and Sophisticated investors.

    This means CUE does not simply check whether information is present, or ask customers whether they feel they understand something. It assesses whether the wording used would be likely to be understood by the intended audience, and it explains where and why that understanding may break down.

    Where issues are identified, CUE supports firms in improving their communications through targeted rewrites and retesting, creating a clear audit trail that shows how customer understanding has been considered and strengthened.

    The result is a repeatable, evidence-based process that complements legal review and real-world testing, while avoiding the cost, delay, and inconsistency of relying on those approaches alone.

  • They’re a useful starting point - but not the full picture. Models like Flesch-Kincaid and Gunning Fog were never designed for financial literature. They assess sentence length and syllables - not abstraction, ambiguity, or conditional complexity.

    CUE builds on those foundations with domain-specific scoring. Our Plain English Score captures regulatory red flags (e.g., overengineered noun phrases, vague conditionals). Our Psycholinguistic Load Score models how different brains process the text - scoring abstraction, imageability, and word familiarity. That layered view gives you a far more accurate sense of whether a real-world investor would actually grasp what you’ve written.

  • Best practice is a great start - and many firms already apply plain English principles. But how do you prove it worked? How do you know that simplification helped a Basic investor actually understand the document?

    That’s the gap CUE fills. We don’t just help you simplify - we measure the result. Our rewrite engine is backed by persona testing, so every change is tested for real-world comprehension using FCA-aligned investor profiles. That turns best effort into evidence - exactly what the FCA wants to see.

DEFINITIONS

  • Clause Count looks at how many separate ideas are packed into a single sentence. Each new clause adds processing effort — especially when commas, dashes, or “and/which/that” chains keep extending a thought. Fewer clauses mean clearer meaning.
    Example:
    Crowded: “The Fund, which invests mainly in UK equities and may hold cash at times of market stress, aims to deliver long-term growth.”
    Clear: “The Fund invests mainly in UK shares. It may hold cash when markets are volatile. Its goal is long-term growth.”
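    A toy version of this check - purely illustrative, and much cruder than CUE’s actual rule set - counts likely clause boundaries such as commas, semicolons, and common linking words:

```python
import re

# Markers that usually signal an additional clause or idea.
CLAUSE_MARKERS = re.compile(
    r",|;|\b(?:which|that|and|unless|if|while)\b", re.IGNORECASE
)

def clause_count(sentence: str) -> int:
    # One base clause, plus one per boundary marker found.
    return 1 + len(CLAUSE_MARKERS.findall(sentence))

crowded = ("The Fund, which invests mainly in UK equities and may hold "
           "cash at times of market stress, aims to deliver long-term growth.")
clear = "The Fund invests mainly in UK shares."
print(clause_count(crowded))  # several packed ideas
print(clause_count(clear))    # a single idea
```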

  • Nesting means placing one idea inside another, often using brackets, embedded clauses, or multiple conditions. It forces readers to “hold” unfinished thoughts in their memory.
    Example:
    Nested: “If, after reviewing market conditions, the Bank decides to increase the savings rate (unless inflation remains below target), the new rate will apply from 1 November.”
    Plain: “The Bank will increase the savings rate on 1 November if market conditions justify it. This will not happen if inflation stays below target.”

  • Clutter is unnecessary language — filler words, duplications, or formal phrases that add weight but no meaning. Removing clutter improves focus and flow.
    Example:
    Cluttered: “In the event that the counterparty bank is unable to meet its obligations under the terms of this agreement…”
    Clear: “If the counterparty bank cannot meet its obligations…”

  • Subject Integrity checks that each paragraph introduces its main idea clearly, with a full subject and verb in the first sentence. Fragments or dangling phrases can leave readers unsure what’s being discussed.
    Example:
    Poor: “Focused on sustainable growth across global markets. Designed to perform in all conditions.”
    Clear: “The Fund focuses on sustainable growth across global markets. It’s designed to perform in all conditions.”

  • Referential Cohesion ensures that words like “it,” “this,” or “they” clearly refer to the right thing. Ambiguous references are a major cause of misunderstanding.
    Example (fund factsheet):
    Unclear: “It may outperform over time if conditions improve.” (What is “it”?)
    Clear: “The Fund may outperform over time if market conditions improve.”

  • Reversal and Negation test whether negative or double-negative phrasing could flip the meaning. Phrases like “not unlikely” or “unless not triggered” can confuse even experienced readers.
    Example:
    Confusing: “Interest will not be reduced unless the borrower meets all payment dates.”
    Clear: “We’ll only reduce the interest rate if all payments are made on time.”

  • Abstraction measures how conceptual or concrete a word is. Highly abstract language (e.g. “exposure”, “volatility”, “performance”) demands more mental effort than concrete terms like “price” or “payment”. Research in cognitive linguistics shows that abstract words activate less sensory imagery and rely more on prior knowledge — increasing comprehension load for Basic readers. CUE benchmarks abstraction using the Brysbaert et al. concreteness database and related psycholinguistic norms, allowing us to quantify how abstract or concrete the vocabulary across a document is.
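    As a sketch of how concreteness benchmarking works - the ratings below are invented for illustration; the real Brysbaert et al. database rates roughly 40,000 English words on a 1–5 scale - a document’s vocabulary can be averaged against a norms table:

```python
# Hypothetical concreteness ratings on a Brysbaert-style 1 (abstract)
# to 5 (concrete) scale. Illustrative values only, not real norms.
CONCRETENESS = {
    "price": 4.5, "payment": 4.0, "money": 4.8,
    "exposure": 2.1, "volatility": 1.9, "performance": 2.3,
}

def mean_concreteness(words):
    # Average the rating of every word found in the norms table.
    rated = [CONCRETENESS[w] for w in words if w in CONCRETENESS]
    return sum(rated) / len(rated) if rated else None

concrete_terms = ["price", "payment", "money"]
abstract_terms = ["exposure", "volatility", "performance"]
print(mean_concreteness(concrete_terms))  # high: concrete vocabulary
print(mean_concreteness(abstract_terms))  # low: abstract vocabulary
```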

  • Imageability assesses how easily a reader can form a mental picture from the language. Sentences filled with verbs and nouns that evoke sensory imagery (“the value drops below a set line”) are easier to process than conceptual phrases (“a downward market adjustment”). CUE draws on the Lancaster Sensorimotor Norms and the MRC Psycholinguistic Database to assess how strongly the language in a document supports mental imagery, based on the imageability of the words it uses. High imageability supports comprehension and recall — especially for investors with lower working-memory capacity.

  • Word Familiarity captures how commonly a word appears in everyday English. Frequent, high-exposure words are processed faster and with less effort. Technical or rarely used terms can block fluency even when they’re short. CUE uses frequency norms from the SUBTLEX-UK corpus and Brysbaert et al. familiarity ratings to quantify this. The resulting score highlights where specialist or uncommon vocabulary may require explanation, definition, or substitution.
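    In code, flagging unfamiliar vocabulary can look like the sketch below. The Zipf values are invented for illustration; real scoring would look them up in SUBTLEX-UK:

```python
# Hypothetical Zipf-scale frequencies (roughly 1 = very rare,
# 7 = extremely common). Illustrative values, not SUBTLEX-UK data.
ZIPF = {
    "money": 5.9, "price": 5.4, "pay": 5.6,
    "derivative": 2.8, "collateralised": 1.5, "tranche": 1.9,
}

def flag_unfamiliar(words, threshold=3.0):
    # Words below the threshold (or missing from the norms entirely)
    # are candidates for definition or substitution.
    return [w for w in words if ZIPF.get(w, 0.0) < threshold]

print(flag_unfamiliar(["money", "tranche", "price", "collateralised"]))
# -> ['tranche', 'collateralised']
```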

  • Reading Age estimates the typical age at which a word is first learned and comfortably understood. It gives a simple, real-world benchmark for accessibility that complements CUE’s deeper linguistic analysis. Drawing on Age-of-Acquisition (AoA) datasets from Kuperman et al. and Brysbaert & Cortese, CUE assesses whether the language in a document is pitched above or below the expected reading level for its intended investor segment. This helps firms demonstrate that their materials are appropriate for the intended reader - a practical link between psycholinguistics and consumer understanding.

SECURITY

  • Yes – and importantly, CUE never asks for personal or customer-level data. We only review client-facing materials such as product brochures, marketing templates, and factsheets – the same documents your firm already distributes to investors.

    Documents are stored securely for the duration of the engagement so we can prepare your analysis and audit trail, but we never share them with any third party or reuse them beyond the scope of your report. Where we use AI models for analysis, we’ve also formally opted out of AI model training – meaning nothing you send us is ever used to improve the underlying system.

  • CUE uses an OpenAI GPT environment with model training disabled and no persistent retention of your data by OpenAI. The documents are handled securely within CUE and never made available to third parties.

    We run all GPT-based processes internally on our side. You simply send us your approved literature (brochures, factsheets, etc.), and we return a structured report showing readability scores, comprehension gaps, and recommended rewrites.

  • Where such policies exist, they usually apply to employees using systems like ChatGPT directly inside the organisation. They are driven by concerns that sensitive data, client information, or regulated content could be shared with an external AI model, potentially breaching internal data policies, GDPR obligations, or confidentiality agreements.

    CUE doesn’t work that way.

    Your data never passes through your systems to OpenAI, only through ours. And what we review is public-facing templated literature intended for distribution to end clients, meaning it contains no personal or regulated information. That significantly reduces the risk profile and keeps your internal systems entirely outside the process.

MULTI JURISDICTIONS

  • Yes - CUE was designed with international scalability in mind. While the UK regulatory framework shaped its foundations, CUE’s methodology is modular and adaptable. Its core pillars - multi-language readability scoring, psycholinguistic analysis, and simulated comprehension testing via AI personas - can be applied to any market, provided local inputs (e.g. language rules or regulatory expectations) are properly integrated.

  • Not exclusively. While UK standards (like Consumer Duty and FCA Handbook references) were used to train and benchmark the system, CUE’s architecture is standards-agnostic. Its readability metrics (e.g., Flesch-Kincaid, Gunning Fog, PLS) are based on globally recognised linguistic rules. These can be swapped for local equivalents - such as GULPEASE in Italy, LIX in Sweden, or Wiener Sachtextformel in Germany - to meet jurisdiction-specific standards without changing the core engine.

  • CUE can process and rewrite materials in multiple languages, provided two conditions are met:

    1. Local readability scoring formulas are implemented in Python (e.g., GULPEASE, Flesch-Douma, CLIB).

    2. A psycholinguistic dictionary exists (or can be sourced) for that language to support imageability and abstraction scoring.

    For example, German readability can use WSF, and PLS-style scores can be derived from Stuttgart’s psycholinguistic datasets. CUE’s AI rewrite engine also supports localised plain-language rules for each jurisdiction.
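    As an example of what implementing a local readability formula involves, the published GULPEASE index for Italian takes only a few lines of Python (a sketch of the formula itself, not CUE’s production module):

```python
import re

def gulpease(text: str) -> float:
    # Published GULPEASE index for Italian:
    # 89 + (300 * sentences - 10 * letters) / words.
    # Higher scores mean easier text (roughly a 0-100 scale).
    words = re.findall(r"[A-Za-zÀ-ÿ]+", text)
    letters = sum(len(w) for w in words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return 89 + (300 * sentences - 10 * letters) / len(words)

print(gulpease("Il fondo investe in azioni. Il rischio è alto."))
```

    Short sentences built from short words push the score up; long, Latinate phrasing pushes it down - the same intuition as Flesch-Kincaid, tuned for Italian.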

  • Yes - CUE doesn’t force UK-style “plain English” onto other languages. Instead, it maps its rewrite logic to local equivalents - such as Klartext in Germany, Direct Duidelijk in the Netherlands, or Linguaggio Chiaro in Italy. The goal isn’t to import British tone - it’s to make complex material more accessible within the cultural and regulatory expectations of that country.

  • There are four components needed:

    1. Language-specific readability logic (e.g., syllable counting, sentence segmentation, scoring formula).

    2. Localised psycholinguistic norms (for abstraction, imageability, and familiarity).

    3. Rewrite rules tuned for local tone and formatting norms.

    4. Persona calibration for that market’s investor profiles.

    Once these are in place, CUE can simulate comprehension for Basic, Informed, and Sophisticated investors in the local language - without needing live consumer panels.

  • CUE allows for both options. Its AI Personas are calibrated to reflect real-world investor traits and have been validated against behavioural data. But hybrid deployments are possible - firms can layer in real participant testing if required by regulators, using local recruitment partners like Toluna, YouGov, or GfK.

  • CUE maps its outputs (e.g. readability scores, rewrites, comprehension gaps) to local regulatory expectations, using country-specific rules on fairness, transparency, and product disclosure. Where required, outputs can be reviewed by legal counsel, adapted to specific regimes (e.g., AMF guidelines in France, CONSOB in Italy), and exported in bilingual formats for internal sign-off.

  • Yes - the CUE Calibration Matrix allows for tailoring persona traits to reflect local investor behaviours, such as preference for narrative structures (Italy), formality (France), or directness (Netherlands). Personas can be tuned to reflect jurisdiction-specific risk sensitivity, document tolerance, and trust in provider tone.

FCA ALIGNMENT

  • Yes. The FCA’s Consumer Duty makes customer understanding a formal regulatory obligation. Firms must demonstrate that their communications enable retail clients to understand and act on information - especially for complex or risk-bearing products. The Duty doesn’t just encourage plain English; it requires evidence that customers actually understand. CUE exists to provide that evidence. It translates regulatory expectations into practical, testable solutions - so firms can validate that their target market really gets it.

  • The FCA is clear: firms must go beyond just “clear writing.” Communications must be designed and tested for real comprehension - especially when the audience includes retail or vulnerable clients. CUE provides the structure to do that: we assess readability using five independent tests, apply plain English and psycholinguistic scoring, and simulate actual investor comprehension using realistic personas. This is what the FCA calls “robust evidence of customer understanding” (FG22/5 §8.55). We don’t just make writing clearer - we prove it works.

  • CUE is directly aligned with a range of named FCA frameworks, including:

    ·       FG22/5 Consumer Duty Guidance, especially Section 8 on testing for customer understanding.

    ·       PRIN 2A: Covers fair value, accessibility, and customer understanding.

    ·       PROD 4.2.33: Requires manufacturers to identify and mitigate risks for target markets, including those with vulnerabilities.

    ·       FS16/10: Encourages industry-wide standardisation of defined terms.

    We also incorporate published guidance and research from bodies such as Fairer Finance, the Behavioural Insights Team, and the Plain English Campaign.

  • The FCA repeatedly stresses the need to tailor communication to the actual capabilities of the target audience - especially where vulnerability may be present (FG22/5 §§4.14, 8.31–8.35). CUE addresses this in multiple ways:

    ·       Our readability engine flags content likely to cause difficulty for low-literacy or neurodiverse audiences.

    ·       Our accessibility module ensures documents are suitable for alternate formats (Braille, audio, large print).

    ·       Our AI personas reflect varied vulnerability traits - like document fatigue, abstraction limits, or misinterpretation of tone - which often trigger complaints.
    This ensures our testing reflects real diversity in investor capability, not just theoretical targets.

  • The FCA is explicit: jargon must be avoided where possible, and where unavoidable, clearly explained (FG22/5 §8.13). CUE’s glossary module benchmarks defined terms across the industry and flags:

    ·       Terms that are overly complex or inconsistently defined.

    ·       Definitions that don’t align with FCA expectations of plain language.

    ·       Opportunities to consolidate or simplify terms across products (as done by UKSPA).
    We go beyond spotting jargon - we test if people actually understand it.

  • Actual testing is required. FG22/5 §8.5 and §8.55 both state that testing with representative users is essential - and that findings should be used to improve future communications. CUE enables this with:

    ·       Pre-release readability testing.

    ·       Comprehension testing using AI personas based on FCA consumer profiles.

    ·       Optional live surveys and interviews to validate results in the real world.
    This isn’t a “nice to have” - it’s part of the FCA’s definition of compliance under Consumer Duty.

  • Inclusive design means communications must work for as many people as possible - not just the average. CUE reflects this principle in three key ways:

    ·       By simulating investor personas who process information differently (e.g. someone who skims vs. someone who struggles with abstraction).

    ·       By identifying layout and visual issues that could undermine comprehension (e.g. font size, misplaced disclaimers).

    ·       By offering alternate formats under our accessibility pillar, which aligns directly with Equality Act 2010 and PRIN 2A.6 requirements.

  • Yes. The FCA wants firms to embed processes of continuous improvement (FG22/5 §8.55). CUE supports this in two ways:

    ·       Every review is logged in a formal CUE audit trail, showing what was tested, what was changed, why it was changed, and what impact it had.

    ·       Firms can rerun reviews as product documents evolve - with updated scoring and persona tests to track progress.
    This transforms compliance into a trackable improvement process - not a one-off box-tick.