Customer Experience / AI Copilot / Wealth Ops

AI Customer Service Copilot for Wealth Operations

I designed a secure, context-aware support assistant that empowers banking agents to resolve complex financial queries faster with verifiable, policy-backed answers.

-40% Handling Time
85% First Contact Resolution
99% Policy Compliance
+32% Agent Satisfaction

Role: Lead Product Designer
Timeline: 5 Months
Focus Area: Service Design, Conversational UX, Prototyping

Service design

Copilot anatomy: how assistance stays grounded

Instead of a single long scroll of problems and solutions, this case study opens with the four beats agents felt in production: listen, retrieve, compose, and prove. Each beat maps to UI affordances we validated in the wealth desk pilot.

Design principle

Every suggestion ships with a citation trail—agents never guess which policy version applies.

  • Confidence surfaced as tier, not a vague percentage.
  • Handoff packets preserve customer intent verbatim.
  1. Listen

    Live transcript + CRM context fused so the copilot knows product mix and risk tier.

  2. Retrieve

    Policy graph search returns clauses, not PDF pages—reducing hunt time dramatically.

  3. Compose

    Drafts mirror brand voice; agents edit, not rewrite from scratch.

  4. Prove

    Audit cards bundle sources, model version, and human approval for compliance review.
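The four beats above can be sketched as a single assistance pass. This is a minimal illustration only: every function, topic, and field name below is a hypothetical stand-in, not the production system's API.

```python
# Minimal sketch of the four assistance beats: listen, retrieve, compose, prove.
# The policy store, function names, and fields are illustrative assumptions.

POLICY_GRAPH = {
    "isa allowance": "Clause 4.2: Annual ISA allowance is set per tax year.",
    "fund fees": "Clause 7.1: Ongoing charges are disclosed in the KIID.",
}

def assist_turn(utterance: str, crm_context: dict, model_version: str = "v1") -> dict:
    # 1. Listen: fuse the live utterance with CRM context (product mix, risk tier).
    query = utterance.lower()

    # 2. Retrieve: policy-graph lookup returns specific clauses, not whole PDFs.
    clauses = [text for topic, text in POLICY_GRAPH.items() if topic in query]

    # 3. Compose: draft a reply grounded only in the retrieved clauses.
    draft = " ".join(clauses) or "Escalate: no grounded answer available."

    # 4. Prove: bundle sources and model version into an audit card.
    return {
        "draft": draft,
        "audit_card": {
            "sources": clauses,
            "model_version": model_version,
            "risk_tier": crm_context.get("risk_tier"),
            "approved_by": None,  # set when a human agent sends the reply
        },
    }
```

Note how the compose step falls back to escalation when retrieval finds nothing: the copilot never drafts an ungrounded answer.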

The Problem

Fragmented Knowledge and Frustrated Customers

When I joined the wealth operations team, customer support was struggling under the weight of complex financial queries. Agents had to navigate a fragmented landscape of legacy knowledge bases, PDF policy documents, and disjointed CRM tools just to answer questions about tax wrappers, portfolio rebalancing, or fund fees.

This operational friction led to excessively long hold times and inconsistent answers. On the customer side, the existing self-serve chatbot was rigid and frustrating, often looping users through generic responses before unceremoniously dumping them into an agent queue with zero context. We needed a system that empowered our human experts rather than replacing them with a flawed bot.

The Support Bottleneck

  • 45% of agent time spent searching policies
  • 52% first contact resolution rate
  • 80% of handoffs missing context

Strategic Vision: Empowering the Human Agent

The Knowledge Gap

Agents are forced to act as search engines across disparate systems, degrading the customer experience. They need immediate, contextual access to policy data without breaking the conversation flow.

The Strategy

Context-Aware Assistance. I proposed an AI copilot that listens to the live chat, retrieves relevant policies, and drafts suggested responses, allowing agents to focus entirely on empathy and resolution.

AI Suggestions · Seamless Handoffs · Policy Grounding
[SCREEN SLOT 02: Fragmented support journey]

The Goals

Designing a Trustworthy Assistant Experience

My objective was to design an AI copilot that acted as a highly knowledgeable whisperer to our human agents, while also handling safe, bounded self-serve queries directly with customers. The core challenge was establishing trust: ensuring every AI-generated answer was grounded in verifiable policy documents and clearly indicating when human escalation was required.

Heuristic Evaluation Findings

Severity 1
Flexibility and Efficiency of Use

Agents had to manually copy and paste customer details into separate search tools to find relevant policies.

Severity 1
Error Prevention

No safeguards existed to prevent agents from providing outdated or incorrect financial advice.

Severity 2
Help and Documentation

The knowledge base was disconnected from the chat interface, making it difficult to reference during live conversations.

[SCREEN SLOT 03: Service agent and customer needs]

Research & Validation

Mapping the Escalation Journey

Research Scale

15 Agent Shadowing Sessions
6 Prototype Iterations
4 Compliance Reviews

Discovery Phases

  1. Contextual Inquiry

    I shadowed support agents during high-volume shifts, mapping the exact moments where conversations stalled and friction occurred.

  2. Boundary Definition

    Working closely with legal and compliance teams, I defined strict safe-response boundaries for autonomous AI answers.

  3. Concept Testing

    I tested UI concepts with senior agents, focusing heavily on how to display AI confidence scores and source citations without cluttering the workspace.

I mapped out the entire agent journey, from the moment a customer query arrives to the final resolution. It became clear that agents were suffering from cognitive overload. They had to maintain the context of the customer conversation while simultaneously executing complex search queries across multiple internal systems.

This analysis revealed that a successful AI copilot needed to be proactive rather than reactive. Instead of waiting for the agent to ask a question, the copilot needed to listen to the live conversation, anticipate the required information, and surface it seamlessly within the agent's primary workspace.

Core UX Actions

  • Shadowed support agents to identify friction points in their daily workflows.
  • Defined strict safe response boundaries with legal and compliance stakeholders.
  • Tested UI concepts for showing AI confidence and source citations with the ops team.

Key Research Insight

"I don't trust the AI to talk directly to my clients about their wealth. But if it can find the exact policy clause I need while I'm on the chat, it would save me hours every day."

- Senior Support Agent, Wealth Operations

This insight validated our approach: the AI should augment the human agent, not replace them. Transparency and source verification were non-negotiable requirements for adoption.

AI Techniques & Final Design

A Context-Aware Support Workspace

Retrieval

RAG Architecture

I designed the interface to leverage Retrieval-Augmented Generation, grounding every AI response in verified internal policy documents rather than generic knowledge.

Generation

Contextual Suggestions

I implemented a real time drafting system that analyzed the live chat feed and proactively suggested complete, accurate responses for the agent to review.

Validation

Confidence Scoring

I created a visual hierarchy for confidence indicators, ensuring agents could instantly gauge the reliability of a suggestion and access the source citation.

I designed a unified conversation workspace that brought the customer's portfolio data, the live chat feed, and the AI copilot into a single, cohesive view. This eliminated the need for agents to constantly switch tabs and lose context during critical support moments.

[SCREEN SLOT 04: Suggested reply panel]
AI Assistance

Suggested Reply & Confidence Panel

Instead of forcing agents to type out complex policy answers, the copilot drafted suggested replies in real time. Crucially, I designed a confidence indicator next to each suggestion. If the AI was highly confident, the agent could send the reply with one click. If confidence was low, the UI prompted the agent to review and edit.
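The score-to-behavior mapping behind the panel could be sketched like this; the thresholds and tier names are assumptions for illustration, not the shipped values:

```python
# Map a raw model confidence score to a UI tier and the send behavior it
# unlocks. Threshold values and tier names are illustrative assumptions.

def confidence_tier(score: float) -> dict:
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be in [0, 1]")
    if score >= 0.85:
        # High confidence: the agent may send the drafted reply with one click.
        return {"tier": "high", "action": "one_click_send"}
    if score >= 0.60:
        # Medium confidence: the UI prompts the agent to review before sending.
        return {"tier": "medium", "action": "review_then_send"}
    # Low confidence: the draft must be edited before it can be sent.
    return {"tier": "low", "action": "edit_required"}
```

Surfacing the tier rather than the raw number keeps the decision binary for the agent: send now, or review first.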

Verification

Verifiable Source Citations

To build trust in the AI's outputs, every drafted response included a direct link to the internal policy document it referenced. Agents could hover over the citation to see a snippet of the source text, ensuring they never sent unverified information to a customer.

[SCREEN SLOT 06: Confidence and source panel]
[SCREEN SLOT 05: Product explainer card]
Modular Content

Product Explainer Cards

Financial products are notoriously difficult to explain via text chat. I designed modular Explainer Cards that the copilot could surface for the agent. These cards provided bite-sized, visual summaries of complex topics like ISA allowances or fund fee structures, which the agent could seamlessly drop into the customer chat.

Workflow

Seamless Escalation Logic

When a customer interacted with the self-serve mode and hit a policy boundary, the transition to a human agent was designed to be frictionless. The UI passed the entire conversation history, the customer's intent, and the AI's attempted resolution directly into the agent's workspace, eliminating the dreaded need for customers to repeat themselves.
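A handoff packet that preserves the customer's intent verbatim could be shaped roughly like this; the field names are hypothetical, not the production schema:

```python
from dataclasses import dataclass

@dataclass
class HandoffPacket:
    """Everything the receiving agent needs, so the customer never repeats
    themselves. All field names are illustrative assumptions."""
    customer_id: str
    intent_verbatim: str              # the customer's own words, never paraphrased
    conversation_history: list[str]   # full self-serve transcript
    ai_attempted_resolution: str      # what the copilot already tried
    policy_boundary_hit: str          # the rule that triggered escalation

def build_handoff(customer_id: str, transcript: list[str],
                  attempt: str, boundary: str) -> HandoffPacket:
    # Treat the customer's opening message as the verbatim statement of intent.
    return HandoffPacket(
        customer_id=customer_id,
        intent_verbatim=transcript[0],
        conversation_history=list(transcript),
        ai_attempted_resolution=attempt,
        policy_boundary_hit=boundary,
    )
```

Keeping `intent_verbatim` separate from the full transcript lets the receiving agent orient in one glance without scrolling.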

[SCREEN SLOT 07: Escalation and handoff flow]
Engineering Handover

Bridging Design and Engineering

I collaborated closely with the engineering and data science teams to ensure the copilot features were technically feasible. We established clear API contracts to define how RAG responses, confidence scores, and source citations would be passed to the frontend components without latency issues.

API Contracts · Component Specs · Latency Optimization · Storybook Assets

Zero Implementation Delays
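The contract between the RAG backend and the frontend components could be sketched as a typed payload like the one below. The keys, types, and validation rule are assumptions for illustration, not the real contract.

```python
# Sketch of the copilot response contract passed to frontend components.
# All keys and types here are illustrative assumptions.
from typing import TypedDict

class Citation(TypedDict):
    doc_id: str    # internal policy document identifier
    clause: str    # exact clause reference shown next to the draft
    snippet: str   # short source excerpt for the hover preview

class CopilotResponse(TypedDict):
    draft_reply: str
    confidence: float          # 0.0-1.0, rendered as a tier in the UI
    citations: list[Citation]
    model_version: str         # recorded on the audit card

def validate_response(payload: dict) -> bool:
    """Cheap runtime check the frontend can apply before rendering."""
    required = {"draft_reply", "confidence", "citations", "model_version"}
    return required <= payload.keys() and 0.0 <= payload["confidence"] <= 1.0
```

Agreeing on a shape like this early is what makes "zero implementation delays" plausible: design, frontend, and data science all build against the same fields.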

Validation & Results

Empowered Agents, Satisfied Customers

-40% Average Handling Time
85% First Contact Resolution
99% Policy Compliance
+32% Agent Satisfaction

By treating the AI not as a replacement for human support, but as a highly capable assistant, we fundamentally transformed the wealth operations floor. Agents spent less time digging through documentation and more time providing empathetic, accurate guidance. The clear source citations and confidence indicators ensured that our brand trust was protected at every touchpoint.

[SCREEN SLOT 08: Results dashboard]

Achievements & Lessons Learned

Key Achievements

  • Successfully deployed across the entire wealth operations team, establishing a new paradigm for AI integration.
  • Created a reusable design system for AI assisted internal tools that was adopted by other departments.

Lessons Learned

  • Design for Trust: Showing confidence scores and source citations was critical to overcoming initial agent skepticism.
  • Collaborative AI: AI is most effective when it augments human expertise rather than attempting to replace it entirely.
"I no longer dread complex policy questions. The copilot finds the exact clause I need instantly, letting me focus on actually helping the customer."

- Senior Support Agent, Wealth Operations