Case Study (in progress)

Clarifying System Errors on Member Home

Context

Company, Team

USAA, Bank Omnichannel Design Team

Industry

Enterprise Financial Services

Year

2023

Role

UX Strategy, Research, Behavioral Analysis

Tools

Mural, Figma, UserZoom, Box, Wiki, Zoom, Slack

The Problem Wasn't What We Assumed It Was

Every month, roughly 1.2 million USAA members encountered a vague system error on the Member Home screen. When backend issues caused account balances or tools to be unavailable, members saw a single line of text: "Due to a technical issue, some of your information may be missing or rearranged."

The Product team had already drawn a conclusion from that message. The word "missing" was believed to be causing members to worry their account balances were gone. Three candidate messages had already been written to address that, all of them built around reassuring members their balances were safe:

Option 1: "Some of your display preferences may be unavailable right now. Your account balances are not affected. Please try again later."

Option 2: "Some of your information may be slow to load. Your account balances aren't affected. Please try again later."

Option 3: "We're unable to show some of your information right now. This doesn't affect your account balances. Please try again later."

The request was to test which version performed best. Before testing anything, I needed to find out if the assumption driving all three was actually correct.

Discovery

What the Data Actually Said

My first step was to analyze mobile MSAT feedback: member satisfaction verbatims from January through July. I wasn't looking for confirmation. I was looking for what members were actually saying.

Out of everything I analyzed, only three members expressed concern about their balances being affected. One had actually moved money because they thought their balance was wrong. The vast majority were simply annoyed and blocked. Some refreshed but were frustrated they had to. Others stopped at the message entirely, closed the app, and walked away.

The balance reassurance in the proposed new messages wasn't wrong exactly. It addressed a real concern for a small subset of members. But it was being treated as the primary problem when it was actually a secondary one. The larger problem, friction and lack of guidance during disruption, wasn't being solved at all.

That distinction changed how we framed the problem.

Discovery

Finding the Root Cause

With the verbatim analysis done, I built a hybrid journey and empathy map, combining what members were doing, thinking, and feeling from the data I had. I didn't have complete information; the original ask had been to go test, not to research first. So I quickly mapped what I knew before our scheduled kickoff meeting and flagged what I didn't, surfacing the questions the data couldn't answer on its own.

One of those questions was about the error itself. My instinct was to find the root cause so that I could advocate for solving that instead. I started scouring internal wikis looking for what was actually causing the system errors and whether that was something we could address. What I found was that another team was already working on it. The root cause was not ours to solve right now.

If we couldn't fix the underlying problem, what could we actually do for members in the moment? The message was the answer. And if the message was going to do real work, it needed to tell members something useful, not just reassure them.

I arranged a meeting between that team's Product owner, my Product partner, and our Design Lead.

What she told us was a gold nugget of information. The error message was intentionally generic because the system error could be caused by a variety of issues. But for most members, refreshing the app would resolve it. That was true, well understood internally, and nowhere in the original message or in any of the three proposed replacements.

That gap was the real problem. Not the word missing. Not member anxiety about balances. Members weren't being told that something as simple as a refresh could fix what they were seeing. Some figured it out on their own and were annoyed they had to. Others gave up entirely.

Discovery

Testing the Support Path Myself

I wanted to understand what happened if a member didn't figure it out on their own. So I went further: I turned off my Wi-Fi, triggered the error myself, and initiated a chat with a member service rep to see what guidance a real member would receive.

The rep was helpful and professional. She let me know USAA was aware of the issue and working on a fix. She provided reassurance. What she did not do was tell me to refresh the app.

That confirmed the gap ran all the way through the support system. The error message didn't tell members to refresh. The three proposed replacement messages didn't tell members to refresh. And if a member called or chatted for help, the rep didn't tell them to refresh either. A member hitting this error had no path to self-resolution anywhere. The only people resolving it were the ones who already knew to try refreshing from prior experience.

That is what the message needed to fix.

Reframing

Getting the Problem Statement Right Before Anything Else

With discovery done, I had a complete enough picture to bring my Product partner along. Rather than jumping to message testing, I led a set of human-centered design exercises to walk her through what the research was actually showing. Together we co-developed a problem statement and a change statement that would serve as the evaluation criteria for everything that followed.


Problem statement: ~1.2M members per month receive a vague page-level system error on Member Home, causing frustration and concern when account information, tools, and functionality are unavailable or incorrect, resulting in member complaints.


Change statement: We'd like to change members' sentiment and actions when encountering unavailable information or tools in order to increase confidence in USAA and fulfill our mission.


The change statement was the critical addition. It made refresh behavior the measurable proxy for success, not sentiment ratings alone.

I then led an evaluation of the three message options originally brought to Design, checking how each aligned with our problem and change statements. It was clear we needed to make updates. Partnering with the Content team, we revised the messages.

With the new messages in hand, we were ready to test.

Research

What We Tested and Why

Working with the Design Research team, I developed a research plan to test three message variants across 150 participants, 50 per message, via unmoderated UserZoom testing on mobile. All three tested messages retained the balance reassurance from the original product candidates because our research had confirmed that addressed a real, if minority, concern. What changed was the addition of refresh guidance. The three messages were now designed to isolate two variables: the presence of a Refresh CTA, and the presence of explicit refresh instructions in the copy itself.

Message A: Instructions to refresh, no CTA button

Message B: Instructions to refresh + Refresh CTA (the hypothesis was that both together would drive behavior)

Message C: No instructions, Refresh CTA only

Two scenarios were tested per message: one requiring members to find their credit card balance, one requiring them to move money between accounts. The money transfer scenario was particularly important because it simulated the highest-stakes context where acting on incomplete information could create real problems for members.

I created the prototypes and we were ready to test in UserZoom.

The Analysis Problem

What the Dashboard Didn't Show

When the usability test data came in, the initial read of the aggregate quantitative output suggested the three messages performed similarly. At a glance, the refresh rates looked close enough to question whether any message was clearly better.

I wasn't satisfied with that conclusion.

I went into the UserZoom click path data, the part of the platform that shows the actual sequence of screens each participant reached, and manually mapped every participant's path through every condition. It was painstaking work. But it revealed something the aggregate numbers had completely obscured.

What the Click Paths Revealed

The Finding That Changed the Recommendation

The aggregate refresh rates were hiding behavioral sequencing data that was decisive.

Scenario 2, transfer money:

Message A: 48% refreshed, 36% refreshed first

Message B: 79% refreshed, 66% refreshed first

Message C: 74% refreshed, 40% refreshed first

Messages B and C had similar overall refresh rates. But only Message B caused the majority of members to refresh before attempting any other action. Message C participants refreshed at nearly the same rate, but they did it after tapping Transfer first. That meant they were already operating on incomplete information when they attempted a financial transaction.

In a banking app, that ordering gap is the difference between a contained error state and a member who transfers money on a broken screen, sees a wrong confirmation, and calls the contact center.

This wasn't visible in the summary dashboard. It required tracing individual paths.
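The distinction between "refreshed at some point" and "refreshed before doing anything else" can be made concrete with a small sketch. This is a hypothetical illustration of the sequencing analysis done by hand, not UserZoom's actual export format; the path event names and helper function below are invented for the example.

```python
# Hypothetical sketch of the click-path sequencing analysis.
# Each participant's path is an ordered list of screen events;
# event names ("refresh", "transfer", etc.) are illustrative,
# not real UserZoom export fields.

def refresh_metrics(paths):
    """Return (overall refresh rate, refresh-first rate) for one condition."""
    n = len(paths)
    refreshed = sum(1 for p in paths if "refresh" in p)
    # "Refresh first" = refresh happens before any other action is attempted.
    refreshed_first = sum(1 for p in paths if p and p[0] == "refresh")
    return refreshed / n, refreshed_first / n

# Toy data for a Message C-style condition: most participants refresh,
# but only after tapping Transfer on the broken screen.
message_c_paths = [
    ["transfer", "refresh"],
    ["transfer", "refresh"],
    ["refresh"],
    ["close_app"],
]

overall, first = refresh_metrics(message_c_paths)
print(f"refreshed: {overall:.0%}, refreshed first: {first:.0%}")
# prints: refreshed: 75%, refreshed first: 25%
```

An aggregate dashboard reports only the first number; the second one, which required tracing each participant's ordered path, is what separated Message B from Message C.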

Message B also led across every other dimension:

85% refresh rate in Scenario 1 versus 64% for A and 75% for C.

82% participant confidence versus 57% and 63%.

81% ease rating versus 59% and 65%.

The recommendation was clear, and the evidence behind it was undeniable.

Outcome

Message B was selected for implementation. It retained the balance reassurance from the original candidate messages, which our research confirmed addressed a real, if minority, concern, and added explicit refresh guidance and a Refresh CTA to serve the majority who simply needed to know what to do next.

That combination made sense given the underlying system reality. 80% of errors were caused by low internet connectivity where a refresh could resolve the issue. The remaining 20% had various unknown causes, but refreshing could still help members in those moments too. "Please refresh or try again later" covered both scenarios. The Refresh CTA made the guidance impossible to miss.

The research was presented to Product leadership including a Product executive, giving organizational visibility to a finding that had originally been scoped as a simple copy test.

The USAA app has since moved toward preventing sign-on entirely when there is no internet connectivity, a more complete resolution of the most common cause of the error.

What I Learned

The most important design decision on this project happened before any design work started. I chose to analyze existing member feedback before accepting the problem statement I was handed. The assumption that members were worried about their balances was reasonable. It was already shaping the solution. It was also incomplete.

Going deeper into the data is honestly the part of this work I find most exciting. Coming back with something surprising, something that gets to the real source of member frustration and reframes the whole conversation, that's where I feel most useful as a designer. In this case that meant connecting member guidance during a system disruption to real business outcomes: fewer calls to the contact center, higher task completion, and members who could help themselves without needing anyone's assistance.