Who is Asian? hero image
Case study

Who is Asian?

Uncovering bias in racial categorization through behavioral data.

Project Overview

Role

Lead Researcher

Timeline

2 years (academic project)

Methods

Closed card sorts (×3), quantitative analysis

Tools

R, Qualtrics

Problem

When products collapse "Asian" into a single checkbox, they assume a shared meaning, distorting user data and reinforcing East-Asian defaults.

Goal

Quantify perceptual bias and test whether granular sub-categories improve accuracy and inclusion in demographic design.

Impact

+65%

Accuracy improvement

0.5×

Misclassification rate (halved)

$280K

NSF grant informed by findings

UX Use Cases

  • UX/UI Design for Forms: Improves demographic forms by using granular categories that make users feel represented.
  • Marketing & Segmentation: Enables product and marketing teams to design region-specific personas rather than treating “APAC” as a single market.
  • Data Science & AI Bias: Supports fairer ML training data by preventing collapsed or inaccurate demographic labels.
  • DEI & HR Analytics: Provides frameworks for employee surveys that reveal nuanced representation patterns across groups.
  • Global Product Expansion: Demonstrates why localized demographic design is key to equitable, scalable growth.

Process

Setup

We designed three closed card-sorting tasks to assess how the granularity of labels affected the accurate categorization of photos of Asian individuals.

Synthesis

We compared outcomes across the three studies to isolate where the labeling taxonomy drove error and where added guidance improved inclusion.

Study 1 - Baseline

96 photos sorted into four racial categories to establish a perception baseline.

Study 1 baseline animation

Study 2 - Binary

Asian vs. non-Asian classification to test in-group boundaries and default associations.

Study 2 binary animation

Study 3 - Granular

East, South, Southeast, or Other Asian options increased accuracy and representation.

Study 3 granular animation
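At its core, the cross-study synthesis reduces to computing accuracy and misclassification rates per labeling condition. The analysis was done in R; the Python sketch below is illustrative only, with invented data and category names, not the actual study results.

```python
# Hypothetical sketch: scoring closed card-sort responses per labeling condition.
# All data below is invented for illustration; the real study used R on Qualtrics exports.

from collections import Counter

def accuracy(responses):
    """Fraction of (chosen, truth) pairs where the participant's sort matched ground truth."""
    correct = sum(1 for chosen, truth in responses if chosen == truth)
    return correct / len(responses)

# Simulated responses: (category chosen by participant, ground-truth category).
binary = [("Asian", "Asian"), ("Non-Asian", "Asian"),
          ("Asian", "Asian"), ("Non-Asian", "Asian")]      # out-group errors under a coarse label
granular = [("South Asian", "South Asian"), ("South Asian", "South Asian"),
            ("East Asian", "East Asian"), ("Southeast Asian", "South Asian")]

print(f"Binary accuracy:   {accuracy(binary):.2f}")    # 0.50
print(f"Granular accuracy: {accuracy(granular):.2f}")  # 0.75

# Misclassification breakdown: which wrong labels absorb the errors?
errors = Counter(chosen for chosen, truth in binary if chosen != truth)
print(errors)  # Counter({'Non-Asian': 2})
```

Comparing these per-condition rates is what lets the synthesis attribute error to the taxonomy itself rather than to individual participants.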

Research insights

Research insights visual
Colloquial interpretations of "Asian" distort the collection of demographic data.

Asian is not East Asian

Default mental models equated "Asian" with East Asian. Granular categories broadened inclusion.

Boxes erase identity

Broad labels hid South- and Southeast-Asian representation and reduced data fidelity.

Information adds depth

More specific options raised correct classifications and reduced misclassification.

Deliverables

Key graph from the study
Key result graph.

NSF Grant

The details of the award have been removed from the NSF website. Check out the coverage of the NSF grant on the University of Washington website.

Read the article

Theoretical Contribution

This project is grounded in social psychological research. Read more about the primary scholarly framework that underpins the study design.

Open the PDF

Reflection

This project underscored how taxonomy design shapes representation. Granular, transparent category systems are small UI choices with large equity impact.

Want the survey taxonomy or study materials?

I am happy to share a sanitized survey playbook and the coding rubric.

Email me ->