Setup
We designed three closed card sorting tasks to assess how labeling granularity affected accurate categorization of photos of Asian individuals.

Uncovering bias in racial categorization through behavioral data.
Lead Researcher
2 years (academic project)
Closed card sort x3, quantitative analysis
R, Qualtrics
When products collapse "Asian" into a single checkbox, they assume a shared meaning, distorting user data and reinforcing East-Asian defaults.
Quantify perceptual bias and test whether granular sub-categories improve accuracy and inclusion in demographic design.
Accuracy improvement
Misclassification reduction
NSF grant informed by findings
We compared outcomes across the three studies to isolate where the labeling taxonomy drove error and where added guidance improved inclusion.
96 photos sorted into four racial categories to establish a perception baseline.
Asian vs Non-Asian classification to test in-group boundaries and default associations.
East, South, Southeast, or Other Asian options increased accuracy and representation.
Default mental models equated "Asian" with East Asian. Granular categories broadened inclusion.
Broad labels hid South- and Southeast-Asian representation and reduced data fidelity.
More specific options raised correct classifications and reduced misclassification.
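The analysis code itself isn't part of this page, but since R is listed in the toolset, a minimal sketch of the kind of comparison described above might look like the following. All counts are hypothetical placeholders, not the study's data; the test shown (a two-sample test of equal proportions via `prop.test`) is one standard way to check whether granular labels raise correct-classification rates relative to a broad label.

```r
# Illustrative sketch only -- hypothetical counts, not the study's results.
# Compares correct-classification rates under a broad "Asian" label
# (as in the baseline sort) versus granular sub-category labels.

broad_correct    <- 610   # hypothetical: correct sorts, broad label
broad_total      <- 960
granular_correct <- 790   # hypothetical: correct sorts, granular labels
granular_total   <- 960

# Two-sample test of equal proportions (chi-squared approximation)
result <- prop.test(x = c(granular_correct, broad_correct),
                    n = c(granular_total, broad_total))

result$estimate   # per-condition accuracy
result$p.value    # evidence that the two rates differ
```

The same per-photo accuracy data can also be broken out by sub-group (East, South, Southeast Asian) to show where broad labels hide misclassification, which is the fidelity point made above.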
The details of the award have been removed from the NSF website. Check out the coverage of the NSF grant on the University of Washington website.
Read the article
This project is grounded in social psychological research. Read more about the primary scholarly framework that underpins the study design.
Open the PDF
This project underscored how taxonomy design shapes representation. Granular, transparent category systems are small UI choices with large equity impact.
I am happy to share a sanitized survey playbook and the coding rubric.
Email me →