
Bias Amplification in AI Models

UX research to identify and mitigate amplification of human perceptual bias in generative models.

Project Overview

Role

Lead researcher: study design, audit workflow, and synthesis.

Timeline

From proposal through ongoing analysis; the audit was integrated into pre-launch checks.

Methods

Human-centered AI, prompt ladders, human baseline comparison.

Tools

Gemini, ChatGPT, DALL·E, R, Qualtrics.

Problem

Generative models amplify human perceptual defaults, lowering user trust and creating product risk.

Goal

Move from reactive fixes to a pre-launch audit: quantify amplification, diagnose its drivers, and add guardrails.

Impact metrics

Amplification score

Computed after the comparison dataset is complete.

Mitigation coverage

Audit steps integrated into pre-launch checks.

Decision outcomes

Prompts and copy reviewed before ship.

Process

1. Human baseline

Use prior categorization data to set a measurable baseline.

2. Model prompts

Test vague-to-specific prompts to observe prototype defaulting.

3. Compare and quantify

Compare outputs to the baseline to estimate amplification.
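The compare-and-quantify step can be sketched in code. This is a minimal illustration, not the project's actual analysis (which used R): it assumes outputs are labeled by category and treats amplification for a category as the model's share minus the human-baseline share. All labels and counts below are made up.

```python
# Hypothetical sketch of an amplification score.
# Assumption: human and model outputs have been coded into categories,
# e.g. by the prior categorization study and a manual review of model outputs.
from collections import Counter

def category_shares(labels):
    """Return each category's share of the total labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def amplification_score(human_labels, model_labels, category):
    """Model share minus human-baseline share for one category.
    Positive values mean the model over-represents the category
    relative to the human baseline."""
    human = category_shares(human_labels).get(category, 0.0)
    model = category_shares(model_labels).get(category, 0.0)
    return model - human

# Illustrative, made-up data: humans pick category "A" 60% of the time,
# while the model produces it 85% of the time.
human = ["A"] * 60 + ["B"] * 40
model = ["A"] * 85 + ["B"] * 15
print(round(amplification_score(human, model, "A"), 2))  # 0.25
```

A positive score flags a category the model amplifies beyond human defaults; running this per rung of the prompt ladder shows whether specificity narrows the gap.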

Research insights

Prototype defaulting

Vague prompts defaulted to an East-Asian prototype, echoing human perceptual defaults.

Prompt specificity

Enumerating sub-groups in the prompt increased output diversity, with stronger effects at higher specificity.

Audit before launch

A lightweight loop flags amplification early so teams choose fairer defaults.

Model results

ChatGPT output (image)
Google Gemini output (image)

Deliverables

Bias audit framework

Standardizes prompt ladders, review gates, and documentation.

Fairness checklist

Pre-launch decision gates for prompts, data notes, and UI copy.

Reproducible workflow

Repeatable documentation for cross-team audits.

Reflection

Bias amplification is a product-quality issue; a lightweight audit loop helps teams make launch decisions with clarity.

Ready to audit bias in your AI launch?

Use this framework to compare human and model outputs and reduce risk before release.

Download My Sample Pre-Launch Audit