How multilingual researchers build understanding & trust with conversational AI

**Ongoing research & design, Fall 2025**

Research Paper

Problem

Multilingual researchers often struggle to express nuanced academic ideas through English-dominant AI systems. When switching between languages to clarify meaning, they experience mistranslations, hallucinations, and uneven tone or conceptual depth. These breakdowns reduce trust, increase cognitive load, and discourage the use of native languages — reinforcing inequities in global research communication.

Our research draws on survey data (n=24) and contextual inquiries with multilingual researchers across disciplines, revealing patterns of code-switching, verification loops, and adaptive prompting. Findings informed a set of design principles for AI text-entry interfaces that support language fluidity, conceptual accuracy, and user trust.

Overview

As AI tools like ChatGPT become essential in academia, multilingual researchers face barriers when using them for research and writing. Most AI systems are trained primarily in English, leading to misinterpretations, uneven translation quality, and cognitive fatigue when switching between languages. This project investigates how multilingual users navigate these breakdowns and identifies design opportunities for equitable, cross-lingual interaction. Three research questions guided the work:

  • How do multilingual researchers use generative AI to read, write, and translate academic texts?

  • What cognitive and linguistic challenges arise when switching between English and native languages?

  • How can interface design better support bilingual prompting and trust during AI-assisted research?

Methods

Mixed-Methods Study

Tools: ChatGPT, Qualtrics, Figma, Google Docs, Miro

Survey (n=24)

Quantified multilingual AI use patterns, challenges, and coping strategies.

Contextual Inquiries (n=5)

Observed multilingual researchers using AI in their real work contexts, surfacing code-switching, verification loops, and adaptive prompting behaviors. Results were tracked in an affinity diagram.

Affinity Mapping & Flow Analysis

Identified recurring breakdowns and user adaptations across participants.

Key Findings

  1. English Bias in Interaction: 80% of surveyed participants avoided using their native languages with AI due to lower accuracy and poor translation handling.

  2. Trust and Verification Loops: Participants frequently double-checked AI outputs against original sources or manually translated key terms.

  3. Cognitive Overhead in Language Switching: Switching between languages interrupted workflow and increased fatigue.

  4. Adaptive Workarounds: Users developed strategies such as bilingual scaffolding, example-based re-prompting, and simplified phrasing to regain control.

Design Implications

  1. Fluid Language Switching: Enable seamless transitions between languages within the same prompt.

  2. Transparent Verification Tools: Integrate mechanisms for fact-checking and translation comparison.

  3. Context-Aware Responses: Calibrate tone, detail, and explanation depth based on user intent.

  4. Multilingual Confidence Cues: Indicate model reliability for each language to reduce anxiety and mistrust.

Next Steps

  • Develop low-fidelity prototypes for text-entry scaffolds supporting bilingual interaction.

  • Conduct heuristic evaluation and usability testing to validate design feasibility and cognitive impact.

  • Explore partnerships with educational or research AI platforms to pilot multilingual prompt assistance.

My Role

UX Researcher & Designer

  • Led contextual inquiries and affinity mapping.

  • Synthesized breakdowns into actionable design requirements.

  • Co-authored and structured user requirements for equitable AI design.

  • Contributed to persona and storyboarding for multilingual researcher scenarios.

Keywords

UX Research • Generative AI • Bilingual Interaction • Cognitive Load • Human-AI Collaboration • Text Entry Design