projects

Summary
With recent advances in generative AI, AI is playing an increasing role in areas such as news writing and publication. Some people find human-written articles more credible than AI-written ones, while others do not. To investigate this variability, we draw on the concept of machine heuristics: mental shortcuts in which individuals apply common stereotypes about machines when judging an interaction's outcome. We conducted an online experiment with 381 participants, asking them to assess the credibility of science news articles labeled as written by either a human journalist or generative AI (labeled author), while the articles themselves were actually written by either a human or an AI (actual author). Our findings reveal that, on average, participants considered labeled-human authors more credible than labeled-AI authors, regardless of the articles' actual authorship. However, this effect is moderated by machine heuristics: the stronger a participant's machine heuristic, the more credible they perceived labeled-AI authors to be. Understanding these dynamics is critical for designing transparent communication and labeling practices that foster appropriate trust in AI-generated content.
My role
Project lead
Collaborators
Katelyn Mei, Donghoon Shin, Spencer Williams, Lucy Lu Wang, Gary Hsieh
Summary
The rise of LLMs has ushered in a wave of conversational search engines. These interfaces allow people to seek information by engaging in dialogue with LLM-infused chatbots. However, because people tend to infer personalities from digital social interactions, and because personality has been shown to affect credibility, perceptions of a chatbot's personality may affect assessments of information credibility. In this study, we conducted a controlled online study with 190 participants. We found that in conversational search, a chatbot's perceived conscientiousness and agreeableness can increase the credibility of search results, while perceived extraversion and neuroticism can decrease it. This research contributes to our understanding of how conversational interfaces and their personality and persona designs can impact credibility. We also provide design implications for conversational search interfaces based on our findings.
My role
Project lead
Collaborators
Uran Oh, Gary Hsieh

Sample conversational exchanges

Summary
This project examines the role of interactivity enabled by LLM-based agents in communication skills training, focusing on patient communication in healthcare. Conversational agents can provide realistic role-play and feedback, yet prior work has largely emphasized training providers rather than patients. As patient–doctor relationships shift toward a cooperative model, patients’ active communication skills have become increasingly important but remain under-supported. Grounded in the validated PACE framework, we propose a scalable, interactive intervention system to address this gap in patient communication training.
My role
Project lead
Collaborators
Sean A. Munson, Gary Hsieh
Summary
The dissemination of scholarly research is critical, yet researchers often lack the time and skills to create engaging content for popular media such as short-form videos. To bridge this gap, we explore the use of generative AI to help researchers transform their academic papers into accessible short-form videos. Informed by a formative study (N=8) with science communicators and content creators, we designed PaperTok, an end-to-end system that automates the initial creative labor by generating script options and corresponding audiovisual content from a source paper, producing an integrated first draft. Researchers then refine the draft to match their preferences through further prompting. A mixed-methods user study (N=18) and a crowdsourced evaluation (N=100) demonstrate that PaperTok's workflow can help researchers create engaging and informative short-form videos. We also identified the need for more fine-grained control over the creation process, and we offer implications for future generative tools that support science outreach.
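The summary above does not include PaperTok's implementation, so the following is only a minimal sketch of what the script-drafting step could look like, assuming an OpenAI-style chat API. The model choice, prompt, and draft_script_options helper are illustrative assumptions, not the system's actual code.

```python
# Hedged sketch of LLM-based script drafting -- not PaperTok's actual code.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def draft_script_options(paper_text: str, n_options: int = 3) -> list[str]:
    """Generate several candidate short-form video scripts from a paper."""
    prompt = (
        "You are a science communicator. Write a 60-second short-form video "
        "script that explains this paper's key finding to a general audience:\n\n"
        + paper_text[:8000]  # truncate long papers to fit the context window
    )
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        n=n_options,      # several options give the researcher drafts to compare
        temperature=0.9,  # higher temperature for more varied drafts
    )
    return [choice.message.content for choice in response.choices]
```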
My role
Examined credibility and the role of human input in AI-assisted science communication. Conducted interviews with science communicators and content creators, and usability studies with HCI researchers to understand how human involvement shapes trust and quality.
Collaborators
Members of the Prosocial Computing Lab with equal contribution
Summary
This project explores a Virtual Study Assistant (VSA) that uses generative AI to improve accessibility and participation in research studies, particularly for historically underrepresented populations. The bilingual (English–Spanish) chatbot supports recruitment, inquiries, and screening while reducing the labor required for translation and adaptation. Designed in collaboration with translational health teams, the project critically examines how AI-driven language support can expand access without reinforcing existing biases or inequities.
My role
Project lead; part of the Social Entrepreneurship Fellowship
Collaborators
Weichao Yuwen, Jennifer Shannon, Psychedabout.ai
Summary
This project explores how theory-based intervention exercises can be translated into lightweight, everyday conversations through a text-based chatbot. Rather than building standalone clinical tools, we investigate conversational designs that reduce burden and fit naturally into daily routines. Focusing on adolescent and young adult (AYA) cancer survivors, we examine how intervention content should be tailored to a specific population without over-personalizing or increasing interactional complexity. Grounded in established psychosocial theories, this work bridges research findings and practical conversational interventions.
My role
Project lead
Collaborators
Seattle Children's Hospital, Nancy Lau, Gary Hsieh
Summary
The client-clinician relationship is crucial to the success of mental health treatment. However, finding a well-matched clinician remains challenging for many individuals, particularly those new to mental healthcare. In this study, we explore the experiences and challenges clients face as they navigate clinician selection and engagement. Through interviews with 22 participants who had interacted with multiple clinicians in the U.S., we identified key factors influencing their decisions to continue with or change clinicians, as well as the resources they used during their search. While participants acknowledged the importance of factors that prior research has linked to clinician match and therapeutic alliance, they found it difficult to assess these characteristics before engagement. Our findings highlight opportunities to improve the clinician-matching process, including addressing the limited effectiveness of existing technology and the need for personalized guidance.
My role
Project lead
Collaborators
John C. Fortney, Sean A. Munson
Abstract
Many digital applications offer avatar customization options, positively affecting user experience. However, auditory aspects of avatar customization have often been neglected, and their potential remains understudied. Inspired by prior research uncovering end-users' demands for voice customization, we seek to put the identified implications into practice and investigate end-users' voice preferences and behaviors in voice customization systems. To this end, we designed and deployed AVOCUS, a web application that enables users to search for specific voices or manipulate voice-related parameters to generate a voice similar to a target voice. Our findings suggest that (1) searching for specific voices using hashtags was perceived as easy, (2) customized voices generated through the voice reflection and voice parameter control functions yielded high satisfaction, and (3) participants tended to incorporate features of their desired voices when customizing their own.
Overview
As a follow-up to our CHI '22 publication, we developed a voice-customization web app and conducted user testing. In this app, users can upload a target voice they wish to resemble, and either upload their own voice or record it directly through the interface. By adjusting the similarity to the target voice and fine-tuning specific attribute values, the app generates a customized voice output within seconds. Whereas traditional voice generation typically requires AI models and large datasets, this study presents an alternative approach that does not rely on such resources.
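The overview above leaves the generation method unspecified; as a loose illustration of a model-free approach in the same spirit, the sketch below estimates each voice's median pitch with librosa and shifts the user's recording part-way toward the target, with the fraction set by a similarity slider. Every name here is a hypothetical stand-in, not AVOCUS's actual pipeline.

```python
# Hedged sketch of similarity-weighted pitch blending -- not AVOCUS's code.
import numpy as np
import librosa
import soundfile as sf

def blend_pitch_toward_target(user_wav: str, target_wav: str,
                              similarity: float = 0.5) -> None:
    """Shift the user's recording part-way toward the target voice's pitch.

    similarity in [0, 1]: 0 keeps the user's pitch, 1 matches the target's.
    """
    y_user, sr = librosa.load(user_wav, sr=None)
    y_target, sr_t = librosa.load(target_wav, sr=None)

    def median_f0(y, sr):
        # pyin returns a frame-wise F0 track with NaNs for unvoiced frames
        f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                fmax=librosa.note_to_hz("C7"), sr=sr)
        return np.nanmedian(f0)

    # Semitone distance between the two voices' median pitches
    n_steps = 12 * np.log2(median_f0(y_target, sr_t) / median_f0(y_user, sr))

    # Move only a fraction of the way, as set by the similarity slider
    y_out = librosa.effects.pitch_shift(y_user, sr=sr,
                                        n_steps=float(similarity * n_steps))
    sf.write("customized_voice.wav", y_out, sr)
```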
My role
Project lead
Collaborators
Seungjin Ha, Uran Oh

System Overview

Abstract

Although there is potential demand for customizing voices, most customization is limited to a figure's visual appearance (e.g., avatars). To better understand users' needs, we first conducted an online survey with 104 participants. We then conducted semi-structured interviews with a prototype involving 14 participants to identify design considerations for supporting voice customization. The results show a desire for voice customization, especially for non-face-to-face conversations with unfamiliar individuals. Findings also revealed that different voices are favored in different contexts, from an enhanced version of one's own voice for improving delivery to a completely different voice for protecting one's identity. As future work, we plan to extend this study by investigating voice synthesis techniques for end-users who wish to design their own voices for various contexts.


Project Overview

Through a survey study, we confirmed that people tend to adopt different personas suited to different situations. We then identified the demand for customized voices and used Amazon Mechanical Turk crowdsourcing to collect labels for 2,140 voice samples. After designing the UI in Figma, we developed a web app that allows users to search for voices based on these labels. Finally, through a user study, we gathered feedback on the system's usability and identified the specific personas of voices preferred in different situations.
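As a rough illustration of searching voices by crowd-sourced labels, a minimal inverted index could look like the sketch below. The class, labels, and sample IDs are hypothetical; the deployed app's data model may differ.

```python
# Hedged sketch of label-based voice search via an inverted index.
from collections import defaultdict

class VoiceSearchIndex:
    def __init__(self):
        # label -> set of voice sample IDs carrying that label
        self._index: dict[str, set[str]] = defaultdict(set)

    def add_sample(self, sample_id: str, labels: list[str]) -> None:
        """Index one voice sample under each of its crowd-sourced labels."""
        for label in labels:
            self._index[label.lower()].add(sample_id)

    def search(self, *labels: str) -> set[str]:
        """Return sample IDs matching ALL requested labels (AND semantics)."""
        sets = [self._index.get(label.lower(), set()) for label in labels]
        return set.intersection(*sets) if sets else set()

# Hypothetical usage with made-up labels
index = VoiceSearchIndex()
index.add_sample("voice_0001", ["calm", "low-pitched", "friendly"])
index.add_sample("voice_0002", ["bright", "friendly", "energetic"])
print(index.search("friendly", "calm"))  # {'voice_0001'}
```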


Prototype Design

Search Engine Interface

Abstract

Song signing is a practice through which both d/Deaf and non-d/Deaf individuals visually represent music and make it accessible through sign language and body movements. Although interest in song signing is growing, little is understood about what d/Deaf people value in song signing and how to create song signing productions they would consider acceptable. We conducted semi-structured interviews with 12 d/Deaf participants to gain a deeper understanding of what they value in music and song signing. We then interviewed 14 song signers to understand their experiences and processes in creating song signing performances. From this study, we identify three complex, interrelated layers of the song signing creation process and discuss how they can be supported to help bridge the cultural divide between d/Deaf and non-d/Deaf audiences and guide more culturally responsive creation of music.

Layers of work in song signing

Abstract

Patients often share information about their symptoms online by posting in web communities and on social media. While such posts have proven useful for improving psychological therapy experiences, little is known about whether and how the same approach can be applied to Korean. This paper investigates the performance of bidirectional language models on Korean text. Results show that both the multilingual BERT model and KoBERT (Korean BERT) perform well on binary sentiment classification, reaching an accuracy of 90%. In addition, bcLSTM models performed better at emotion recognition, which classifies casual texts into Paul Ekman's six basic emotions, than at positive/neutral/negative sentiment analysis. From this research, we conclude that to utilize sentiment analysis models in psychological therapy, an additional layer that detects specific psychological symptoms is necessary. As future work, we plan to propose a new deep learning model that detects emotional disorders.
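For readers unfamiliar with the setup, the sketch below shows binary sentiment classification with the multilingual BERT checkpoint via Hugging Face Transformers. The hyperparameters and example text are illustrative; the study's actual training procedure and data are not reproduced here.

```python
# Hedged sketch of binary sentiment classification with multilingual BERT.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # a Korean BERT could be swapped in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2)  # binary sentiment: negative / positive

def classify(texts: list[str]) -> list[int]:
    """Return 0 (negative) or 1 (positive) for each input text."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()

# The classification head here is untrained; in the study the model would
# first be fine-tuned on labeled posts before its predictions are meaningful.
print(classify(["오늘 상담이 정말 도움이 됐어요"]))  # "Today's session really helped"
```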

Overview

The Episode, Ewha project focuses on enhancing the accessibility and enjoyment of college festivals through an online platform.


Visit Our Website

For a detailed experience of our project and its features, visit the website:

Episode, Ewha - (Note: This link is currently closed)


Image Showcase

  • Screenshot of the first page of Episode, Ewha


  • Detailed functions of Episode, Ewha


My role

  • I was a member of the back-end development team.