AI Journalism & Credibility
Summary
With recent advancements in generative AI, AI systems are playing an increasing role in areas such as news writing and publication. Yet reactions to AI authorship vary: some people find human-written articles more credible than AI-written ones, while others do not.
To investigate this variability, we draw on the concept of machine heuristics: mental shortcuts whereby individuals apply common stereotypes about machines when judging the outcome of an interaction. We conduct an online experiment with 381 participants, asking them to assess the credibility of science news articles labeled as written by either a human journalist or generative AI (labeled author), where each article was itself actually written by either a human or an AI (actual author).
Our findings reveal that, on average, participants considered labeled-human authors more credible than labeled-AI authors, regardless of the articles' actual authorship. However, this effect is moderated by machine heuristics: the stronger a participant's machine heuristic, the more credible they perceived labeled-AI authors to be. Understanding these dynamics is critical for designing transparent communication and labeling practices that foster appropriate trust in AI-generated content.
My role
Project lead
Collaborators
Katelyn Mei, Donghoon Shin, Spencer Williams, Lucy Lu Wang, Gary Hsieh