People remain “blissfully ignorant” of AI use in everyday messages, new research shows

A recent study published in Computers in Human Behavior has found that people judge others harshly when they know a message was written with artificial intelligence. In everyday situations, however, individuals rarely suspect that artificial intelligence was involved at all. When left in the dark about how a message was created, recipients assume a human wrote it and form positive impressions of the sender.

Generative artificial intelligence refers to computer programs that can produce realistic, human-like text based on simple user instructions. People increasingly use these tools (such as Claude, ChatGPT, and Gemini) to draft emails, social media posts, and text messages. Scientists Jiaqi Zhu and Andras Molnar wanted to explore how relying on these programs affects how we view one another in daily life.

Usually, writing a thoughtful message requires time and mental energy. These efforts signal a sender’s sincerity and investment in a relationship. Because text-generating programs remove this effort, the researchers wanted to know if the availability of these tools makes people more suspicious of the messages they receive.

Past studies have shown that people judge communicators negatively when they know a message was generated by artificial intelligence. However, in the real world, people rarely admit that they used a computer program to write their emails. Zhu and Molnar conducted their research to see how people form impressions in realistic situations where artificial intelligence use is kept secret or remains uncertain.

“In academic settings, discussion of generative AI has become unavoidable since ChatGPT’s release in late 2022. For most instructors, detection and regulation of AI use are now part of the job, and in this climate, it’s easy for vigilance to slide into full-on paranoia. Some instructors may even become overzealous, reading AI into writing that may be entirely human, as evidenced by the growing number of high-profile lawsuits against colleges over students who were failed or expelled based on suspected AI use,” said study author Andras Molnar, an assistant professor of psychology at the University of Michigan.

“But in my conversations with people outside academia, I realized we might be living in a bubble: what feels routine in academia may not reflect how people think elsewhere. That’s what motivated our study: we wanted to understand whether people suspect AI use in everyday contexts like emails, text messages, and social media profiles.”

To investigate these questions, Zhu and Molnar conducted a pair of online experiments. In the first experiment, the researchers recruited 647 adults in the United States and asked them to read a hypothetical email. The participants were randomly assigned to read one of four types of messages. These included a gratitude email from a friend, a job application from a nanny, a cover letter from a data analyst, or project feedback from a colleague.

The scientists also varied what participants were told about how the email had been written, creating four disclosure conditions. One group was told the sender wrote the message entirely on their own. Another group was told the sender used an artificial intelligence chatbot to generate the entire text.

A third group was told they could not be certain whether the message was human-written or generated by artificial intelligence. The final group received no information about the source of the message. This last condition mirrored how people typically receive emails in real life.

After reading the email, participants rated their social impression of the sender based on ten personal traits. These traits included friendliness, sincerity, authenticity, and trustworthiness. The researchers found that participants evaluated the sender much more negatively when they knew artificial intelligence was used to write the message.

This finding confirms that an explicit disclosure of artificial intelligence use damages a person’s social reputation. The researchers also analyzed the words participants used to describe their first impressions of the sender. When artificial intelligence was disclosed, participants used fewer positive words and more negative words to describe the sender.

Yet, when participants received no information about how the message was created, they evaluated the sender just as positively as when they knew a human wrote it. The scientists noted that participants in this group showed no natural suspicion. Even in the uncertain group, where the possibility of computer assistance was highlighted, participants formed impressions that were much closer to the human-written group than to the artificial intelligence group.

“In these ordinary, everyday interactions, people really dislike receiving AI-generated messages from others,” Molnar told PsyPost. “For example, we don’t want AI-generated apologies, no matter how polished they are, because they sound inauthentic and hollow; outsourcing deeply personal communication to AI may even feel like a betrayal and signal disrespect.”

“However, this ‘AI penalty’ seems to apply only when we know or strongly suspect that someone used AI to write the message. What our work shows is that without explicit disclosure (for example, a label indicating AI use), people generally don’t suspect AI in everyday situations and treat these messages as if they were fully human-written.”

The researchers conducted a second experiment seven months later to see if rising public familiarity with these text-generating programs would increase natural skepticism. They recruited a new sample of 654 adults in the United States. This time, they updated the scenarios to include a wider variety of communication styles. The new scenarios featured a social media post about a summer internship, text messages apologizing for a canceled dinner, and a detailed online dating profile.

In this second experiment, the scientists asked participants to estimate how much time and mental effort the sender put into the message. The researchers also asked how accurately the text reflected the sender’s true feelings. Participants who were told the text was generated by a computer program gave lower ratings on all three of these measures.

Participants who received no information about the message's source assumed the sender had invested just as much mental effort as a confirmed human writer. Statistical analysis showed that these perceptions of reduced effort and of how poorly the text reflected the sender's true feelings fully accounted for the penalty against artificial intelligence users. The second experiment replicated the findings of the first in full, showing that people remain blissfully ignorant of artificial intelligence use.

“What surprised us most was that people who themselves are heavy users of generative AI (who frequently send AI-generated or AI-edited messages) were not any more likely to suspect that others were using AI,” Molnar said. “We expected that more experience with these tools would make people more skeptical, but it didn’t. In other words, familiarity with AI doesn’t automatically translate into greater suspicion in everyday communication.”

“This finding matters because it suggests that people can outsource their writing to AI with relatively little risk of being detected, or even suspected. This creates an uneven playing field: people who don’t want to use AI, or can’t use it, may be at a disadvantage, while heavy users can come across as more articulate, polished, and effective without incurring negative perceptions — unless they admit that they used AI. And why would they?”

When discussing their findings, the scientists cautioned against a potential misreading of what participants were actually evaluating. Molnar explained that the study was designed to measure how people judge the author of a message, not the quality or effectiveness of the text itself. The focus was entirely on the social impression formed of the person behind the screen.

The study also has a few limitations that provide avenues for future research. The experiments relied on hypothetical scenarios, which means participants might react differently in real-life situations with actual stakes. The researchers also tested only full artificial intelligence generation, not partial use, such as asking a program to polish a few sentences.

Because the research focused on one-way communication, it is unknown how people might react during a live, back-and-forth conversation. Additionally, the study only included participants from the United States. The researchers are particularly interested in exploring what specific situations trigger suspicion in everyday life.

“Our next step is to understand what triggers vigilance and suspicion: what flips the switch between everyday communication and contexts like academia, where people are much more aware of possible AI use? Our current studies already suggest it’s not simply a matter of exposure or familiarity with these tools, since even heavy AI users aren’t more likely to suspect others,” Molnar said.

“So we’re now testing other explanations: for example, whether high-stakes situations (grades, hiring, evaluations) reliably increase vigilance, and whether people become more skeptical only after personally relevant negative experiences that teach them to watch for AI use. I would also love to collect data in other countries (our current experiments were conducted in the US) to see if there are any differences in skepticism and vigilance.”

The study, “Blissful (A)Ignorance: Despite the widespread adoption of AI in communication, people do not suspect AI use in realistic contexts,” was authored by Jiaqi Zhu and Andras Molnar.
