People Prefer the Truth on Social Media


There is a lot of concern about how social media shapes the information people encounter and, in turn, their beliefs about the world. Misinformation about science and politics can influence voting behavior, support for research, and people's health-related decisions. Given the importance of this topic, it is not surprising that it has become a target of psychology research.

An interesting paper by Nicholas Fay, Keith Ransom, Bradley Walker, Piers Howe, Andrew Perfors, and Yoshihisa Kashima in a 2026 issue of the Journal of Personality and Social Psychology explores whether people find messages that are true more persuasive than messages that are false.

The researchers conducted four studies focused on the different reactions people have to true and false information. The first two studies focused on persuasion. In the first, a group of people generated social media messages designed to be persuasive about a set of topics. Some participants were asked to generate only messages they believed to be true, others only messages they believed to be untrue, and a third group was given no instruction about truthfulness. The second study used a large language model to generate messages on the same topics using the same three sets of instructions.

Then, a second group of participants encountered some of the statements generated by the humans and/or the large language model. They started by rating their degree of belief in the topics about which the statements were made. Then, they saw one of the statements and rated it along several dimensions: whether they thought the message was true or untrue, whether they would share it online, how relevant and familiar it was, and how much positive and negative emotion it evoked. Finally, they rated their belief in the topic again; the change in belief from before to after seeing the statement served as the measure of the statement's persuasiveness.

For both the human-generated and the LLM-generated statements, participants rated the messages designed to be true as more truthful than those designed to be untrue. People rated themselves as more willing to share messages designed to be true than messages designed to be untrue. The true messages increased people's belief in the topic. The one difference between the human-generated and LLM-generated messages was that untrue messages generated by people led to a small decrease in belief in the topic, while untrue messages generated by LLMs led to a small and nonsignificant increase. A statistical analysis of what drove persuasiveness found that the perceived truth of a message made it more persuasive.

A second set of studies paralleled these first two experiments, but the instructions specifically asked the people (or LLM) generating messages to make them as attention-getting as possible. When people rated these messages along the same dimensions as in the previous studies, the same pattern of results was obtained. Again, participants were able to distinguish the true from the false messages and found the true messages more persuasive than the false messages. The one difference in the results of this study is that the false messages generated by LLMs reliably decreased belief in the topics just as the human-generated messages did.

One final result is of interest. In the experiments in which people generated the statements, a third group was asked to generate persuasive or attention-getting statements but was not told to make those statements true or false. Participants generating persuasive statements tended to write statements that other people rated as true. When the instructions were to generate attention-getting statements, there was a greater tendency for the statements to be false (though those false statements were still less persuasive than the true ones).

These results are somewhat hopeful in the face of all of the concerns about social media. People seemed reasonably good at distinguishing between true and false statements, even ones that are generated by AI models. Not only are they able to distinguish between true and untrue statements, they are more persuaded by true statements than by false ones.

A complication with studies like these is that they examine only what happens when people see a particular statement once. This matters, because part of the way we learn what is true is through the consistency of what we hear from others. After all, most of us are not able to evaluate technical claims for ourselves. Instead, we have to trust other people. And so, the claims we hear made often by others (and particularly by experts) are the ones we are likely to believe. That means that even if we are initially persuaded more by true statements than by false ones, if we are surrounded by false information, we may eventually have trouble distinguishing between what is actually true and what we believe to be true simply because we encounter it often.




About the Author: Tony Ramos
