Owners of social media platforms pose the biggest threat to a reliable online news ecosystem, followed by politicians and governments at home and abroad, according to a survey by the International Panel on the Information Environment (IPIE), an expert body modelled on the UN's Intergovernmental Panel on Climate Change.
The global information environment is at a “critical juncture”, said Philip Howard, a co-founder of the IPIE and professor of internet studies at the University of Oxford.
“One of the most pressing concerns highlighted by our survey is the influence of social media platform owners. Their control over content distribution and moderation policies significantly impacts the quality and integrity of information. The unchecked power of these entities poses a grave risk to the health of our global information environment,” he said.
The panel's findings were based on responses from 412 academic researchers in fields including the humanities, social sciences and computer sciences. Most respondents were based in the US and western Europe, though contributions also came from China, India, Nigeria and Brazil. Howard said the “information environment” referred to the organisations, individuals and information that made up millions of people's “daily news diet”.
The report does not identify specific tech platform owners but Howard, a co-author of the report, said Elon Musk’s ownership of X – the platform formerly known as Twitter – had raised concerns including reports of excessive promotion of Musk’s own tweets, while Frances Haugen, a whistleblower at Mark Zuckerberg’s Meta, has claimed that the owner of Facebook and Instagram gives a lower priority to moderating non-English-language content. Howard added that TikTok, which is owned by Beijing-based ByteDance, was another source of concern after lawmakers on both sides of the Atlantic voiced fears it could be susceptible to pressure from the Chinese government.
TikTok’s CEO, Shou Zi Chew, said last year that the platform’s owner was “not an agent of China or any other country”, while Meta has said it reviews content on Facebook and Instagram in more than 70 languages. X has been approached for comment.
About two-thirds of respondents to the survey expected the information environment to worsen in the future, compared with just over half in the previous survey. The IPIE launched as a non-governmental organisation last year after warning that biased algorithms, manipulation and misinformation were a “global and existential threat”.
The report warns that many politicians have “instrumentalised” conspiracy theories and misinformation for political gain, with the knock-on effect of eroding trust in reliable sources of information and democratic institutions.
Nearly two-thirds of the experts surveyed felt that AI-generated videos, voice, images and text had had a negative effect on the information environment, with the same proportion “convinced” that it magnified the problem of misinformation.
“Generative AI tools have offered novel opportunities to produce propaganda at scale,” the report said.
The top AI-related concern was AI-generated video, followed by voice. The survey found that experts in developing countries were more concerned about the negative effects of generative AI than experts in developed countries. A clear majority of respondents also found a positive side to AI, being “moderately hopeful” that it could bring benefits such as helping detect misleading content and aiding journalists to sift through large banks of data.
Asked how to counter the problems highlighted in the report, respondents recommended promoting a free and independent media; implementing digital literacy campaigns; encouraging factchecking; and labelling misleading content.