July 02 2024
https://www.media4change.co/methodology/expert-interview-are-newsrooms-successfully-regulating-ai-usage?page&methodology=expert-interview-are-newsrooms-successfully-regulating-ai-usage&post_type=methodology&name=expert-interview-are-newsrooms-successfully-regulating-ai-usage
Expert interview: are newsrooms successfully regulating AI usage?

Technology giants and the buzz around AI in other industries are pushing newsrooms to integrate these technologies into journalists’ work, even if newsrooms are not fully prepared. Researcher Justin Wang Ngai Yeung from the Oxford Internet Institute, together with colleagues, analyzed the AI usage guidelines of 37 newsrooms in 17 countries and surveyed their journalists about AI use in their work.

Justin Wang Ngai Yeung

The study shows that most guidelines address aspects important to journalistic practice: ensuring ethical AI use, guaranteeing that AI-generated content is reviewed and checked by a journalist, the need to disclose which content is created using these tools, and protecting user data. Yeung told BENDRA.lt that he noticed several gaps. For example, AI usage guidelines are often prepared by chief editors or the media organization’s administration without consulting journalists, so the guidelines do not always reflect the reality of journalists’ work.

“The biggest problem with the guidelines is that they’re really performative. So they’re not really about what you should do when you are handling a certain kind of, for example, sensitive data. Imagine you are going to do a search with AI. What are the ethics of doing these kinds of searches? Are you sure you’re protecting data? There are a lot of questions, but these guidelines tend not to answer them at all.”

The researcher notes that newsrooms lack knowledge about AI and how to integrate it. Most of the interviewed journalists said that AI is important, but few could define exactly what AI is or which of its systems are already in use in their newsroom. It was not uncommon for journalists to say that their data journalism colleagues in another department could answer these questions.

According to Yeung, knowledge barriers are not bad in themselves. For an organisation to function effectively, each department has different responsibilities and delves into information that not every employee needs to know. “The knowledge silos become a problem when they obstruct people from knowing more about AI or knowing more about how to use AI ethically. We saw that many journalists kind of outsource the responsibility to their data science team. I think it is one of the biggest challenges,” says the scientist.

According to the researcher, there are many ways to reduce knowledge barriers. First, there should be ongoing discussions in newsrooms about generative AI and how it is changing journalists’ work. Lectures and practical seminars on the programs journalists already use would also help build knowledge. According to Mr. Yeung, collaboration between reporters, editors, and the data journalism team is crucial, as the latter can often introduce the latest tools that ease journalists’ daily work and advise on how to use them ethically and legally.

“Journalists need to know the basics of AI. So far, we’ve observed very little awareness of that. In a way, it is not their problem, but a matter of journalism education in general,” says Mr. Yeung.

He notes that while the number of generative AI tools in the industry will grow, and with it the pressure to use them, it is important not to succumb to that pressure. The mere fact that a media outlet uses AI is not a sign of quality or trustworthiness. Trust in the media has declined, and audiences are suspicious of the media’s use of AI.

Before rushing to integrate generative AI tools into newsroom work, the researcher calls for first asking whether these tools help solve the media’s existing problems – questions of objectivity, financial independence, and trust in the media. The scientist also advises paying attention to who owns AI programs: most AI products are developed by for-profit companies. “For the newsrooms, it is important to know who is in control of these tools and what kind of data they are being trained on,” says Yeung.


Read more about the research conducted by Yeung and his colleagues in Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media

Also listen to an in-depth review of the results and a discussion with industry experts following the study.

Midjourney illustration