
AI in the Newsroom: Professor Studying Responsible Uses

Journalists are increasingly using artificial intelligence (AI) to assist with news production. For example, speech recognition tools can accurately transcribe interview audio, saving reporters valuable time. However, the explosion of generative AI (tools that can produce new text, images, and other content) has brought one pressing question to the forefront: how can the industry use these tools responsibly?

Nick Diakopoulos, associate professor of communication studies in Northwestern’s School of Communication and (by courtesy) associate professor of computer science in Northwestern Engineering, is studying this issue through his “Generative AI in the Newsroom Project.” He is gathering case studies to showcase how the technology is — or isn’t — working.

“There needs to be some careful contemplation on how exactly this is going to work for news production, considering ethics and the limitations of these AI models,” Diakopoulos said. “Basic groundwork needs to be done to see how useful these kinds of tools are for news production.”

News outlets have taken different approaches to generative AI. Some are using large language models like ChatGPT to write articles (with human editors reviewing the content), while others have restricted their use. Diakopoulos predicts that most news operations will require a human in the loop before any AI-created content is published.

“There might be some use cases that are low risk, like if you have a 1 in 100 error, and the risk from that error is very low in terms of the harm it could cause,” he said. “I’m not suggesting journalists lower their accuracy standards, but I do think every use case is a little different in terms of how rigorously you need the human in the loop.”

Inaccuracy is perhaps the greatest limitation with generative AI. Diakopoulos performed a quick audit of Microsoft’s new Bing chatbot, which was integrated with ChatGPT in February. He asked it questions about recent news stories, like the Chinese spy balloon and the train derailment in Ohio. In his analysis, Diakopoulos found nearly half (47%) of the chatbot’s 15 responses were inaccurate.

Another concern with generative AI is the fear that it could replace journalists. Diakopoulos doesn’t think this will happen, pointing to the unoriginal text chatbots produce. Human journalists are also still needed to gather the data (such as through interviews) that an AI model can synthesize.

Rather than taking away jobs, Diakopoulos believes AI will complement human journalists and create new jobs involving tasks such as editing, story gathering, and fact-checking. AI-generated transcripts could also free up journalists’ time for reporting.

“Whether or not these technologies really impact labor depends on how these technologies are deployed by management,” Diakopoulos said. “Management could come in and say, ‘you’re going to write 10 times as many stories.’ Or they could say, ‘you’re going to write the same number of stories with 10 times as many sources because you can do 10 times as many interviews in the same amount of time.’ I think option B is an awesome vision for what the future could be if we could improve the quality of the news media, rather than just the quantity.”

Coding is not required to work with generative AI. However, familiarity with application programming interfaces (APIs) helps journalists systematically explore prompts, compare datasets, and collect data about how a model is responding, Diakopoulos said. Crafting natural-language prompts that draw the best responses out of ChatGPT is also a skill, a developing practice known as prompt engineering.
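
As a rough illustration of that API-driven workflow, here is a minimal sketch in Python, assuming the OpenAI client library and an API key in the environment; the model name, the headline-writing prompt variants, and the output file are hypothetical examples, not drawn from the project. It sends the same source text through several prompt wordings and records each response for side-by-side review.

    # Minimal sketch: run the same source text through several prompt
    # variants and log each response for comparison. Assumes the OpenAI
    # Python client and an OPENAI_API_KEY environment variable.
    import csv
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    article_text = "..."  # excerpt of the story being worked on (placeholder)

    # Hypothetical prompt variants to compare on the same source text.
    prompt_variants = {
        "plain": f"Write a one-sentence headline for this article:\n{article_text}",
        "neutral": f"Write a neutral, factual one-sentence headline for this article:\n{article_text}",
        "ap_style": f"Write a one-sentence headline in AP style for this article:\n{article_text}",
    }

    # Collect every response so the outputs can be reviewed side by side.
    with open("prompt_responses.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["variant", "response"])
        for name, prompt in prompt_variants.items():
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # model choice is an assumption
                messages=[{"role": "user", "content": prompt}],
                temperature=0.2,  # keep outputs fairly consistent across runs
            )
            writer.writerow([name, response.choices[0].message.content])

Keeping a log like this makes it easier to see which prompt wording holds up across many stories before any output goes anywhere near publication.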

Diakopoulos created a notebook to provide examples of how newsrooms can use ChatGPT and is also developing examples based on research in his lab. His goal is to share updates on the various case studies being developed as part of the project by late April.
