Journalism in the Age of AI: Balancing Innovation and Integrity

We live in a world of high risk and rapid change, so it is vital to be well-informed. Will Artificial Intelligence (AI) help or hinder you, especially in your search for reliable information and for analysis that makes sense of the situations you encounter? Would it bother you if you found out that what you are reading now had been written by AI, or with its help?

My work involves exploring, in cooperation with news organisations around the world, how to use this rapidly growing technology called AI responsibly to support the information we convey through our journalism. The short answer is that the technology is very powerful, but it is humans, you and I, who will decide whether it is a force for good or bad.

As a newspaper reader, you enjoy extensive coverage of AI in all aspects of your life, from its use in financial markets to health services, security and even entertainment such as streaming TV or music. But how is it changing the quality of all this information and how you receive it? And how is it changing journalism itself?

Innovative modern news organisations have been making full use of new communications technologies. Most of them moved online and shared their content on social media relatively early. Some use AI to automate simple data stories, personalise content and help moderate reader comments. Many are now also using Generative AI tools, such as ChatGPT, for translation, transcription and tweaking content to make it more appealing. Most of the time, most people won't notice it. But should readers worry about how this formidable and rapidly evolving technology is taking up residence in newsrooms, and in what they read?

Over the last six years at the London School of Economics and Political Science (LSE), we have researched the opportunities and risks of AI for journalism around the world. Through our JournalismAI project, we have created education and training resources as well as innovation programmes. With more than 12,000 people in our network, we work with newsrooms from Argentina to Australia, in English, Arabic, Spanish and Portuguese.

We have seen how AI technologies can supplement many kinds of work, including that of journalists, usually by doing relatively simple tasks at great speed and scale. In journalism, for instance, AI can affect every aspect of the business, from news gathering to content creation, distribution and revenue-raising. But the risks that come with it, such as dependency on technology companies and flaws in the algorithms or databases used, have also become evident.

As has become evident in many fields around the world, the explosion of Generative AI since 2022 may accelerate some of those trends and create new paradigms for news media everywhere. Generative AI is a step up from previous types of AI machine learning, automation and data-driven software. It can create content from a simple prompt and learn how to do it better. So it appears to be “intelligent”. Used responsibly, it is going to have a huge impact.

Some of Asia's biggest news organisations, for instance, are leading the GenAI charge in the region's media industry. You might have seen stories about GenAI-created newsreaders such as 'Joon' and 'Monica', who present on the Malaysian news channel Astro Awani. Those automated newscasters are the most visible sign of GenAI in newsrooms, but many other uses are being adopted all the time. In India, for example, the Hindustan Times has created a team that will focus on building a GenAI-driven newsbot, as well as behind-the-scenes tools to increase personalisation and boost subscriber recruitment.

The huge Japanese publisher Nikkei, which produces vast amounts of content every day for its subscribers, is using GenAI to create a product called Minutes. This condenses some stories into easily digested nuggets of news for readers in a hurry, opening up a whole new audience and bringing in fresh revenue.

But what about the impact on customers? It might sound like good news for hard-pressed media organisations, but what about you? Are you worried about these technologies? Would you trust journalism that has been created using AI? Do you worry about being targeted by the algorithms? How can you avoid disinformation and a flood of poor-quality articles generated by AI bots? How will you even be able to tell whether something was created by humans or AI? Trust in journalism is already fragile, and surveys have shown that the public is even more sceptical about newsrooms using AI.

But is it the fault of AI, or of the businesses and people applying it?

In fact, most of the mistakes with GenAI, including in journalism, have been made by humans who use AI tools in dumb ways or for malicious reasons. They use these tools to create poor-quality 'slime' in 'content farms', caring only about the clicks that earn them revenue.

Then there are journalists who use GenAI in stupid and deceitful ways, such as the German magazine that used AI to fabricate an interview with the F1 legend Michael Schumacher. Others want to spread false information or content that creates confusion and fear. There have always been people and governments who create bad journalism and disinformation; GenAI allows them to do it more easily and makes it harder to detect.

There are inherent risks in using Generative AI wherever it is applied, including in the media. It is a language machine, not a truth machine. It does not 'know' anything. It can be inaccurate, and it can make things up. So for everyone, journalists and news consumers alike, GenAI must be treated with caution.

News organisations should therefore draw up and follow guidelines on how their staff will use AI tools responsibly. Those guidelines might also help to reassure readers. It is also important that publishers are transparent with their audiences: tell readers if you use AI extensively. Most people will be pleased that the journalists they respect are being super-powered by AI tools, but those journalists should explain what they are doing and what measures they take to make sure their work remains trustworthy and reliable.

Ultimately this is about human choices, as it always has been with journalism. You will trust journalism if it delivers authentic reporting over time. You will trust it because humans, not machines, make the important judgements. New technologies like the Internet, social media and smartphones have delivered amazing products and services, and AI can help do more. So please continue to be cautious about AI-driven journalism; in fact, it is sensible to be careful about any information online. But for the news industry and news consumers, this technology represents a potential game-changer. It is important that the best news organisations use it to continue their good work at a time when we need strong, independent and effective journalism more than ever.

Oh, and yes, I did use ChatGPT in the research and writing of this article. It saved me time and threw up a couple of fresh ideas. You should try it yourself.


Charlie Beckett is a professor in the Department of Media and Communications. He is the founding director of Polis, the London School of Economics’ international journalism think-tank. Professor Beckett is currently leading the Polis Journalism and AI project. He was recently in Singapore to speak at the 15th anniversary reunion of the Asia Journalism Fellowship, a programme funded by Temasek Foundation and hosted by the Institute of Policy Studies, National University of Singapore.

