Artificial intelligence technologies have been disrupting the job market ever since they were introduced. Factory workers, editors, retail employees, photographers, and now journalists are at risk as demand for their skills shrinks.
AI has made its way into journalism through a new broadcasting station called Channel One.
Channel One is a fully AI-generated news station that streamed its first broadcast at the beginning of this year. Adam Mosam, a tech entrepreneur, launched the channel with director and producer Scott Zabielski, aiming to change the way people interact with news.
The pair market Channel One as a “personalized” news source: its algorithm gathers user data and serves specific news stories based on each viewer’s preferences. This kind of media personalization can trap viewers in a feedback loop.
Feedback loops occur when personal data determines which media content gets delivered, so users rarely encounter new viewpoints or ideologies.
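To see how quickly this narrowing can happen, consider a minimal sketch of a preference-driven recommender. The topic labels and the reinforcement rule below are hypothetical illustrations, not Channel One’s actual algorithm, which has not been made public:

```python
import random

# Hypothetical topic labels; a real recommender's categories would differ.
TOPICS = ["politics", "sports", "science", "local", "world"]

# Start neutral: every topic is equally likely to be served.
weights = {topic: 1.0 for topic in TOPICS}

def serve_story(weights):
    """Pick a topic with probability proportional to its weight."""
    topics = list(weights)
    return random.choices(topics, [weights[t] for t in topics])[0]

# Simulate a month of viewing by someone who watches whatever is served;
# each view makes that topic more likely to be served again.
for day in range(30):
    topic = serve_story(weights)
    weights[topic] *= 1.5  # reinforce the topic just served

total = sum(weights.values())
for topic, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{topic:8} {weight / total:6.1%} of future recommendations")
```

After thirty simulated days, one or two topics typically crowd out the rest, even though the viewer never stated an explicit preference. That is the feedback loop in miniature.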
The current model of news journalism is built around fact-checked information delivered without bias; anything that falls short of that standard risks being dismissed as fake news.
The definition of journalism sparks scholarly debate because it is such a multifaceted concept. However, one main criterion for a good journalist is universally agreed upon: media credibility. Media credibility can be boiled down to a few key ideas — believability, accuracy, and trustworthiness.
Local news anchors gain popularity because they build connections with their viewers. They foster trust by being active in the community and addressing local issues.
There is also a level of emotion newscasters bring that AI cannot replicate. For example, a Fox News reporter cried on air while discussing the severe damage and loss of life caused by a recent hurricane in Florida.
Ever since generative AI applications like ChatGPT gained worldwide popularity, writers and journalists have faced the threat of their work being used to train these systems. Channel One follows this trend, building its broadcasts on stories and live feeds from other news stations.
The media is already full of generative artificial intelligence, from Instagram’s Meta AI to Snapchat’s My AI.
Artificial intelligence applications like Sora, which produces high-quality video from text prompts, pose a clear threat: they could be used to generate and spread fake news. I’m sure we’ve all seen AI-generated videos or deepfakes, but with relentless technological innovation, these videos are becoming harder and harder to distinguish from reality.
Public opinion of generative AI has been negative, to say the least. In a study by the Reuters Institute, 52 percent of Americans polled said they are uncomfortable relying on news outlets that are mostly AI-generated.
I am not arguing that AI is evil or has no place in the newsroom. Using AI to generate closed captions for news broadcasts, for example, makes reports more accessible and is a valuable resource for viewers who are deaf or hard of hearing.
Used responsibly, artificial intelligence can help broadcasters and journalists work more efficiently, whether through applications like ChatGPT for story ideas or chatbots for data collection.
However, a fully AI-generated news station calls its own credibility into question, as viewers have no way to detect algorithmic bias. The station also loses trustworthiness and believability with audience members who value having a member of their own community deliver the daily news.
Artificial intelligence will not take over journalism because of the public’s widespread distrust of AI and the fact that machines cannot replicate human emotion.